ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Four papers are included in Part One of the eighteenth report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper, "Content Analysis in Information Retrieval" by S. F. Weiss, presents the results of experiments aimed at determining the conditions under which content analysis improves retrieval results, as well…
Trends of Science Education Research: An Automatic Content Analysis
ERIC Educational Resources Information Center
Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien
2010-01-01
This study used scientometric methods to conduct an automatic content analysis of the development trends of science education research, based on the articles published in four journals (International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education) from 1990 to 2007. The…
A System for the Semantic Multimodal Analysis of News Audio-Visual Content
NASA Astrophysics Data System (ADS)
Mezaris, Vasileios; Gidaros, Spyros; Papadopoulos, Georgios Th; Kasper, Walter; Steffen, Jörg; Ordelman, Roeland; Huijbregts, Marijn; de Jong, Franciska; Kompatsiaris, Ioannis; Strintzis, Michael G
2010-12-01
News-related content is nowadays among the most popular types of content for users in everyday applications. Although the generation and distribution of news content have become commonplace, owing to the availability of inexpensive media capture devices and the development of media sharing services targeting both professional and user-generated news content, the automatic analysis and annotation required to support intelligent search and delivery of this content remain an open issue. In this paper, a complete architecture for knowledge-assisted multimodal analysis of news-related multimedia content is presented, along with its constituent components. The proposed analysis architecture employs state-of-the-art methods for analyzing each individual modality (visual, audio, text) separately, and proposes a novel fusion technique, based on the particular characteristics of news-related content, for combining the individual modality analysis results. Experimental results on broadcast news video illustrate the usefulness of the proposed techniques for the automatic generation of semantic annotations.
Paediatric Automatic Phonological Analysis Tools (APAT).
Saraiva, Daniela; Lousada, Marisa; Hall, Andreia; Jesus, Luis M T
2017-12-01
To develop the paediatric Automatic Phonological Analysis Tools (APAT) and to estimate their inter- and intrajudge reliability, content validity, and concurrent validity. The APAT were constructed using Excel spreadsheets with formulas. The tools were presented to an expert panel for content validation. The corpus used in the Portuguese standardized test Teste Fonético-Fonológico - ALPE, produced by 24 children with phonological delay or phonological disorder, was recorded, transcribed, and then inserted into the APAT. The reliability and validity of the APAT were analyzed. The APAT showed strong inter- and intrajudge reliability (>97%). Content validity was also analyzed (ICC = 0.71), and concurrent validity revealed strong correlations between the computerized and manual (traditional) methods. The development of these tools helps fill existing gaps in clinical practice and research, since no valid and reliable tools previously existed for automatic phonological analysis across different corpora.
A Theory of Term Importance in Automatic Text Analysis.
ERIC Educational Resources Information Center
Salton, G.; And Others
Most existing automatic content analysis and indexing techniques are based on word-frequency characteristics applied largely in an ad hoc manner. Contradictory requirements arise in this connection, in that terms exhibiting high occurrence frequencies in individual documents are often useful for high recall performance (to retrieve many relevant…
Hanson, M.L.; Tabor, C.D. Jr.
1961-12-01
A mass spectrometer for analyzing the components of a gas is designed which is capable of continuous automatic operation such as analysis of samples of process gas from a continuous production system where the gas content may be changing. (AEC)
Tahara, Haruna; Matsuda, Shun; Yamamoto, Yusuke; Yoshizawa, Hiroe; Fujita, Masaharu; Katsuoka, Yasuhiro; Kasahara, Toshihiko
2017-11-01
Various cytotoxicity assays measuring indicators such as enzyme activity, dye uptake, or cellular ATP content are often performed using 96-well microplates. However, recent reports show that cytotoxicity assays such as the ATP assay and MTS assay underestimate cytotoxicity when compounds such as anti-cancer drugs or mutagens induce cell hypertrophy whilst increasing intracellular ATP content. Therefore, we attempted to evaluate the reliability of a high-content image analysis (HCIA) assay that counts cell numbers in a 96-well microplate automatically, without using a cell-number indicator. We compared the cytotoxicity results for 25 compounds obtained from ATP, WST-8, Alamar blue, and HCIA assays with those measured directly using an automatic cell counter, repeating each experiment three times. The numbers of compounds showing low correlation between cell viability measured by the cytotoxicity assays and automatic cell counting (r² < 0.8 in at least 2 of 3 experiments) were as follows: ATP assay, 7; WST-8 assay, 2; Alamar blue assay, 3; HCIA cytotoxicity assay, 0. Compounds with poor correlation in the three assays other than the HCIA assay induced an increase in nuclear and cell size, whereas the correlation between cell viability measured by the automatic cell counter and by the HCIA assay was strong regardless of nuclear and cell size. Additionally, the correlation coefficients between IC50 values obtained from the automatic cell counter and from the cytotoxicity assays were as follows: ATP assay, 0.80; WST-8 assay, 0.84; Alamar blue assay, 0.84; and HCIA assay, 0.98. These results show that the HCIA cytotoxicity assay produces data similar to the automatic cell counter and is highly accurate in measuring cytotoxicity.
Content-based analysis of news video
NASA Astrophysics Data System (ADS)
Yu, Junqing; Zhou, Dongru; Liu, Huayong; Cai, Bo
2001-09-01
In this paper, we present a schema for content-based analysis of broadcast news video. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of video, audio, and text data available from the news programs. Based on these news video structure and content analysis technologies, a TV news video Library is generated, from which users can retrieve definite news story according to their demands.
Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming
2017-08-29
High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesion. The conventional method for quantifying DSBs is γH2AX foci counting, which requires manual adjustment of preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. A robust automatic alternative is therefore highly desirable. In this manuscript, we present a new method for quantifying DSBs that involves automatic image cropping, automatic foci segmentation, and fluorescence intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB-response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of the DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variation. Compared with conventional methods, the new method detected a larger percentage difference in foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on the DSB response were successfully quantified with the new method (p < 0.001). The advantages of this method in terms of reliability, automation, and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in the DSB response.
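As an illustration of the kind of automatic foci segmentation the abstract describes, here is a minimal sketch (not the authors' pipeline) that thresholds a nucleus image and counts connected bright spots with scipy.ndimage; the single relative-threshold parameter mirrors the paper's "one preset parameter", and all names and the toy data are hypothetical.

```python
import numpy as np
from scipy import ndimage

def count_foci(image, rel_threshold=0.5):
    """Count bright foci in a single-nucleus fluorescence image.

    rel_threshold is the one preset parameter: foci are pixels brighter
    than rel_threshold * image.max(). A deliberate simplification of the
    paper's segmentation step.
    """
    mask = image > rel_threshold * image.max()
    labels, n_foci = ndimage.label(mask)   # connected-component labelling
    return n_foci, image[mask].sum()       # foci count + total intensity

# toy example: two bright spots on a dark background
img = np.zeros((64, 64))
img[10:13, 10:13] = 1.0
img[40:43, 50:53] = 0.8
print(count_foci(img))   # -> (2, 16.2)
```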
ERIC Educational Resources Information Center
Association for Computing Machinery, New York, NY.
Papers in this Proceedings of the ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, Virginia, June 24-28, 2001) discuss: automatic genre analysis; text categorization; automated name authority control; automatic event generation; linked active content; designing e-books for legal research; metadata harvesting; mapping the…
Seamless presentation capture, indexing, and management
NASA Astrophysics Data System (ADS)
Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel
2005-10-01
Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of salient content. Experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
Content-based TV sports video retrieval using multimodal analysis
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru
2003-09-01
In this paper, we propose content-based video retrieval, a form of retrieval based on the semantic content of the video. Because video data is composed of multimodal information streams (visual, auditory and textual), we describe a strategy that uses multimodal analysis to automatically parse sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. Experimental results on TV sports video of football games indicate that multimodal analysis is effective for video retrieval, allowing users to quickly browse tree-like video clips or input keywords within a predefined domain.
Old document image segmentation using the autocorrelation function and multiresolution analysis
NASA Astrophysics Data System (ADS)
Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy
2013-01-01
Recent progress in the digitization of heterogeneous collections of ancient documents has rekindled new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The proposed method has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it automatically adapts to the image content. In this paper, we first detail our proposal to characterize the content of old documents by extracting autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images, which span six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We find a mean homogeneity accuracy of 85%. These results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
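The autocorrelation descriptors can be pictured with a short sketch. It assumes the Wiener-Khinchin relation (the 2-D autocorrelation is the inverse FFT of the power spectrum) and uses crude downsampling for the multiresolution pyramid; this is a simplification for illustration, not the paper's five descriptors.

```python
import numpy as np

def autocorrelation(patch):
    """2-D autocorrelation via Wiener-Khinchin: IFFT(|FFT(patch)|^2),
    normalised by the zero-lag value."""
    f = np.fft.fft2(patch - patch.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    zero_lag = ac[0, 0] if ac[0, 0] != 0 else 1.0   # guard flat patches
    return np.fft.fftshift(ac) / zero_lag

def multiresolution_features(page, levels=3, size=64):
    """Toy multiresolution descriptor: autocorrelation of a window at
    successively halved resolutions (assumes the page stays larger than
    `size` at every level)."""
    feats = []
    for _ in range(levels):
        feats.append(autocorrelation(page[:size, :size]).ravel())
        page = page[::2, ::2]   # crude 2x downsampling per level
    return np.concatenate(feats)
```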
ERIC Educational Resources Information Center
Dorça, Fabiano A.; Araújo, Rafael D.; de Carvalho, Vitor C.; Resende, Daniel T.; Cattelan, Renan G.
2016-01-01
Content personalization in educational systems is a growing research area. Studies show that students tend to have better performances when the content is customized according to their preferences. One important aspect of students' particularities is how they prefer to learn. In this context, students' learning styles should be considered, due…
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective, and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
Woodman, Geoffrey F.; Luck, Steven J.
2007-01-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing. PMID:17469973
NASA Astrophysics Data System (ADS)
Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric
2011-03-01
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases, based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. The classification tool is based on a multi-content analysis (MCA) framework, first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to classify digital mammograms automatically with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision-tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these weak classifiers are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one strong classifier show good accuracy with high true-positive rates. For the four categories the results are: TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
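The boosting step the abstract describes, combining decision-tree "weak classifiers" into a "strong classifier", can be sketched with scikit-learn. The synthetic features below are placeholders for the MCA texture and density features; this illustrates AdaBoost in general, not the authors' code.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder for the texture/density features described in the abstract.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weak classifier: a decision stump (error rate merely below 50%).
weak = DecisionTreeClassifier(max_depth=1)

# AdaBoost iteratively reweights the samples and combines the stumps into
# a "strong classifier" with a much lower global error rate.
# (Use base_estimator= instead of estimator= on scikit-learn < 1.2.)
strong = AdaBoostClassifier(estimator=weak, n_estimators=100, random_state=0)
strong.fit(X_tr, y_tr)
print("test accuracy:", strong.score(X_te, y_te))
```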
ERIC Educational Resources Information Center
Lee, Cynthia; Wong, Kelvin C. K.; Cheung, William K.; Lee, Fion S. L.
2009-01-01
The paper first describes a web-based essay critiquing system developed by the authors using latent semantic analysis (LSA), an automatic text analysis technique, to provide students with immediate feedback on content and organisation for revision whenever there is an internet connection. It reports on its effectiveness in enhancing adult EFL…
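A minimal sketch of the LSA technique such a system builds on: project TF-IDF vectors of essays and a reference text into a low-rank semantic space and score each essay by cosine similarity. The texts and the rank are placeholders; this is an illustration of LSA, not the authors' system.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = ["The essay should discuss causes and effects of climate change"]
essays = ["Climate change is driven by greenhouse gas emissions",
          "My summer holiday at the beach was great"]

# TF-IDF term-document matrix, then a low-rank SVD = latent semantic space.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reference + essays)
Z = TruncatedSVD(n_components=2).fit_transform(X)  # tiny corpus, tiny rank

# Cosine similarity to the reference approximates content coverage.
for essay, score in zip(essays, cosine_similarity(Z[1:], Z[:1]).ravel()):
    print(round(score, 2), essay[:40])
```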
Facilitator control as automatic behavior: A verbal behavior analysis
Hall, Genae A.
1993-01-01
Several studies of facilitated communication have demonstrated that the facilitators were controlling and directing the typing, although they appeared to be unaware of doing so. Such results shift the focus of analysis to the facilitator's behavior and raise questions regarding the controlling variables for that behavior. This paper analyzes facilitator behavior as an instance of automatic verbal behavior, from the perspective of Skinner's (1957) book Verbal Behavior. Verbal behavior is automatic when the speaker or writer is not stimulated by the behavior at the time of emission, the behavior is not edited, the products of behavior differ from what the person would produce normally, and the behavior is attributed to an outside source. All of these characteristics appear to be present in facilitator behavior. Other variables seem to account for the thematic content of the typed messages. These variables also are discussed. PMID:22477083
Trends of Science Education Research: An Automatic Content Analysis
NASA Astrophysics Data System (ADS)
Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien
2010-08-01
This study used scientometric methods to conduct an automatic content analysis of the development trends of science education research, based on the articles published in the four journals International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education from 1990 to 2007. A multi-stage clustering technique was employed to investigate the topics, development trends, and contributors that constituted science education as a research field. The study found that Conceptual Change & Concept Mapping was the most studied topic, although its number of publications declined slightly in the 2000s. Studies on the themes of Professional Development, Nature of Science and Socio-Scientific Issues, and Conceptual Change and Analogy were found to be gaining attention over the years. The study also found that, as reflected in the most cited references, the supporting disciplines and theories of science education research are constructivist learning, cognitive psychology, pedagogy, and philosophy of science.
Analysis of Content Shared in Online Cancer Communities: Systematic Review
van Eenbergen, Mies C; van de Poll-Franse, Lonneke V; Krahmer, Emiel; Verberne, Suzan; Mols, Floortje
2018-01-01
Background: The content that cancer patients and their relatives (ie, posters) share in online cancer communities has been researched in various ways. In the past decade, researchers have used automated analysis methods in addition to manual coding methods. Patients, providers, researchers, and health care professionals can learn from experienced patients, provided that their experience is findable. Objective: The aim of this study was to systematically review all relevant literature that analyzes user-generated content shared within online cancer communities. We reviewed the quality of available research and the kind of content that posters share with each other on the internet. Methods: A computerized literature search was performed via PubMed (MEDLINE), PsycINFO (5 and 4 stars), Cochrane Central Register of Controlled Trials, and ScienceDirect. The last search was conducted in July 2017. Papers were selected if they included the following terms: (cancer patient) and (support group or health communities) and (online or internet). We selected 27 papers and then subjected them to a 14-item quality checklist independently scored by 2 investigators. Results: The methodological quality of the selected studies varied: 16 were of high quality and 11 were of adequate quality. Of those 27 studies, 15 were manually coded, 7 automated, and 5 used a combination of methods. The best results can be seen in the papers that combined both analytical methods. The number of analyzed posts ranged from 200 to 1,500,000; the number of analyzed posters ranged from 75 to 90,000. The studies analyzing large numbers of posts mainly related to breast cancer, whereas those analyzing small numbers were related to other types of cancers. A total of 12 studies involved some or entirely automatic analysis of the user-generated content. All the authors referred to two main content categories: informational support and emotional support. In all, 15 studies reported only on the content, 6 studies explicitly reported on content and social aspects, and 6 studies focused on emotional changes. Conclusions: In the future, increasing amounts of user-generated content will become available on the internet. The results of content analysis, especially of the larger studies, give detailed insights into patients’ concerns and worries, which can then be used to improve cancer care. To make the results of such analyses as usable as possible, automatic content analysis methods will need to be improved through interdisciplinary collaboration. PMID:29615384
MedSynDiKATe--design considerations for an ontology-based medical text understanding system.
Hahn, U.; Romacker, M.; Schulz, S.
2000-01-01
MedSynDiKATe is a natural language processor for automatically acquiring knowledge from medical finding reports. The content of these documents is transferred to formal representation structures, which constitute a corresponding text knowledge base. The general system architecture we present integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts. The strong demands MedSynDiKATe places on the availability of expressive knowledge sources are met by two alternative approaches to (semi-)automatic ontology engineering. PMID:11079899
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays an increasingly important role in the digital content available today, creating a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g. monitoring eating habits). The feedback provided by all these audio applications has led to practical enhancements of the library.
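For orientation, a minimal usage sketch with the module and function names of the 2015-era API the paper describes (later releases renamed these, e.g. audioBasicIO.read_audio_file and ShortTermFeatures.feature_extraction); the filename is a placeholder.

```python
# Sketch assuming the 2015-era pyAudioAnalysis API; names may differ
# in later releases of the library.
from pyAudioAnalysis import audioBasicIO, audioFeatureExtraction

fs, signal = audioBasicIO.readAudioFile("speech.wav")  # sample rate + samples

# Short-term feature extraction: 50 ms windows with a 25 ms step, yielding
# one feature vector (ZCR, energy, MFCCs, ...) per window.
features = audioFeatureExtraction.stFeatureExtraction(
    signal, fs, 0.050 * fs, 0.025 * fs)
print(features.shape)   # (n_features, n_windows)
```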
Content-based analysis of Ki-67 stained meningioma specimens for automatic hot-spot selection.
Swiderska-Chadaj, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Lorent, Malgorzata
2016-10-07
Hot-spot based examination of immunohistochemically stained histological specimens is one of the most important procedures in pathomorphological practice. The development of image acquisition equipment and computational units allows for the automation of this process. However, many technical problems occur in everyday histological material, which increases the complexity of the task, so a full context-based analysis of histological specimens is also needed for the quantification of immunohistochemically stained specimens. One of the most important reactions is the Ki-67 proliferation marker in meningiomas, the most frequent intracranial tumour. The aim of our study is to propose a context-based analysis of Ki-67 stained meningioma specimens for the automatic selection of hot-spots. The proposed solution is based on textural analysis, mathematical morphology, feature ranking and classification, as well as on the proposed hot-spot gradual extinction algorithm, which allows for the proper detection of a set of hot-spot fields. The designed whole-slide image processing scheme eliminates artifacts such as hemorrhages, folds or stained vessels from the region of interest. To validate the automatic results, a set of 104 meningioma specimens was selected and twenty hot-spots within them were identified independently by two experts. The Spearman rho correlation coefficient was used to compare the results, which were also analyzed with the help of a Bland-Altman plot. The results show that most of the cases (84) were examined properly automatically, with at most two fields of view affected by a technical problem; 13 had three such fields, and only seven specimens did not meet the requirements for automatic examination. Generally, the automatic system identifies hot-spot areas, and especially their maximum points, better. Analysis of the results confirms the very high concordance between the automatic Ki-67 examination and the experts' results, with a Spearman rho higher than 0.95. The proposed hot-spot selection algorithm, with an extended context-based analysis of whole-slide images and the hot-spot gradual extinction algorithm, provides an efficient tool for simulating manual examination. The presented results confirm that the automatic examination of Ki-67 in meningiomas could be introduced in the near future.
ERIC Educational Resources Information Center
Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.
2009-01-01
This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…
Machine for Automatic Bacteriological Pour Plate Preparation
Sharpe, A. N.; Biggs, D. R.; Oliver, R. J.
1972-01-01
A fully automatic system for preparing poured plates for bacteriological analyses has been constructed and tested. The machine can make decimal dilutions of bacterial suspensions, dispense measured amounts into petri dishes, add molten agar, mix the dish contents, and label the dishes with sample and dilution numbers at the rate of 2,000 dishes per 8-hr day. In addition, the machine can be programmed to select different media so that plates for different types of bacteriological analysis may be made automatically from the same sample. The machine uses only the components of the media and sterile polystyrene petri dishes; requirements for all other materials, such as sterile pipettes and capped bottles of diluents and agar, are eliminated. PMID:4560475
Term-Weighting Approaches in Automatic Text Retrieval.
ERIC Educational Resources Information Center
Salton, Gerard; Buckley, Christopher
1988-01-01
Summarizes the experimental evidence that indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results superior to those obtained with more elaborate text representations, and provides baseline single term indexing models with which more elaborate content analysis procedures can be…
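The single-term weighting this report summarizes is classically instantiated as tf-idf with cosine normalization. A minimal sketch of one common variant among those Salton and Buckley compare (the toy corpus is illustrative):

```python
import math
from collections import Counter

docs = [["automatic", "text", "retrieval"],
        ["automatic", "content", "analysis"],
        ["term", "weighting", "in", "text", "retrieval"]]

N = len(docs)
# document frequency: number of documents containing each term
df = Counter(term for doc in docs for term in set(doc))

def tfidf(doc):
    """Classic single-term weights: tf * log(N / df), cosine-normalised."""
    tf = Counter(doc)
    w = {t: tf[t] * math.log(N / df[t]) for t in tf}
    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
    return {t: v / norm for t, v in w.items()}

print(tfidf(docs[0]))
```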
Documents Similarity Measurement Using Field Association Terms.
ERIC Educational Resources Information Center
Atlam, El-Sayed; Fuketa, M.; Morita, K.; Aoe, Jun-ichi
2003-01-01
Discussion of text analysis and information retrieval and measurement of document similarity focuses on a new text manipulation system called FA (field association)-Sim that is useful for retrieving information in large heterogeneous texts and for recognizing content similarity in text excerpts. Discusses recall and precision, automatic indexing…
Automatic textual annotation of video news based on semantic visual object extraction
NASA Astrophysics Data System (ADS)
Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem
2003-12-01
In this paper, we present our work for automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross modal image-text thesaurus. These thesaurus represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and Tv logos. In the first part, we present our work for efficient face detection and recogniton with automatic name generation. This method allows us also to suggest the textual annotation of shots close-up estimation. On the other hand, we were interested to automatically detect and recognize different Tv logos present on incoming different news from different Tv Channels. This work was done jointly with the French Tv Channel TF1 within the "MediaWorks" project that consists on an hybrid text-image indexing and retrieval plateform for video news.
Automatic processing of spoken dialogue in the home hemodialysis domain.
Lacson, Ronilda; Barzilay, Regina
2005-01-01
Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
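A minimal sketch of the general approach: represent each dialogue turn by shallow features and train a supervised classifier that maps turns to semantic categories. The turns, labels, and bag-of-words features below are illustrative stand-ins for the paper's linguistic and contextual features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled turns; the categories here are illustrative only.
turns = ["my blood pressure was high this morning",
         "can we move the session to Friday",
         "I felt dizzy after the treatment",
         "what time does the nurse arrive"]
labels = ["clinical", "scheduling", "clinical", "scheduling"]

# Bag-of-words stand-in for the paper's linguistic/contextual features.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(turns, labels)
print(clf.predict(["my pressure was high after the treatment"]))
# expected: ['clinical']
```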
TRAP: automated classification, quantification and annotation of tandemly repeated sequences.
Sobreira, Tiago José P; Durham, Alan M; Gruber, Arthur
2006-02-01
TRAP, the Tandem Repeats Analysis Program, is a Perl program that provides a unified set of analyses for the selection, classification, quantification and automated annotation of tandemly repeated sequences. TRAP uses the results of the Tandem Repeats Finder program to perform a global analysis of the satellite content of DNA sequences, permitting researchers to easily assess the tandem repeat content for both individual sequences and whole genomes. The results can be generated in convenient formats such as HTML and comma-separated values. TRAP can also be used to automatically generate annotation data in the format of feature table and GFF files.
Fiber Longitudinal Measurements for Predicting White Speck Contents of Dyed Cotton Fabrics
USDA-ARS?s Scientific Manuscript database
Fiber Image Analysis System (FIAS) was developed to provide an automatic method for measuring cotton maturity from fiber snippets or cross-sections. An uncombed cotton bundle is chopped and sprayed onto a microscope slide. The snippets are imaged sequentially on a microscope and measured with custo...
Fast Spatio-Temporal Data Mining from Large Geophysical Datasets
NASA Technical Reports Server (NTRS)
Stolorz, P.; Mesrobian, E.; Muntz, R.; Santos, J. R.; Shek, E.; Yi, J.; Mechoso, C.; Farrara, J.
1995-01-01
Use of the UCLA CONQUEST (CONtent-based Querying in Space and Time) system is reviewed for the performance of automatic cyclone extraction and the detection of spatio-temporal blocking conditions on massively parallel processors (MPP). CONQUEST is a data analysis environment for knowledge discovery and data mining that supports high-resolution climate modeling.
Automatic Digital Content Generation System for Real-Time Distance Lectures
ERIC Educational Resources Information Center
Iwatsuki, Masami; Takeuchi, Norio; Kobayashi, Hisato; Yana, Kazuo; Takeda, Hiroshi; Yaginuma, Hisashi; Kiyohara, Hajime; Tokuyasu, Akira
2007-01-01
This article describes a new automatic digital content generation system we have developed. Recently some universities, including Hosei University, have been offering students opportunities to take distance interactive classes over the Internet from overseas. When such distance lectures are delivered in English to Japanese students, there is a…
The content and process of self-stigma in people with mental illness.
Chan, Kevin K S; Mak, Winnie W S
2017-01-01
Although many individuals with mental illness may self-concur with the "content" of stigmatizing thoughts at some point in their lives, they may have varying degrees of habitual recurrence of such thoughts, which could exacerbate their experience of self-stigma and perpetuate its damaging effects on their mental health. Although it is important to understand the "process" by which self-stigmatizing thoughts are sustained and perpetuated over time, no research to date has conceptualized and distinguished the habitual process of self-stigma from its cognitive content. Thus, the present study aims to develop and validate a measure of the habitual process of self-stigma: the Self-stigmatizing Thinking's Automaticity and Repetition Scale (STARS). In this study, 189 individuals with mental illness completed the STARS, along with several explicit (self-report) and implicit (response latency) measures of theoretically related constructs. Consistent with theories of mental habit, an exploratory factor analysis of the STARS items identified a 2-factor structure representing the repetition (4 items) and automaticity (4 items) of self-stigmatization. The reliability of the STARS was supported by a Cronbach's α of .90, and its validity was supported by significant correlations with theoretical predictors (content of self-stigma, experiential avoidance, and lack of mindfulness), expected outcomes (decreased self-esteem, life satisfaction, and recovery), and the Brief Implicit Association Tests measuring the automatic processing of self-stigmatizing information. With the validation of the STARS, future research can consider both the content and process of self-stigma, so that a richer picture of its development, perpetuation, and influence can be captured.
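The reported reliability is Cronbach's α, which for k items is α = k/(k-1) · (1 - Σ item variances / variance of the total score). A quick sketch with made-up scale data (the scores are hypothetical, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# toy data: 5 respondents x 4 items (e.g., one 4-item STARS subscale)
scores = [[4, 4, 5, 4], [2, 3, 2, 2], [5, 5, 4, 5], [1, 2, 1, 1], [3, 3, 3, 4]]
print(round(cronbach_alpha(scores), 2))
```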
NASA Astrophysics Data System (ADS)
Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry
2013-03-01
To ensure an immersive yet comfortable experience, significant work is required during post-production to adapt stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize the depth across shots. On the other hand, to prevent edge violation, specific re-convergence is required and, depending on the viewing conditions, floating windows need to be positioned. To simplify this time-consuming work, we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is automatically evaluated. This point of interest is used for depth matching, i.e. to position the objects of interest of consecutive shots in the same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge violation, hyper-convergence and hyper-divergence. Floating windows are also automatically positioned. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.
Colomer Granero, Adrián; Fuentes-Hurtado, Félix; Naranjo Ornedo, Valery; Guixeres Provinciale, Jaime; Ausín, Jose M.; Alcañiz Raya, Mariano
2016-01-01
This work focuses on finding the most discriminatory or representative features that allow commercials to be classified as having negative, neutral or positive effectiveness based on the Ace Score index. For this purpose, an experiment involving forty-seven participants was carried out, in which electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR) and respiration data were acquired while subjects watched 30 minutes of audiovisual content. This content was composed of a submarine documentary and nine commercials (one of them the ad under evaluation). After signal pre-processing, four sets of features were extracted from the physiological signals using different state-of-the-art metrics. These features, computed in the time and frequency domains, are the inputs to several basic and advanced classifiers. An average of 89.76% of the instances was correctly classified according to the Ace Score index. The best results were obtained by a classifier combining AdaBoost and Random Forest with automatic feature selection; the selected features were those extracted from the GSR and HRV signals. These results are promising for audiovisual content evaluation by means of physiological signal processing. PMID:27471462
Phase transformations of siderite ore by the thermomagnetic analysis data
NASA Astrophysics Data System (ADS)
Ponomar, V. P.; Dudchenko, N. O.; Brik, A. B.
2017-02-01
The thermal decomposition of Bakal siderite ore (which consists of magnesium siderite and traces of ankerite) was investigated by thermomagnetic analysis. The analysis was carried out using a laboratory-built facility that automatically registers sample magnetization as a function of temperature (heating/cooling rate 65°/min, maximum temperature 650 °C) at low and high oxygen content. At low oxygen content, the Curie temperature gradually decreased with each successive heating/cooling cycle. At high oxygen content, the Curie temperature decreased after the second heating/cooling cycle and did not change in subsequent cycles. The final Curie temperature for both modes was 320 °C. The saturation magnetization of the obtained samples increased up to 20 A·m²/kg. The final product of the phase transformation in both modes was magnesioferrite, and the intermediate phase of the thermal decomposition of Bakal siderite ore was shown to be magnesiowüstite.
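One common way to read a Curie temperature off a thermomagnetic curve is to locate the steepest drop in magnetization, i.e. the minimum of dM/dT. The abstract does not state which estimator was used, so the sketch below is a generic convention applied to synthetic data.

```python
import numpy as np

def curie_temperature(T, M):
    """Estimate the Curie point of a thermomagnetic curve M(T) as the
    temperature of the steepest drop in magnetization (min dM/dT).
    A common convention; not necessarily the paper's estimator."""
    T = np.asarray(T, float)
    dMdT = np.gradient(np.asarray(M, float), T)
    return T[np.argmin(dMdT)]          # most negative slope

# synthetic curve: magnetization collapsing around ~320 C
T = np.linspace(20, 650, 300)
M = 20.0 / (1.0 + np.exp((T - 320.0) / 15.0))   # in A m^2/kg
print(curie_temperature(T, M))                  # ~320
```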
Oscillatory brain dynamics associated with the automatic processing of emotion in words.
Wang, Lin; Bastiaansen, Marcel
2014-10-01
This study examines the automaticity of processing the emotional aspects of words, and characterizes the oscillatory brain dynamics that accompany this automatic processing. Participants read emotionally negative, neutral and positive nouns while performing a color detection task in which only perceptual-level analysis was required. Event-related potentials and time-frequency representations were computed from the concurrently measured EEG. Negative words elicited a larger P2 and a larger late positivity than positive and neutral words, indicating deeper semantic/evaluative processing of negative words. In addition, sustained alpha power suppressions were found for the emotional compared to neutral words, in the time range from 500 to 1000 ms post-stimulus. These results suggest that sustained attention was allocated to the emotional words, whereas the attention allocated to the neutral words was released after an initial analysis. This seems to hold even when the emotional content of the words is task-irrelevant.
Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text
NASA Astrophysics Data System (ADS)
Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.
2015-12-01
We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Text mining can help us extract relevant knowledge from this plethora of biomedical text, and the ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been increased interest in automatic biomedical concept extraction [1, 2] and in intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, called Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records; our investigation led us to extend its automatic knowledge extraction process to the biomedical research domain by improving the ontology-guided information extraction process. We will describe our experience and implementation of our system and share lessons learned from our development. We will also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org
NASA Astrophysics Data System (ADS)
Mantini, D.; Alleva, G.; Comani, S.
2005-10-01
Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.
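A minimal sketch of the overall idea, assuming FastICA for the source separation and a frequency-content test that keeps components whose dominant spectral peak falls in the fetal heart-rate band. The paper's dedicated algorithms also analyse each source's temporal structure, which this toy selection omits.

```python
import numpy as np
from sklearn.decomposition import FastICA

def fetal_components(X, fs, f_lo=1.7, f_hi=3.0):
    """X: (n_samples, n_channels) MCG recording, fs: sampling rate in Hz.
    Keep sources whose dominant frequency lies in the fetal heart-rate
    band (~100-180 bpm). Toy stand-in for the paper's classifiers."""
    ica = FastICA(n_components=X.shape[1], random_state=0)
    S = ica.fit_transform(X)                 # (n_samples, n_components)
    freqs = np.fft.rfftfreq(S.shape[0], d=1.0 / fs)
    keep = []
    for i in range(S.shape[1]):
        spectrum = np.abs(np.fft.rfft(S[:, i]))
        spectrum[freqs < 0.5] = 0.0          # ignore drift and DC
        peak = freqs[np.argmax(spectrum)]
        if f_lo <= peak <= f_hi:
            keep.append(i)
    return S[:, keep]                        # candidate fetal sources
```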
Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree
ERIC Educational Resources Information Center
Chen, Wei-Bang
2012-01-01
The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…
Information Storage and Retrieval Scientific Report No. ISR-22.
ERIC Educational Resources Information Center
Salton, Gerard
The twenty-second in a series, this report describes research in information organization and retrieval conducted by the Department of Computer Science at Cornell University. The report covers work carried out during the period summer 1972 through summer 1974 and is divided into four parts: indexing theory, automatic content analysis, feedback…
Measuring patterns in team interaction sequences using a discrete recurrence approach.
Gorman, Jamie C; Cooke, Nancy J; Amazeen, Polemnia G; Fouse, Shannon
2012-08-01
Recurrence-based measures of communication determinism and pattern information are described and validated using previously collected team interaction data. Team coordination dynamics has revealed that "mixing" team membership can lead to flexible interaction processes, whereas keeping a team "intact" can lead to rigid interaction processes. We hypothesized that the communication of intact teams would have greater determinism and higher pattern information than that of mixed teams. Determinism and pattern information were measured from three-person Uninhabited Air Vehicle team communication sequences over a series of 40-minute missions. Because team members communicated using push-to-talk buttons, communication sequences were generated automatically during each mission. The Composition x Mission determinism effect was significant: intact teams' determinism increased over missions, whereas mixed teams' determinism did not change. Intact teams had significantly higher maximum pattern information than mixed teams. Results from these new communication analysis methods converge with content-based methods and support our hypotheses. Because they are not content based, and because they are automatic and fast, these new methods may be amenable to real-time communication pattern analysis.
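A sketch of the discrete recurrence idea for categorical communication sequences: two time points recur when the same speaker (or category) occurs at both, and determinism is the share of recurrent points lying on diagonal lines of length at least 2 in the recurrence plot. This is a generic implementation, not the authors' exact one.

```python
import numpy as np

def determinism(seq, lmin=2):
    """%DET of a categorical sequence: fraction of recurrent points that
    lie on diagonal lines of length >= lmin (main diagonal excluded)."""
    s = np.asarray(seq)
    R = (s[:, None] == s[None, :])           # discrete recurrence plot
    n = len(s)
    recurrent = diag_points = 0
    for k in range(1, n):                    # upper diagonals only
        d = np.diag(R, k).astype(int)
        recurrent += d.sum()
        padded = np.concatenate(([0], d, [0]))   # find runs of 1s
        starts = np.where(np.diff(padded) == 1)[0]
        ends = np.where(np.diff(padded) == -1)[0]
        diag_points += sum(run for run in ends - starts if run >= lmin)
    return diag_points / recurrent if recurrent else 0.0

# 'A', 'B', 'C' as speaker turns: a rigid cycle vs. a looser sequence
print(determinism(list("ABCABCABCABC")))   # high determinism
print(determinism(list("ABCCABACBBCA")))   # lower determinism
```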
ERIC Educational Resources Information Center
Woodman, Geoffrey F.; Luck, Steven J.
2007-01-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by…
Automatic generation of pictorial transcripts of video programs
NASA Astrophysics Data System (ADS)
Shahraray, Behzad; Gibbon, David C.
1995-03-01
An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
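Content-based sampling here means starting a new segment when the visual content changes sharply. A minimal stand-in for the paper's key-frame selector, assuming grey-scale frames and a simple histogram-difference criterion (not the authors' actual detector), might look like:

```python
import numpy as np

def select_key_frames(frames, threshold=0.3):
    """Pick one representative frame per segment by starting a new segment
    whenever the grey-level histogram changes sharply.
    `frames` is an iterable of 2-D uint8 arrays (grey-scale frames)."""
    key_frames, prev_hist = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is None or np.abs(hist - prev_hist).sum() > threshold:
            key_frames.append(i)          # scene change -> new key frame
        prev_hist = hist
    return key_frames
```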
Harvesting Intelligence in Multimedia Social Tagging Systems
NASA Astrophysics Data System (ADS)
Giannakidou, Eirini; Kaklidou, Foteini; Chatzilari, Elisavet; Kompatsiaris, Ioannis; Vakali, Athena
As more people adopt tagging practices, social tagging systems tend to form rich knowledge repositories that enable the extraction of patterns reflecting the way content semantics is perceived by the web users. This is of particular importance, especially in the case of multimedia content, since the availability of such content in the web is very high and its efficient retrieval using textual annotations or content-based automatically extracted metadata still remains a challenge. It is argued that complementing multimedia analysis techniques with knowledge drawn from web social annotations may facilitate multimedia content management. This chapter focuses on analyzing tagging patterns and combining them with content feature extraction methods, thus generating intelligence from multimedia social tagging systems. Emphasis is placed on using all available "tracks" of knowledge, that is, tag co-occurrence together with semantic relations among tags and low-level features of the content. Towards this direction, a survey on the theoretical background and the adopted practices for analysis of multimedia social content is presented. A case study from Flickr illustrates the efficiency of the proposed approach.
NASA Astrophysics Data System (ADS)
Alibhai, Dominic; Kumar, Sunil; Kelly, Douglas; Warren, Sean; Alexandrov, Yuriy; Munro, Ian; McGinty, James; Talbot, Clifford; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Dunsby, Chris; French, Paul M. W.
2011-03-01
We describe an optically-sectioned FLIM multiwell plate reader that combines Nipkow microscopy with wide-field time-gated FLIM, and its application to high content analysis of FRET. The system acquires sectioned FLIM images in <10 s/well, requiring only ~11 minutes to read a 96 well plate of live cells expressing fluorescent protein. It has been applied to study the formation of immature HIV virus like particles (VLPs) in live cells by monitoring Gag-Gag protein interactions using FLIM FRET of HIV-1 Gag transfected with CFP or YFP. VLP formation results in FRET between closely packed Gag proteins, as confirmed by our FLIM analysis that includes automatic image segmentation.
Automatic system for detecting pornographic images
NASA Astrophysics Data System (ADS)
Ho, Kevin I. C.; Chen, Tung-Shou; Ho, Jun-Der
2002-09-01
Due to the dramatic growth of network and multimedia technology, people can access a wide variety of information over the Internet. Unfortunately, this also makes the diffusion of illegal and harmful content much easier. It has therefore become an important task for the Internet community to protect and safeguard users, especially children, from such content encountered while surfing the Net. Among such content, pornographic images cause the most serious harm. In this study, we therefore propose an automatic system to detect still colour pornographic images, as a first step towards systems that search for or filter them. Almost all pornographic images share one common characteristic: the ratio of skin region to non-skin region is high. Based on this characteristic, our system first converts the image from RGB colour space to HSV colour space so as to segment all possible skin-colour regions from the scene background. Texture analysis is then applied to the candidate skin-colour regions to separate true skin regions from non-skin regions, and adjacent pixels within skin regions are grouped. If the resulting skin ratio exceeds a given threshold, the image is flagged as possibly pornographic. In our experiments, less than 10% of non-pornographic images were classified as pornography, and over 80% of the most harmful pornographic images were classified correctly.
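A toy version of the skin-ratio test is easy to sketch with OpenCV; the HSV thresholds and the ratio cutoff below are illustrative assumptions, and the paper's texture analysis and region grouping are omitted:

```python
import cv2
import numpy as np

def is_possible_porn(image_bgr, skin_ratio_threshold=0.35):
    """Very rough skin-ratio test: segment skin-coloured pixels in HSV and
    flag the image when their fraction exceeds a threshold."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # an often-used (but illumination-sensitive) skin hue/saturation window
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    ratio = np.count_nonzero(skin) / skin.size
    return ratio > skin_ratio_threshold, ratio
```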
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Incorporating Semantics into Data Driven Workflows for Content Based Analysis
NASA Astrophysics Data System (ADS)
Argüello, M.; Fernandez-Prieto, M. J.
Finding meaningful associations between text elements and knowledge structures within clinical narratives in a highly verbal domain, such as psychiatry, is a challenging goal. The research presented here uses a small corpus of case histories and brings pre-existing knowledge into play, and therefore complements approaches that use large corpora (millions of words) and no pre-existing knowledge. The paper describes a variety of experiments for content-based analysis: Linguistic Analysis using NLP-oriented approaches, Sentiment Analysis, and Semantically Meaningful Analysis. Although it is not standard practice, the paper advocates providing automatic support to annotate the functionality as well as the data for each experiment by performing semantic annotation that uses OWL and OWL-S. Lessons learnt can be transferred to legacy clinical databases facing the conversion of clinical narratives according to prominent Electronic Health Records standards.
Using a MaxEnt Classifier for the Automatic Content Scoring of Free-Text Responses
NASA Astrophysics Data System (ADS)
Sukkarieh, Jana Z.
2011-03-01
Criticisms against multiple-choice item assessments in the USA have prompted researchers and organizations to move towards constructed-response (free-text) items. Constructed-response (CR) items pose many challenges to the education community, one of which is that they are expensive to score by humans. At the same time, there has been widespread movement towards computer-based assessment, and hence assessment organizations are competing to develop automatic content scoring engines for such item types, which we view as a textual entailment task. This paper describes how MaxEnt Modeling is used to help solve the task. MaxEnt has been used in many natural language tasks, but this is the first application of the MaxEnt approach to textual entailment and automatic content scoring.
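MaxEnt with binary or real-valued features is equivalent to multinomial logistic regression, so a toy content scorer can be sketched with scikit-learn; the training pairs are invented, and the real engine's entailment-oriented features are replaced here by simple TF-IDF n-grams:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy training data: (student response, correct/incorrect content label)
responses = ["the cell membrane controls what enters the cell",
             "cells are small",
             "the membrane regulates transport into the cell",
             "plants are green"]
labels = [1, 0, 1, 0]

# logistic regression is the standard realisation of a MaxEnt classifier
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
scorer.fit(responses, labels)
print(scorer.predict_proba(["the membrane controls entry into the cell"]))
```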
New approach for cognitive analysis and understanding of medical patterns and visualizations
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Tadeusiewicz, Ryszard
2003-11-01
This paper presents new opportunities for applying linguistic description of picture merit content and AI methods to the task of automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of the image even if the form of the image is very different from any known pattern. This article proves that structural techniques of artificial intelligence may be applied to tasks of automatic classification and machine perception based on semantic pattern content in order to determine the semantic meaning of the patterns. The paper describes examples of applying such techniques in the creation of cognitive vision systems for selected classes of medical images. On the basis of the research described in the paper, we are building new systems for collecting, storing, retrieving and intelligently interpreting selected medical images, especially those obtained in radiological and MRI examinations.
Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul
2008-12-01
An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from the structural features of the intermuscular fat surrounding the muscle, a three-step image processing pipeline was developed: a bilateral filter for noise removal, kernel fuzzy c-means clustering (KFCM) for segmentation, and vector confidence connected and flood fill for IMF extraction. Bilateral filtering was first applied to reduce noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image using the vector confidence connected and flood-fill techniques. The performance of the algorithm was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P < 0.05). Five IMF features are very significantly correlated with the fat content (P < 0.001): count densities of middle (CDMiddle) and large (CDLarge) fat particles, area densities of middle and large fat particles, and total fat area per unit LD area. The highest coefficient is 0.852, for CDLarge.
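A compressed sketch of the first two stages is shown below; k-means stands in for the paper's kernel fuzzy c-means, and the vector-confidence-connected/flood-fill extraction stage is omitted:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_beef(image_bgr, n_classes=3):
    """Noise removal with a bilateral filter, then colour clustering into
    lean / fat / background (k-means standing in for kernel fuzzy c-means)."""
    smooth = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    pixels = smooth.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
    return labels.reshape(image_bgr.shape[:2]), smooth
```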
30 CFR 27.23 - Automatic warning device.
Code of Federal Regulations, 2010 CFR
2010-07-01
... APPROVAL OF MINING PRODUCTS METHANE-MONITORING SYSTEMS Construction and Design Requirements § 27.23... function automatically at a methane content of the mine atmosphere between 1.0 to 1.5 volume percent and at all higher concentrations of methane. (c) It is recommended that the automatic warning device be...
Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T
2017-09-01
Currently, the delicate process of selecting sperm cells to be used for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on the quantitative phase maps acquired through use of interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps while taking into consideration both the cell 3D morphology and contents, as well as acquire features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF. © 2017 International Society for Advancement of Cytometry.
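The classifier stage can be sketched with scikit-learn; the morphology features below are random placeholders for the quantities actually extracted from the phase maps:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-cell morphology features from the phase maps (random stand-ins here)
rng = np.random.default_rng(0)
X = rng.normal(size=(1400, 12))
y = rng.integers(0, 2, size=1400)           # 1 = good morphology

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, scores))
print("PR  AUC:", average_precision_score(y_te, scores))
```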
Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.
Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano
2017-09-01
Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial base for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing the strengths and weaknesses of each method. In particular, the article reviews two approaches to this problem, accounting for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second one relies on wearable sensors and has the detection of eating behaviours as its main goal.
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including the Localized Region-based Active Contour Model (LRACM). Many LRACM variants are popular, but each has strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content and its application to brain tumor segmentation is presented. To this end, a framework is proposed to select one of three LRACM: Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V) and Localized Active Contour Model with Background Intensity Compensation (LACM-BIC). Twelve visual features are extracted to properly select the method that should process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than the three LRACM separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automated measurement of birefringence - Development and experimental evaluation of the techniques
NASA Technical Reports Server (NTRS)
Voloshin, A. S.; Redner, A. S.
1989-01-01
Traditional photoelasticity has started to lose its appeal since it requires a well-trained specialist to acquire and interpret results. A spectral-contents-analysis approach may help to revive this old, but still useful technique. Light intensity of the beam passed through the stressed specimen contains all the information necessary to automatically extract the value of retardation. This is done by using a photodiode array to investigate the spectral contents of the light beam. Three different techniques to extract the value of retardation from the spectral contents of the light are discussed and evaluated. An experimental system was built which demonstrates the ability to evaluate retardation values in real time.
Automatically Scoring Short Essays for Content. CRESST Report 836
ERIC Educational Resources Information Center
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.
2013-01-01
The Common Core assessments emphasize short essay constructed response items over multiple choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…
Automatic detection of Parkinson's disease in running speech spoken in three different languages.
Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E
2016-01-01
The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work comprises a step forward for the development of computer aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
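A rough sketch of the front end, using a crude energy/zero-crossing voicing heuristic in place of the paper's segmentation and omitting the Bark-band energies, could look like this (librosa assumed available):

```python
import librosa
import numpy as np

def unvoiced_mfcc(path, n_mfcc=12, frame_length=1024, hop_length=512):
    """Crude voiced/unvoiced split (high zero-crossing rate plus low energy
    ~ unvoiced), then MFCCs kept for the unvoiced frames only."""
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_length,
                                             hop_length=hop_length)[0]
    rms = librosa.feature.rms(y=y, frame_length=frame_length,
                              hop_length=hop_length)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=frame_length, hop_length=hop_length)
    unvoiced = (zcr > np.median(zcr)) & (rms < np.median(rms))
    n = min(len(unvoiced), mfcc.shape[1])
    return mfcc[:, :n][:, unvoiced[:n]]   # (n_mfcc, n_unvoiced_frames)
```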
Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer
Leon Vaizer, Jesse Angle, Neil…
2016-06-01
Rains, Stephen A; Bosch, Leslie A
2009-07-01
This article reports a content analysis of the privacy policy statements (PPSs) from 97 general reference health Web sites that was conducted to examine the ways in which visitors' privacy is constructed by health organizations. PPSs are formal documents created by the Web site owner to describe how information regarding site visitors and their behavior is collected and used. The results show that over 80% of the PPSs in the sample indicated automatically collecting or requesting that visitors voluntarily provide information about themselves, and only 3% met all five of the Federal Trade Commission's Fair Information Practices guidelines. Additionally, the results suggest that the manner in which PPSs are framed and the use of justifications for collecting information are tropes used by health organizations to foster a secondary exchange of visitors' personal information for access to Web site content.
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it combines information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying modified according to the different analysis results. Secondly, to make it more convenient for the user, the relevance feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results gained by our self-adaptive relevance feedback are given.
Dantas, Hebertty V; Barbosa, Mayara F; Nascimento, Elaine C L; Moreira, Pablo N T; Galvão, Roberto K H; Araújo, Mário C U
2013-03-15
This paper proposes a NIR spectrometric method for screening analysis of liquefied petroleum gas (LPG) samples. The proposed method is aimed at discriminating samples with low and high propane content, which can be useful for the adjustment of burn settings in industrial applications. A gas flow system was developed to introduce the LPG sample into a NIR flow cell at constant pressure. In addition, a gas chromatograph was employed to determine the propane content of the sample for reference purposes. The results of a principal component analysis, as well as a classification study using SIMCA (soft independent modeling of class analogies), revealed that the samples can be successfully discriminated with respect to propane content by using the NIR spectrum in the range 8100-8800 cm⁻¹. In addition, by using SPA-LDA (linear discriminant analysis with variables selected by the successive projections algorithm), it was found that perfect discrimination can also be achieved by using only two wavenumbers (8215 and 8324 cm⁻¹). This finding may be of value for the design of a dedicated, low-cost instrument for routine analyses. Copyright © 2012 Elsevier B.V. All rights reserved.
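Once the two wavenumbers have been selected, the discrimination step reduces to a two-variable LDA; the absorbance values below are toy stand-ins for real LPG spectra:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# absorbances at the two selected wavenumbers (8215 and 8324 cm^-1);
# toy values standing in for real LPG spectra
X = np.array([[0.41, 0.22], [0.43, 0.21], [0.60, 0.35], [0.62, 0.37],
              [0.40, 0.23], [0.61, 0.36]])
y = np.array([0, 0, 1, 1, 0, 1])        # 0 = low propane, 1 = high propane

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[0.42, 0.22], [0.63, 0.36]]))   # -> [0 1]
```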
Kosonogov, Vladimir; Martinez-Selva, Jose M; Carrillo-Verdejo, Eduvigis; Torrente, Ginesa; Carretié, Luis; Sanchez-Navarro, Juan P
2018-06-18
The social content of affective stimuli has been proposed as having an influence on cognitive processing and behaviour. This research was aimed, therefore, at studying whether automatic exogenous attention demanded by affective pictures was related to their social value. We hypothesised that affective social pictures would capture attention to a greater extent than non-social affective stimuli. For this purpose, we recorded event-related potentials in a sample of 24 participants engaged in a digit categorisation task. Distracters were affective pictures varying in social content, in addition to affective valence and arousal, which appeared in the background during the task. Our data revealed that pictures depicting high social content captured greater automatic attention than other pictures, as reflected by the greater amplitude and shorter latency of anterior P2, and anterior and posterior N2 components of the ERPs. In addition, social content also provoked greater allocation of processing resources as manifested by P3 amplitude, likely related to the high arousal they elicited. These results extend data from previous research by showing the relevance of the social value of the affective stimuli on automatic attentional processing.
Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos
NASA Astrophysics Data System (ADS)
Chang, Chia-Hu; Wu, Ja-Ling
With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert a user-selected virtual content into personal videos in a less-intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual contents into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of message conveyed by the virtual content but also as the environment in which the lifelike virtual contents live. Thus, the inserted virtual content will be affected by the videos to trigger a series of artificial evolutions and evolve its appearances and behaviors while interacting with video contents. By inserting virtual contents into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it would bring a new opportunity to increase the advertising revenue for video assets of the media industry and online video-sharing websites.
Automatic loudness control in short-form content for broadcasting.
Pires, Leandro da S; Vieira, Maurílio N; Yehia, Hani C
2017-03-01
During the early years of the International Telecommunication Union (ITU) loudness calculation standard for sound broadcasting [ITU-R (2006), Rec. BS Series, 1770], the need for additional loudness descriptors to evaluate short-form content, such as commercials and live inserts, was identified. This work proposes a loudness control scheme to prevent loudness jumps, which can bother audiences. It employs short-form content audio detection and dynamic range processing methods for the maximum loudness level criteria. Detection is achieved by combining principal component analysis for dimensionality reduction and support vector machines for binary classification. Subsequent processing is based on short-term loudness integrators and Hilbert transformers. The performance was assessed using quality classification metrics and demonstrated through a loudness control example.
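The control idea (measure short-term loudness, pull it back when it exceeds the target) can be sketched as follows; plain RMS stands in for the K-weighted ITU measure, and the PCA/SVM detection front end is omitted:

```python
import numpy as np

def limit_short_term_loudness(x, fs, target_db=-23.0, window_s=0.4):
    """Sliding short-term loudness (plain RMS in dB, standing in for the
    K-weighted ITU measure) with gain reduction whenever the target is
    exceeded; a toy version of a maximum-loudness-level control."""
    win = int(window_s * fs)
    out = x.astype(float).copy()
    for start in range(0, len(out) - win, win):
        seg = out[start:start + win]               # view into `out`
        loud_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
        if loud_db > target_db:
            seg *= 10 ** ((target_db - loud_db) / 20)  # back to target
    return out
```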
Semi-automatic object geometry estimation for image personalization
NASA Astrophysics Data System (ADS)
Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.
2010-01-01
Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques from image processing and computer vision, such as the Hough transform, the bilateral filter, and connected component analysis, are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper.¹ Experimental results, including personalized images for both scenarios, are shown, which demonstrate the effectiveness of our algorithms.
Automated document analysis system
NASA Astrophysics Data System (ADS)
Black, Jeffrey D.; Dietzel, Robert; Hartnett, David
2002-08-01
A software application has been developed to aid law enforcement and government intelligence gathering organizations in the translation and analysis of foreign language documents with potential intelligence content. The Automated Document Analysis System (ADAS) provides the capability to search (data or text mine) documents in English and the most commonly encountered foreign languages, including Arabic. Hardcopy documents are scanned by a high-speed scanner and processed with optical character recognition (OCR). Documents obtained in an electronic format bypass the OCR and are copied directly to a working directory. For translation and analysis, the script and the language of the documents are first determined. If the document is not in English, it is machine translated to English. The documents are searched for keywords and key features in either the native language or translated English. The user can quickly review the document to determine whether it has any intelligence content and whether detailed, verbatim human translation is required. The documents and document content are cataloged for potential future analysis. The system allows non-linguists to evaluate foreign language documents and allows for the quick analysis of a large quantity of documents. All document processing can be performed manually or automatically on a single document or a batch of documents.
Multi-view information fusion for automatic BI-RADS description of mammographic masses
NASA Astrophysics Data System (ADS)
Narvaez, Fabián; Díaz, Gloria; Romero, Eduardo
2011-03-01
Most CBIR-based CAD systems (Content Based Image Retrieval systems for Computer Aided Diagnosis) identify lesions that are eventually relevant. These systems base their analysis upon a single independent view. This article presents a CBIR framework which automatically describes mammographic masses with the BI-RADS lexicon, fusing information from the two mammographic views. After an expert selects a Region of Interest (RoI) at the two views, a CBIR strategy searches similar masses in the database by automatically computing the Mahalanobis distance between shape and texture feature vectors of the mammography. The strategy was assessed in a set of 400 cases, for which the suggested descriptions were compared with the ground truth provided by the data base. Two information fusion strategies were evaluated, allowing a retrieval precision rate of 89.6% in the best scheme. Likewise, the best performance obtained for shape, margin and pathology description, using a ROC methodology, was reported as AUC = 0.86, AUC = 0.72 and AUC = 0.85, respectively.
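The retrieval core is a Mahalanobis ranking over feature vectors, which is compact to sketch:

```python
import numpy as np

def retrieve_similar(query, database):
    """Rank database mass descriptors by Mahalanobis distance to the query.
    `database` is an (n_cases, n_features) array of shape+texture vectors."""
    cov = np.cov(database, rowvar=False)
    inv_cov = np.linalg.pinv(cov)          # pseudo-inverse for robustness
    diff = database - query
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return np.argsort(d2)                  # most similar cases first
```

Fusing the two mammographic views could then be as simple as summing each case's squared distances across views before ranking, though the paper evaluates two dedicated fusion strategies.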
Automatic and user-centric approaches to video summary evaluation
NASA Astrophysics Data System (ADS)
Taskiran, Cuneyt M.; Bentley, Frank
2007-01-01
Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on developing summarization systems based on a clear understanding of user needs obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.
NASA Astrophysics Data System (ADS)
Guo, Tianze; Bi, Siyu; Liu, Jiaming
2018-04-01
This essay reviews the legal restraint of illegal online content under the e-commerce directive and describes how European countries detect and report illegal content through instructions from competent authorities, notifications from credible flaggers, user reports, and technical tools. Illegal content is removed under service terms and documented in transparency reports, with safeguards in place to prevent excessive deletion, while filters are used to detect illegal content and prevent its recurrence. By analyzing China's position in combating illegal content, the essay concludes that China's success lies in an all-round collaborative management model involving the state, governments, enterprises, and individuals. Finally, the essay proposes building a training corpus that can automatically update the ability to identify illegal content, together with an optimization scheme that establishes a complete set of address resolution procedures, classifies IP address data through big data analysis, and uses a DNS protection module to prevent hackers from spreading illegal content by tampering with DNS.
Automatic Content Recommendation and Aggregation According to SCORM
ERIC Educational Resources Information Center
Neves, Daniel Eugênio; Brandão, Wladmir Cardoso; Ishitani, Lucila
2017-01-01
Although widely used, the SCORM metadata model for content aggregation is difficult to be used by educators, content developers and instructional designers. Particularly, the identification of contents related with each other, in large repositories, and their aggregation using metadata as defined in SCORM, has been demanding efforts of computer…
Syntax-directed content analysis of videotext: application to a map detection recognition system
NASA Astrophysics Data System (ADS)
Aradhye, Hrishikesh; Herson, James A.; Myers, Gregory
2003-01-01
Video is an increasingly important and ever-growing source of information to the intelligence and homeland defense analyst. A capability to automatically identify the contents of video imagery would enable the analyst to index relevant foreign and domestic news videos in a convenient and meaningful way. To this end, the proposed system aims to help determine the geographic focus of a news story directly from video imagery by detecting and geographically localizing political maps from news broadcasts, using the results of videotext recognition in lieu of a computationally expensive, scale-independent shape recognizer. Our novel method for the geographic localization of a map is based on the premise that the relative placement of text superimposed on a map roughly corresponds to the geographic coordinates of the locations the text represents. Our scheme extracts and recognizes videotext, and iteratively identifies the geographic area, while allowing for OCR errors and artistic freedom. The fast and reliable recognition of such maps by our system may provide valuable context and supporting evidence for other sources, such as speech recognition transcripts. The concepts of syntax-directed content analysis of videotext presented here can be extended to other content analysis systems.
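The stated premise, that the relative placement of recognised place names roughly mirrors their geographic coordinates, suggests scoring candidate regions by how well an affine map fits pixel positions to coordinates. The sketch below is our illustration of that premise, not the paper's algorithm; the pixel positions are invented:

```python
import numpy as np

def fit_map_geometry(pixel_xy, lonlat):
    """Least-squares affine map from pixel positions of recognised place
    names to their known (lon, lat); the residual indicates how well a
    candidate geographic region explains the text layout."""
    A = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])   # [x y 1]
    coef, _, _, _ = np.linalg.lstsq(A, lonlat, rcond=None)
    rms = np.sqrt(np.mean((A @ coef - lonlat) ** 2))
    return coef, rms                      # low rms = plausible region

# three recognised labels on a map and their true coordinates
px = np.array([[120.0, 80.0], [300.0, 90.0], [210.0, 240.0]])
geo = np.array([[2.35, 48.85], [13.40, 52.52], [12.50, 41.90]])  # Paris, Berlin, Rome
print(fit_map_geometry(px, geo)[1])
```

In practice the fit would be repeated over candidate label-to-place assignments, tolerating OCR errors by discarding outliers.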
Hayashi, Junichiro
2009-02-01
The present study developed and evaluated the Automatic Thoughts List following Dilatory Behavior (ATL-DB) to explore the mediation hypothesis and the content-specificity hypothesis concerning the automatic thoughts associated with trait procrastination and emotions. In Study 1, data from 113 Japanese college students were used to choose 22 items to construct the ATL-DB. Two factors were identified: I. Criticism of Self and Behavior, and II. Difficulty in Achievement. These factors had high degrees of internal consistency and positive correlations with trait procrastination. In Study 2, the relationships among trait procrastination, the automatic thoughts, depression, and anxiety were examined in 261 college students by using Structural Equation Modeling. The results showed that the influence of trait procrastination on depression was mediated mainly through Criticism of Self and Behavior only, while the influence of trait procrastination on anxiety was mediated through both Criticism of Self and Behavior and Difficulty in Achievement. Therefore, the mediation hypothesis was supported and the content-specificity hypothesis was partially supported.
Reliable Acquisition of RAM Dumps from Intel-Based Apple Mac Computers over FireWire
NASA Astrophysics Data System (ADS)
Gladyshev, Pavel; Almansoori, Afrah
RAM content acquisition is an important step in live forensic analysis of computer systems. FireWire offers an attractive way to acquire RAM content of Apple Mac computers equipped with a FireWire connection. However, the existing techniques for doing so require substantial knowledge of the target computer configuration and cannot be used reliably on a previously unknown computer in a crime scene. This paper proposes a novel method for acquiring RAM content of Apple Mac computers over FireWire, which automatically discovers necessary information about the target computer and can be used in the crime scene setting. As an application of the developed method, the techniques for recovery of AOL Instant Messenger (AIM) conversation fragments from RAM dumps are also discussed in this paper.
Exploring Characterizations of Learning Object Repositories Using Data Mining Techniques
NASA Astrophysics Data System (ADS)
Segura, Alejandra; Vidal, Christian; Menendez, Victor; Zapata, Alfredo; Prieto, Manuel
Learning object repositories provide a platform for the sharing of Web-based educational resources. As these repositories evolve independently, it is difficult for users to have a clear picture of the kind of contents they give access to. Metadata can be used to automatically extract a characterization of these resources by using machine learning techniques. This paper presents an exploratory study carried out in the contents of four public repositories that uses clustering and association rule mining algorithms to extract characterizations of repository contents. The results of the analysis include potential relationships between different attributes of learning objects that may be useful to gain an understanding of the kind of resources available and eventually develop search mechanisms that consider repository descriptions as a criteria in federated search.
Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang
2017-07-01
Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards, users can perform manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
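A small stand-in for Gabor-based feature extraction and candidate ranking (not the system's exact "Gabor surface features") can be sketched with OpenCV:

```python
import cv2
import numpy as np

def gabor_features(gray, n_orientations=8, ksize=31):
    """Mean/std of Gabor filter responses over several orientations; a
    simplified stand-in for the Gabor features used in retrieval."""
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

def rank_candidates(query_feat, library_feats):
    """Nearest feature vectors in the reference image library first."""
    d = np.linalg.norm(library_feats - query_feat, axis=1)
    return np.argsort(d)
```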
Balanced states of mind in psychopathology and psychological well-being.
Wong, Shyh Shin
2010-08-01
The balanced states of mind (BSOM) model proposes that coping with stress and psychological well-being is a function of the BSOM ratio of positive thoughts to the sum of positive and negative thoughts. Based on different BSOM ratios, different BSOM categories are constructed to quantitatively differentiate levels of coping with stress and psychological well-being. The cognitive content-specificity hypothesis states that there are unique themes of semantic content in self-reported automatic thoughts particular to depression or anxiety. This study investigated the BSOM model and its cognitive content-specificity for depression, anxiety, anger, stress, life satisfaction, and happiness, based on negative and positive automatic thoughts. Three hundred and ninety-eight college students from Singapore participated in this study. First, BSOM ratio and positive automatic thoughts were positively correlated with life satisfaction and happiness, and negatively correlated with stress, anxiety, depression, and anger. In contrast, negative automatic thoughts were positively correlated with stress, anxiety, depression, and anger, and negatively correlated with life satisfaction and happiness. Second, levels of psychopathology and psychological well-being were statistically differentiable among the BSOM categories for depression, happiness, perceived stress, and life satisfaction; and less statistically differentiable among the BSOM categories for anxiety and anger, as expected based on the BSOM model and cognitive content-specificity hypothesis. Third, the results were more supportive of the BSOM model for depression, followed by happiness, perceived stress, life satisfaction, anxiety, and anger in terms of percentage of variance accounted for by BSOM categories, as expected based on the cognitive content-specificity hypothesis. Taken together, the results suggested that the more moderately positive thoughts one has (balanced by negative thoughts), the better mental health outcomes one has. Implications and limitations of these findings are discussed.
García Iglesias, Daniel; Roqueñi Gutiérrez, Nieves; De Cos, Francisco Javier; Calvo, David
2018-02-12
Fragmentation and delayed potentials in the QRS signal of patients have been postulated as risk markers for Sudden Cardiac Death (SCD), and analysis of the high-frequency spectral content may be useful for their quantification. Forty-two consecutive patients with a prior history of SCD or malignant arrhythmias (patients) were compared with 120 healthy individuals (controls). The QRS complexes were extracted with a modified Pan-Tompkins algorithm and processed with the Continuous Wavelet Transform to analyze the high-frequency content (85-130 Hz). Overall, the power of the high-frequency content was higher in patients compared with controls (170.9 vs. 47.3 ×10³ nV²·Hz⁻¹; p = 0.007), with a prolonged time to reach the maximal power (68.9 vs. 64.8 ms; p = 0.002). An analysis of the signal intensity (instantaneous average of cumulative power) revealed a distinct function between patients and controls. The total intensity was higher in patients compared with controls (137.1 vs. 39 ×10³ nV²·Hz⁻¹·s⁻¹; p = 0.001) and the time to reach the maximal intensity was also prolonged (88.7 vs. 82.1 ms; p < 0.001). The high-frequency content of the QRS complexes was distinct between patients at risk of SCD and healthy controls. The wavelet transform is an efficient tool for spectral analysis of the QRS complexes that may contribute to stratification of risk.
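The spectral measurement can be sketched with PyWavelets; the Morlet wavelet and scale range below are assumptions (the abstract does not state them), and a sampling rate high enough to resolve 85-130 Hz, e.g. 1 kHz, is required:

```python
import numpy as np
import pywt

def qrs_high_freq_power(qrs, fs, band=(85.0, 130.0)):
    """Continuous wavelet transform of an averaged QRS complex: total power
    in the 85-130 Hz band and the time (ms) at which that power peaks."""
    scales = np.arange(1, 128)
    coef, freqs = pywt.cwt(qrs, scales, 'morl', sampling_period=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    power = np.abs(coef[in_band]) ** 2     # (n_scales_in_band, n_samples)
    power_t = power.sum(axis=0)            # band power over time
    t_max_ms = 1000 * np.argmax(power_t) / fs
    return power_t.sum(), t_max_ms
```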
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, the audio scene is categorized and indexed as one of the basic audio types while a visual shot is presented by keyframes and associate image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
MPEG-7 based video annotation and browsing
NASA Astrophysics Data System (ADS)
Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens
2003-11-01
The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionalities. Since manual video annotation can be time consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kind of sports and describe our approach to automatic annotation of equestrian sports videos. We especially concentrate on MPEG-7 based feature extraction and content description, where we apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined single shot positions as well as the visual highlights, the information is jointly stored with meta-textual information in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user-interface in order to provide content-based access to the video stream, but further for media browsing on a streaming server.
K-Nearest Neighbors Relevance Annotation Model for Distance Education
ERIC Educational Resources Information Center
Ke, Xiao; Li, Shaozi; Cao, Donglin
2011-01-01
With the rapid development of Internet technologies, distance education has become a popular educational mode. In this paper, the authors propose an online image automatic annotation distance education system, which could effectively help children learn interrelations between image content and corresponding keywords. Image automatic annotation is…
Charoenkwan, Phasit; Hwang, Eric; Cutler, Robert W; Lee, Hua-Chin; Ko, Li-Wei; Huang, Hui-Ling; Ho, Shinn-Ying
2013-01-01
High-content screening (HCS) has become a powerful tool for drug discovery. However, the discovery of drugs targeting neurons is still hampered by the inability to accurately identify and quantify the phenotypic changes of multiple neurons in a single image (named multi-neuron image) of a high-content screen. Therefore, it is desirable to develop an automated image analysis method for analyzing multi-neuron images. We propose an automated analysis method with novel descriptors of neuromorphology features for analyzing HCS-based multi-neuron images, called HCS-neurons. To observe multiple phenotypic changes of neurons, we propose two kinds of descriptors which are neuron feature descriptor (NFD) of 13 neuromorphology features, e.g., neurite length, and generic feature descriptors (GFDs), e.g., Haralick texture. HCS-neurons can 1) automatically extract all quantitative phenotype features in both NFD and GFDs, 2) identify statistically significant phenotypic changes upon drug treatments using ANOVA and regression analysis, and 3) generate an accurate classifier to group neurons treated by different drug concentrations using support vector machine and an intelligent feature selection method. To evaluate HCS-neurons, we treated P19 neurons with nocodazole (a microtubule depolymerizing drug which has been shown to impair neurite development) at six concentrations ranging from 0 to 1000 ng/mL. The experimental results show that all the 13 features of NFD have statistically significant difference with respect to changes in various levels of nocodazole drug concentrations (NDC) and the phenotypic changes of neurites were consistent to the known effect of nocodazole in promoting neurite retraction. Three identified features, total neurite length, average neurite length, and average neurite area were able to achieve an independent test accuracy of 90.28% for the six-dosage classification problem. This NFD module and neuron image datasets are provided as a freely downloadable MatLab project at http://iclab.life.nctu.edu.tw/HCS-Neurons. Few automatic methods focus on analyzing multi-neuron images collected from HCS used in drug discovery. We provided an automatic HCS-based method for generating accurate classifiers to classify neurons based on their phenotypic changes upon drug treatments. The proposed HCS-neurons method is helpful in identifying and classifying chemical or biological molecules that alter the morphology of a group of neurons in HCS.
Wei, Liang
2010-01-01
A simple, rapid and sensitive method was proposed for online determination of tannic acid in colored tannery wastewater by automatic reference flow injection analysis. Based on the reduction of phosphotungstic acid by tannic acid to form a blue compound in alkaline solution at pH 12.38, the absorbance of the blue compound at its maximum absorption peak of 760 nm is linearly related to the tannic acid content. The optimal experimental conditions were obtained. The linear range of the proposed method was from 200 μg L⁻¹ to 80 mg L⁻¹ and the detection limit was 0.58 μg L⁻¹. The relative standard deviation was 3.08% and 2.43% for 500 μg L⁻¹ and 40 mg L⁻¹ tannic acid standard solutions, respectively (n = 10). The method was successfully applied to the determination of tannic acid in colored tannery wastewaters and the analytical results were satisfactory. PMID:20508812
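Quantification from such a linear calibration is a one-line inversion; the absorbance values below are invented for illustration:

```python
import numpy as np

# calibration: absorbance at 760 nm vs tannic acid concentration (toy values)
conc = np.array([0.2, 1.0, 5.0, 20.0, 40.0, 80.0])       # mg L^-1
absorbance = np.array([0.004, 0.021, 0.102, 0.409, 0.815, 1.630])

slope, intercept = np.polyfit(conc, absorbance, 1)

def tannic_acid_content(a760):
    """Invert the linear calibration to estimate concentration (mg L^-1)."""
    return (a760 - intercept) / slope

print(tannic_acid_content(0.50))   # ~24.5 mg L^-1 for these toy values
```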
Bridging the semantic gap in sports
NASA Astrophysics Data System (ADS)
Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim
2003-01-01
One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.
UMLS content views appropriate for NLP processing of the biomedical literature vs. clinical text.
Demner-Fushman, Dina; Mork, James G; Shooshan, Sonya E; Aronson, Alan R
2010-08-01
Identification of medical terms in free text is a first step in such Natural Language Processing (NLP) tasks as automatic indexing of biomedical literature and extraction of patients' problem lists from the text of clinical notes. Many tools developed to perform these tasks use biomedical knowledge encoded in the Unified Medical Language System (UMLS) Metathesaurus. We continue our exploration of automatic approaches to creation of subsets (UMLS content views) which can support NLP processing of either the biomedical literature or clinical text. We found that suppression of highly ambiguous terms in the conservative AutoFilter content view can partially replace manual filtering for literature applications, and suppression of two character mappings in the same content view achieves 89.5% precision at 78.6% recall for clinical applications. Published by Elsevier Inc.
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
In this research, story units are extracted by general tempo analysis, including the tempos of audio and visual information. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sports or news, models can be built with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Tóth, László; Hoffmann, Ildikó; Gosztolya, Gábor; Vincze, Veronika; Szatlóczki, Gréta; Bánréti, Zoltán; Pákáski, Magdolna; Kálmán, János
2018-01-01
Background: Even today the reliable diagnosis of the prodromal stages of Alzheimer's disease (AD) remains a great challenge. Our research focuses on the earliest detectable indicators of cognitive decline in mild cognitive impairment (MCI). Since the presence of language impairment has been reported even in the mild stage of AD, the aim of this study is to develop a sensitive neuropsychological screening method based on the analysis of spontaneous speech production during the performance of a memory task. In the future, this can form the basis of an Internet-based interactive screening software for the recognition of MCI. Methods: Participants were 38 healthy controls and 48 clinically diagnosed MCI patients. Spontaneous speech was provoked by asking the patients to recall the content of 2 short black and white films (one immediately, one after a delay), and by answering one question. Acoustic parameters (hesitation ratio, speech tempo, length and number of silent and filled pauses, length of utterance) were extracted from the recorded speech signals, first manually (using the Praat software), and then automatically, with an automatic speech recognition (ASR) based tool. First, the extracted parameters were statistically analyzed. Then we applied machine learning algorithms to see whether the MCI and the control group can be discriminated automatically based on the acoustic features. Results: The statistical analysis showed significant differences for most of the acoustic parameters (speech tempo, articulation rate, silent pause, hesitation ratio, length of utterance, pause-per-utterance ratio). The most significant differences between the two groups were found in the speech tempo in the delayed recall task, and in the number of pauses for the question-answering task. The fully automated version of the analysis process, that is, using the ASR-based features in combination with machine learning, was able to separate the two classes with an F1-score of 78.8%. Conclusion: The temporal analysis of spontaneous speech can be exploited in implementing a new, automatic detection-based tool for screening MCI for the community. PMID:29165085
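The silent-pause portion of the feature set can be approximated with a simple energy threshold (a much cruder stand-in for the Praat/ASR pipeline; filled pauses require a recogniser):

```python
import numpy as np

def hesitation_features(y, fs, frame_ms=25, silence_db=-35.0):
    """Energy-based silent-pause detection on a speech signal, yielding the
    hesitation ratio (pause time / total task time) and pause counts."""
    frame = int(fs * frame_ms / 1000)
    n = len(y) // frame
    frames = y[:n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    level_db = 20 * np.log10(rms / (rms.max() + 1e-12) + 1e-12)
    silent = level_db < silence_db        # frames well below the loudest
    changes = np.diff(silent.astype(int))
    n_pauses = int((changes == 1).sum() + (1 if silent[0] else 0))
    return {"hesitation_ratio": float(silent.mean()),
            "n_silent_pauses": n_pauses,
            "speech_time_s": float((~silent).sum() * frame / fs)}
```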
Leaching of zinc compound from rubber stoppers into the contents of automatic atropine injectors.
Ellin, R I; Kaminskis, A; Zvirblis, P; Sultan, W E; Shutz, M B; Matthews, R
1985-07-01
This report describes how a material within the cartridge of an automatic injector contaminated its contents. On prolonged storage, a formulation that contained atropine produced lethality in mice. The toxic material originated from zinc compounds that were present in the rubber stopper and plunger of the container and that subsequently leached into the formulation. The contents of cartridges that contained greater than or equal to 0.75 mg/mL of solubilized zinc were lethal to at least 20% of the mice tested; those that contained 0.42 mg/mL showed no lethality. The problem resulted from the physicochemical properties of the rubber, not the concentration of zinc used in the vulcanization process.
A novel automatic segmentation workflow of axial breast DCE-MRI
NASA Astrophysics Data System (ADS)
Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra
2018-04-01
In this paper we propose a novel process for fully automatic breast tissue segmentation which is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step consists of the detection of breast boundaries. It is based on image content analysis and the Moore-Neighbour tracing algorithm. As processing steps, Otsu thresholding and a neighborhood algorithm are applied. Then, the external area of the breast is removed to get an approximated breast region. The second step is the delineation of the chest wall, which is modeled as the lowest-cost path linking three key points located automatically on the breast: the left and right boundary points and the middle upper point, placed at the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms and challenging cases. In fact, the mean overlap between manual segmentation and automatic segmentation through our method is 96.5%. A comparative study shows that our proposed process is competitive and faster than existing methods. Segmentation of 120 slices with our method is achieved in 20.57+/-5.2 s.
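To make the minimum-cost-path step concrete, a small sketch of Dijkstra's algorithm on a pixel grid follows; the random cost map and the end points are placeholders, not the per-pixel costs the authors derive from the image.

```python
# Minimal sketch of the minimum-cost-path step: Dijkstra's algorithm on a
# pixel grid, where each pixel's cost could be derived from image gradients.
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    """cost: 2D array of per-pixel costs; start/goal: (row, col)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connected grid
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    node, path = goal, [goal]           # walk predecessors back to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

cost = np.random.rand(64, 64)           # placeholder cost map
path = dijkstra_path(cost, (32, 0), (30, 63))
```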
Set Up of an Automatic Water Quality Sampling System in Irrigation Agriculture
Heinz, Emanuel; Kraft, Philipp; Buchen, Caroline; Frede, Hans-Georg; Aquino, Eugenio; Breuer, Lutz
2014-01-01
We have developed a high-resolution automatic sampling system for continuous in situ measurements of stable water isotopic composition and nitrogen solutes along with hydrological information. The system facilitates concurrent monitoring of a large number of water and nutrient fluxes (ground, surface, irrigation and rain water) in irrigated agriculture. For this purpose we couple an automatic sampling system with a Wavelength-Scanned Cavity Ring Down Spectrometry System (WS-CRDS) for stable water isotope analysis (δ2H and δ18O), a reagentless hyperspectral UV photometer (ProPS) for monitoring nitrate content and various water level sensors for hydrometric information. The automatic sampling system consists of different sampling stations equipped with pumps, a switch cabinet for valve and pump control and a computer operating the system. The complete system is operated via internet-based control software, allowing supervision from nearly anywhere. The system is currently set up at the International Rice Research Institute (Los Baños, The Philippines) in a diversified rice growing system to continuously monitor water and nutrient fluxes. Here we present the system's technical set-up and provide initial proof-of-concept with results for the isotopic composition of different water sources and nitrate values from the 2012 dry season. PMID:24366178
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
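A toy sketch of the underlying idea, assuming variable fields can be masked with regular expressions: messages that share a masked template fall into the same syntactic group. The paper's textual clustering is more general than this heuristic.

```python
# Toy version of syntactic log grouping: masking the variable fields of
# each message yields a template, and messages sharing a template fall
# into the same group.
import re
from collections import defaultdict

def template(msg):
    msg = re.sub(r'0x[0-9a-fA-F]+', '<HEX>', msg)   # hex addresses
    msg = re.sub(r'\d+', '<NUM>', msg)              # counters, IDs, times
    return msg

groups = defaultdict(list)
for line in ["node 17 fan speed 3200", "node 4 fan speed 2900",
             "ECC error at 0x7fa3 on node 4"]:
    groups[template(line)].append(line)

for tpl, msgs in groups.items():
    print(tpl, len(msgs))
```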
Image/text automatic indexing and retrieval system using context vector approach
NASA Astrophysics Data System (ADS)
Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick
1995-11-01
Thousands of documents and images are generated daily both on and off line on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique is based on the concept of `context vectors' which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as a query, without performing segmentation or object recognition.
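A hedged sketch of the retrieval step: if documents and queries are represented as dense context vectors, nearest neighbors under cosine similarity yield candidate index assignments. HNC's actual vector construction is not public; random unit vectors stand in here.

```python
# Cosine-similarity retrieval over dense "context vectors".
import numpy as np

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(1000, 256))               # placeholder corpus
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def most_similar(query_vec, k=5):
    q = query_vec / np.linalg.norm(query_vec)
    sims = doc_vectors @ q                               # cosine similarities
    return np.argsort(sims)[::-1][:k]                    # top-k document ids

print(most_similar(rng.normal(size=256)))
```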
Gater, Deborah L; Widatalla, Namareq; Islam, Kinza; AlRaeesi, Maryam; Teo, Jeremy C M; Pearson, Yanthe E
2017-12-13
The transformation of normal macrophage cells into lipid-laden foam cells is an important step in the progression of atherosclerosis. One major contributor to foam cell formation in vivo is the intracellular accumulation of cholesterol. Here, we report the effects of various combinations of low-density lipoprotein, sterols, lipids and other factors on human macrophages, using an automated image analysis program to quantitatively compare single cell properties, such as cell size and lipid content, in different conditions. We observed that the addition of cholesterol caused an increase in average cell lipid content across a range of conditions. All of the sterol-lipid mixtures examined were capable of inducing increases in average cell lipid content, with variations in the distribution of the response, in cytotoxicity and in how the sterol-lipid combination interacted with other activating factors. For example, cholesterol and lipopolysaccharide acted synergistically to increase cell lipid content while also increasing cell survival compared with the addition of lipopolysaccharide alone. Additionally, ergosterol and cholesteryl hemisuccinate caused similar increases in lipid content but also exhibited considerably greater cytotoxicity than cholesterol. The use of automated image analysis enables us to assess not only changes in average cell size and content, but also to rapidly and automatically compare population distributions based on simple fluorescence images. Our observations add to increasing understanding of the complex and multifactorial nature of foam-cell formation and provide a novel approach to assessing the heterogeneity of macrophage response to a variety of factors.
Presentation video retrieval using automatically recovered slide and spoken text
NASA Astrophysics Data System (ADS)
Cooper, Matthew
2013-03-01
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
Application of MPEG-7 descriptors for content-based indexing of sports videos
NASA Astrophysics Data System (ADS)
Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer
2003-06-01
The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time-consuming, an automatic solution is desirable. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports, and present our application for the automatic annotation of equestrian sports videos. We especially concentrate on MPEG-7 based feature extraction and content description, and apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Having determined single shot positions as well as the visual highlights, the information is jointly stored together with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end in order to provide content-based access to the video stream, as well as further content-based queries and navigation on a video-on-demand streaming server.
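As an illustration of descriptor-based cut detection, a minimal sketch that flags a cut when the histogram distance between consecutive frames exceeds a threshold; plain intensity histograms stand in for the MPEG-7 descriptors, and the threshold is arbitrary.

```python
# Illustrative shot-cut detector: frames whose histogram distance to the
# previous frame exceeds a threshold are flagged as cuts.
import numpy as np

def detect_cuts(frames, bins=32, threshold=0.4):
    """frames: iterable of HxWx3 uint8 arrays; returns cut frame indices."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()                     # normalized histogram
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)                           # large jump -> cut
        prev = hist
    return cuts
```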
Personalized professional content recommendation
Xu, Songhua
2015-10-27
A personalized content recommendation system includes a client interface configured to automatically monitor a user's information data stream transmitted on the Internet. A hybrid contextual behavioral and collaborative personal interest inference engine resident to a non-transient media generates automatic predictions about the interests of individual users of the system. A database server retains the user's personal interest profile based on a plurality of monitored information. The system also includes a server programmed to filter items in an incoming information stream with the personal interest profile and is further programmed to identify only those items of the incoming information stream that substantially match the personal interest profile.
Breaking the Cost Barrier in Automatic Classification.
ERIC Educational Resources Information Center
Doyle, L. B.
A low-cost automatic classification method is reported that uses computer time in proportion to NlogN, where N is the number of information items and the base is a parameter. Some barriers besides cost are treated briefly in the opening section, including types of intellectual resistance to the idea of doing classification by content-word…
ERIC Educational Resources Information Center
Rafart Serra, Maria Assumpció; Bikfalvi, Andrea; Soler Masó, Josep; Prados Carrasco, Ferran; Poch Garcia, Jordi
2017-01-01
The combination of two macro trends, Information and Communication Technologies' (ICT) proliferation and novel approaches in education, has resulted in a series of opportunities with no precedent in terms of content, channels and methods in education. The present contribution aims to describe the experience of using an automatic spreadsheet…
García Iglesias, Daniel; Roqueñi Gutiérrez, Nieves; De Cos, Francisco Javier; Calvo, David
2018-01-01
Background: Fragmentation and delayed potentials in the QRS signal of patients have been postulated as risk markers for Sudden Cardiac Death (SCD). The analysis of the high-frequency spectral content may be useful for quantification. Methods: Forty-two consecutive patients with prior history of SCD or malignant arrhythmias (patients) were compared with 120 healthy individuals (controls). The QRS complexes were extracted with a modified Pan-Tompkins algorithm and processed with the Continuous Wavelet Transform to analyze the high-frequency content (85–130 Hz). Results: Overall, the power of the high-frequency content was higher in patients compared with controls (170.9 vs. 47.3 ×10³ nV²·Hz⁻¹; p = 0.007), with a prolonged time to reach the maximal power (68.9 vs. 64.8 ms; p = 0.002). An analysis of the signal intensity (instantaneous average of cumulative power) revealed a distinct function between patients and controls. The total intensity was higher in patients compared with controls (137.1 vs. 39 ×10³ nV²·Hz⁻¹·s⁻¹; p = 0.001) and the time to reach the maximal intensity was also prolonged (88.7 vs. 82.1 ms; p < 0.001). Discussion: The high-frequency content of the QRS complexes was distinct between patients at risk of SCD and healthy controls. The wavelet transform is an efficient tool for spectral analysis of the QRS complexes that may contribute to stratification of risk. PMID:29439530
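A sketch of the spectral measurement under stated assumptions: the Continuous Wavelet Transform of an extracted QRS segment, restricted to the 85-130 Hz band. The Morlet wavelet, the scale range, the sampling rate, and the placeholder signal are all assumptions, not the authors' exact configuration.

```python
# High-frequency band power of a QRS segment via the Continuous Wavelet
# Transform (PyWavelets).
import numpy as np
import pywt

fs = 1000.0                       # sampling rate in Hz (assumed)
qrs = np.random.randn(120)        # placeholder 120-ms QRS segment

scales = np.arange(1, 64)
coef, freqs = pywt.cwt(qrs, scales, 'morl', sampling_period=1.0 / fs)

band = (freqs >= 85) & (freqs <= 130)        # keep 85-130 Hz rows
power = np.abs(coef[band]) ** 2              # time-frequency power in band
intensity = np.cumsum(power.sum(axis=0))     # rough cumulative-power curve
t_max_power = power.sum(axis=0).argmax() / fs * 1000.0  # ms to power peak
```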
Automatic Seismic Signal Processing Research.
1981-09-01
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) to the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
Gnaiger, E; Bitterlich, G
1984-06-01
Carbohydrate, lipid, and protein compositions are stoichiometrically related to organic CHN (carbon, hydrogen, nitrogen) contents. Elemental CHN analyses of total biomass and ash, therefore, provide a basis for the calculation of proximate biochemical composition and bomb caloric value. The classical nitrogen to protein conversion factor (6.25) should be replaced by 5.8±0.13. A linear relation exists between the mass fraction of non-protein carbon and the carbohydrate and lipid content. Residual water in dry organic matter can be estimated with the additional information derived from hydrogen measurements. The stoichiometric CHN method and direct biochemical analysis agreed within 10% of ash-free dry biomass (for muscle, liver and fat tissue of silver carp; gut contents composed of detritus and algae; commercial fish food). The detrital material, however, had to be corrected for non-protein nitrogen. A linear relationship between bomb caloric value and organic carbon fractions was derived on the basis of thermodynamic and stoichiometric principles, in agreement with experimental data published for bacteria, algae, protozoa and invertebrates. The highly automatic stoichiometric CHN method for the separation of nutrient contents in biomass extends existing ecophysiological concepts for the construction of balanced carbon and nitrogen, as well as biochemical and energy budgets.
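A minimal numeric illustration of the nitrogen-to-protein step, using the revised conversion factor of 5.8 reported above; the sample nitrogen fraction is invented, and the full stoichiometric method additionally uses the carbon and hydrogen fractions.

```python
# Nitrogen-to-protein conversion: revised factor 5.8 vs. classical 6.25.
N_FACTOR_REVISED = 5.8    # from the abstract (+/- 0.13)
N_FACTOR_CLASSIC = 6.25

n_fraction = 0.092        # hypothetical N mass fraction of ash-free dry mass

print("protein (revised):", N_FACTOR_REVISED * n_fraction)   # ~0.534
print("protein (classic):", N_FACTOR_CLASSIC * n_fraction)   # ~0.575
```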
NASA Astrophysics Data System (ADS)
Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.
2007-02-01
The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
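A rough sketch of watershed-based mat detection with scikit-image, assuming a grayscale mosaic in which mats appear bright; the thresholding, seed selection, and input image are placeholders, and the published pipeline adds relaxation-based labelling that is not shown.

```python
# Watershed segmentation seeded from distance-transform peaks.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

image = np.random.rand(256, 256)                 # placeholder mosaic
mask = image > threshold_otsu(image)             # candidate bright regions
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, min_distance=5, labels=mask)
markers = np.zeros(image.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)  # one seed per peak
labels = watershed(-distance, markers, mask=mask)
coverage = (labels > 0).mean()                   # e.g. mat area fraction
```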
Efficient content-based low-altitude images correlated network and strips reconstruction
NASA Astrophysics Data System (ADS)
He, Haiqing; You, Qi; Chen, Xiaoyong
2017-01-01
The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, this is not the expected route to fully automatic photogrammetric data processing. In this paper, we explore a content-based approach that requires neither manual intervention nor external information for strip reconstruction. Feature descriptors in the local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. The images' correlated network is then reconstructed by similarity measures, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
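A condensed sketch of the image-similarity stage under stated assumptions: SIFT descriptors are quantized against a small visual vocabulary (a flat k-means stands in for the vocabulary tree) and images are compared through TF-IDF-weighted histograms. It assumes an OpenCV build that includes SIFT, and the file names and parameters are placeholders.

```python
# SIFT + bag-of-visual-words + TF-IDF image similarity.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

sift = cv2.SIFT_create()

def descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = sift.detectAndCompute(gray, None)
    return des

corpus = [descriptors(p) for p in ["img_001.jpg", "img_002.jpg"]]  # placeholders
vocab = MiniBatchKMeans(n_clusters=100).fit(np.vstack(corpus))

def bow_hist(des):
    words = vocab.predict(des)                       # quantize descriptors
    h = np.bincount(words, minlength=100).astype(float)
    return h / h.sum()

hists = np.array([bow_hist(d) for d in corpus])
idf = np.log(len(hists) / (1e-9 + (hists > 0).sum(axis=0)))
tfidf = hists * idf
norms = np.linalg.norm(tfidf, axis=1)
sim = tfidf @ tfidf.T / (norms[:, None] * norms[None, :])  # cosine matrix
```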
Active suppression of distractors that match the contents of visual working memory.
Sawaki, Risa; Luck, Steven J
2011-08-01
The biased competition theory proposes that items matching the contents of visual working memory will automatically have an advantage in the competition for attention. However, evidence for an automatic effect has been mixed, perhaps because the memory-driven attentional bias can be overcome by top-down suppression. To test this hypothesis, the Pd component of the event-related potential waveform was used as a marker of attentional suppression. While observers maintained a color in working memory, task-irrelevant probe arrays were presented that contained an item matching the color being held in memory. We found that the memory-matching probe elicited a Pd component, indicating that it was being actively suppressed. This result suggests that sensory inputs matching the information being held in visual working memory are automatically detected and generate an "attend-to-me" signal, but this signal can be overridden by an active suppression mechanism to prevent the actual capture of attention.
Use of a New "Moodle" Module for Improving the Teaching of a Basic Course on Computer Architecture
ERIC Educational Resources Information Center
Trenas, M. A.; Ramos, J.; Gutierrez, E. D.; Romero, S.; Corbera, F.
2011-01-01
This paper describes how a new "Moodle" module, called "CTPracticals", is applied to the teaching of the practical content of a basic computer organization course. In the core of the module, an automatic verification engine enables it to process the VHDL designs automatically as they are submitted. Moreover, a straightforward…
Dissociating Working Memory Updating and Automatic Updating: The Reference-Back Paradigm
ERIC Educational Resources Information Center
Rac-Lubashevsky, Rachel; Kessler, Yoav
2016-01-01
Working memory (WM) updating is a controlled process through which relevant information in the environment is selected to enter the gate to WM and substitute its contents. We suggest that there is also an automatic form of updating, which influences performance in many tasks and is primarily manifested in reaction time sequential effects. The goal…
A New "Moodle" Module Supporting Automatic Verification of VHDL-Based Assignments
ERIC Educational Resources Information Center
Gutierrez, Eladio; Trenas, Maria A.; Ramos, Julian; Corbera, Francisco; Romero, Sergio
2010-01-01
This work describes a new "Moodle" module developed to give support to the practical content of a basic computer organization course. This module goes beyond the mere hosting of resources and assignments. It makes use of an automatic checking and verification engine that works on the VHDL designs submitted by the students. The module automatically…
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
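For contrast with FASST, a small sketch of the classical ICA baseline mentioned above: FastICA unmixing two synthetic sources from a two-channel mixture. The signals and the mixing matrix are invented.

```python
# FastICA on a toy two-source, two-channel mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 440 * t)              # tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))       # slow square wave
S = np.c_[s1, s2] + 0.01 * rng.normal(size=(t.size, 2))

A = np.array([[1.0, 0.5], [0.4, 1.0]])        # mixing matrix
X = S @ A.T                                   # two-channel "recording"

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)              # estimated sources
```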
Color image analysis technique for measuring of fat in meat: an application for the meat industry
NASA Astrophysics Data System (ADS)
Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla
2001-04-01
Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify the intramuscular fat content of beef together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with a variability of intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e., classification of the different substances: fat, muscle and connective tissue) was optimized in order to obtain a proper classification and perform subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space, and on the intrinsic fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the Fuzzy c-Means Algorithm, with a Genetic Algorithm. The percentages of the various colors (i.e., substances) within the sample are then determined; the number, size distribution, and spatial distribution of the extracted fat flecks are measured. Measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.
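A bare-bones fuzzy c-means implementation in the 3D color space, as a stand-in for the paper's FCM-plus-genetic-algorithm segmentation; the random pixel samples and the number of classes are placeholders.

```python
# Fuzzy c-means: pixels get soft memberships to c classes
# (e.g., fat, muscle, connective tissue).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))                    # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

pixels = np.random.rand(5000, 3)                 # placeholder RGB samples
U, centers = fuzzy_c_means(pixels)
labels = U.argmax(axis=1)                        # hard labels if needed
```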
"UML Quiz": Automatic Conversion of Web-Based E-Learning Content in Mobile Applications
ERIC Educational Resources Information Center
von Franqué, Alexander; Tellioglu, Hilda
2014-01-01
Many educational institutions use Learning Management Systems to provide e-learning content to their students. This often includes quizzes that can help students to prepare for exams. However, the content is usually web-optimized and not very usable on mobile devices. In this work a native mobile application ("UML Quiz") that imports…
The DAB model of drawing processes
NASA Technical Reports Server (NTRS)
Hochhaus, Larry W.
1989-01-01
The problem of automatic drawing was investigated in two ways. First, a DAB model of drawing processes was introduced. DAB stands for three types of knowledge hypothesized to support drawing abilities, namely, Drawing Knowledge, Assimilated Knowledge, and Base Knowledge. Speculation concerning the content and character of each of these subsystems of the drawing process is introduced and the overall adequacy of the model is evaluated. Second, eight experts were each asked to understand six engineering drawings and to think aloud while doing so. It is anticipated that a concurrent protocol analysis of these interviews can be carried out in the future. Meanwhile, a general description of the videotape database is provided. In conclusion, the DAB model was praised as a worthwhile first step toward solution of a difficult problem, but was considered by and large inadequate to the challenge of automatic drawing. Suggestions for improvements on the model were made.
Content-based image retrieval applied to bone age assessment
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.
2010-03-01
Radiological bone age assessment is based on local image regions of interest (ROI), such as the epiphysis or the area of carpal bones. These are compared to a standardized reference and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis has so far been done mainly by heuristic approaches. Due to high variations in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interaction. On the contrary, epiphyseal regions (eROIs) can be compared to previous cases with known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliable positioning of the eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods as well as the number of considered CBIR references are analyzed. The best results yield an error rate of 1.16 years and a standard deviation of 0.85 years. As the appearance of the hand varies naturally by up to two years, these results clearly demonstrate the applicability of the CBIR approach for bone age estimation.
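A sketch of the similarity measure described above: Pearson-style normalized cross-correlation between 16x16 scaled eROIs, used to rank reference cases; the patches are random placeholders. An age estimate could then be formed from the known ages of the top-ranked references, for example by averaging.

```python
# Normalized cross-correlation between 16x16 eROI patches.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

query = np.random.rand(16, 16)
refs = np.random.rand(100, 16, 16)               # database of scaled eROIs
scores = np.array([ncc(query, r) for r in refs])
best = scores.argsort()[::-1][:10]               # top-10 most similar cases
```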
Supporting the education evidence portal via text mining
Ananiadou, Sophia; Thompson, Paul; Thomas, James; Mu, Tingting; Oliver, Sandy; Rickinson, Mark; Sasaki, Yutaka; Weissenbacher, Davy; McNaught, John
2010-01-01
The UK Education Evidence Portal (eep) provides a single, searchable, point of access to the contents of the websites of 33 organizations relating to education, with the aim of revolutionizing work practices for the education community. Use of the portal alleviates the need to spend time searching multiple resources to find relevant information. However, the combined content of the websites of interest is still very large (over 500 000 documents and growing). This means that searches using the portal can produce very large numbers of hits. As users often have limited time, they would benefit from enhanced methods of performing searches and viewing results, allowing them to drill down to information of interest more efficiently, without having to sift through potentially long lists of irrelevant documents. The Joint Information Systems Committee (JISC)-funded ASSIST project has produced a prototype web interface to demonstrate the applicability of integrating a number of text-mining tools and methods into the eep, to facilitate an enhanced searching, browsing and document-viewing experience. New features include automatic classification of documents according to a taxonomy, automatic clustering of search results according to similar document content, and automatic identification and highlighting of key terms within documents. PMID:20643679
Automatic information timeliness assessment of diabetes web sites by evidence based medicine.
Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba
2014-11-01
Studies on health domain have shown that health websites provide imperfect information and give recommendations which are not up to date with the recent literature even when their last modified dates are quite recent. In this paper, we propose a framework which assesses the timeliness of the content of health websites automatically by evidence based medicine. Our aim is to assess the accordance of website contents with the current literature and information timeliness disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health web sites which were archived between 2006 and 2013 by Archive-it using American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals respectively. Information seekers and web site owners may benefit from the proposed framework in finding relevant and up-to-date diabetes web sites. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Chaspari, Theodora; Soldatos, Constantin; Maragos, Petros
2015-01-01
The development of ecologically valid procedures for collecting reliable and unbiased emotional data is a prerequisite for computer interfaces with social and affective intelligence targeting patients with mental disorders. To this end, the Athens Emotional States Inventory (AESI) proposes the design, recording and validation of an audiovisual database for five emotional states: anger, fear, joy, sadness and neutral. The items of the AESI consist of sentences each having content indicative of the corresponding emotion. Emotional content was assessed through a survey of 40 young participants with a questionnaire following the Latin square design. The emotional sentences that were correctly identified by 85% of the participants were recorded in a soundproof room with microphones and cameras. A preliminary validation of AESI is performed through automatic emotion recognition experiments from speech. The resulting database contains 696 recorded utterances in the Greek language by 20 native speakers and has a total duration of approximately 28 min. Speech classification results yield accuracy up to 75.15% for automatically recognizing the emotions in AESI. These results indicate the usefulness of our approach for collecting emotional data with reliable content, balanced across classes and with reduced environmental variability.
Machine Learning and Radiology
Wang, Shijun; Summers, Ronald M.
2012-01-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077
Towards Automatic Classification of Wikipedia Content
NASA Astrophysics Data System (ADS)
Szymański, Julian
Wikipedia - the Free Encyclopedia encounters the problem of the proper classification of new articles every day. The process of assigning articles to categories is performed manually and is a time-consuming task. It requires knowledge of the Wikipedia structure that is beyond typical editor competence, which leads to human-caused mistakes such as omitted or incorrect assignments of articles to categories. This article presents the application of an SVM classifier for the automatic classification of documents from The Free Encyclopedia. The classifier has been tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories that fit new content entered into Wikipedia.
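A minimal word-content variant of the experiment, assuming a TF-IDF representation and a linear SVM; the toy corpus and categories replace the Wikipedia data.

```python
# TF-IDF features + linear SVM for article categorization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["the metric tensor in general relativity",
         "guitar chords and song structure",
         "spacetime curvature near black holes",
         "melody harmony and rhythm in jazz"]
labels = ["physics", "music", "physics", "music"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["gravitational waves from merging black holes"]))
```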
Pro-Anorexia and Anti-Pro-Anorexia Videos on YouTube: Sentiment Analysis of User Responses.
Oksanen, Atte; Garcia, David; Sirola, Anu; Näsi, Matti; Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka
2015-11-12
Pro-anorexia communities exist online and encourage harmful weight loss and weight control practices, often through emotional content that enforces social ties within these communities. User-generated responses to videos that directly oppose pro-anorexia communities have not yet been researched in depth. The aim was to study emotional reactions to pro-anorexia and anti-pro-anorexia online content on YouTube using sentiment analysis. Using the 50 most popular YouTube pro-anorexia and anti-pro-anorexia user channels as a starting point, we gathered data on users, their videos, and their commentators. A total of 395 anorexia videos and 12,161 comments were analyzed using positive and negative sentiments and ratings submitted by the viewers of the videos. The emotional information was automatically extracted with an automatic sentiment detection tool whose reliability was tested with human coders. Ordinary least squares regression models were used to estimate the strength of sentiments. The models controlled for the number of video views and comments, number of months the video had been on YouTube, duration of the video, uploader's activity as a video commentator, and uploader's physical location by country. The 395 videos had more than 6 million views and comments by almost 8000 users. Anti-pro-anorexia video comments expressed more positive sentiments on a scale of 1 to 5 (adjusted prediction [AP] 2.15, 95% CI 2.11-2.19) than did those of pro-anorexia videos (AP 2.02, 95% CI 1.98-2.06). Anti-pro-anorexia videos also received more likes (AP 181.02, 95% CI 155.19-206.85) than pro-anorexia videos (AP 31.22, 95% CI 31.22-37.81). Negative sentiments and video dislikes were equally distributed in responses to both pro-anorexia and anti-pro-anorexia videos. Despite pro-anorexia content being widespread on YouTube, videos promoting help for anorexia and opposing the pro-anorexia community were more popular, gaining more positive feedback and comments than pro-anorexia videos. Thus, the anti-pro-anorexia content provided a user-generated counterforce against pro-anorexia content on YouTube. Professionals working with young people should be aware of the social media dynamics and versatility of user-generated eating disorder content online.
ERIC Educational Resources Information Center
Sukkarieh, Jane Z.; von Davier, Matthias; Yamamoto, Kentaro
2012-01-01
This document describes a solution to a problem in the automatic content scoring of the multilingual character-by-character highlighting item type. This solution is language independent and represents a significant enhancement. This solution not only facilitates automatic scoring but plays an important role in clustering students' responses;…
Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement
ERIC Educational Resources Information Center
Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos
2015-01-01
How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…
Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.
Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L
2010-07-01
The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt) and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were implemented, and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic-segmentation techniques. Stepwise multiple linear-regression formulas were derived and used to predict TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
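A loose sketch of what an SNR-adaptive exclusion rule could look like: derive a per-image threshold from the echo-level statistics and mask out vessel-like dark pixels before computing tissue parameters. The specific rule and constants are illustrative assumptions, not the published algorithm.

```python
# Illustrative SNR-adaptive threshold for excluding dark (vessel) pixels.
import numpy as np

def segment_parenchyma(img, k=1.0):
    mu, sd = img.mean(), img.std()
    snr = mu / (sd + 1e-9)
    thresh = mu - k * sd / snr          # adapts to image SNR (assumed rule)
    return img > thresh                 # True where liver tissue is kept

us_image = np.random.rand(256, 256)     # placeholder B-mode image
mask = segment_parenchyma(us_image)
mean_echo = us_image[mask].mean()       # UTC parameter on segmented tissue
```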
ERIC Educational Resources Information Center
Vrablecová, Petra; Šimko, Marián
2016-01-01
The domain model is an essential part of an adaptive learning system. For each educational course, it involves educational content and semantics, which is also viewed as a form of conceptual metadata about educational content. Due to the size of a domain model, manual domain model creation is a challenging and demanding task for teachers or…
Streamlining Metadata and Data Management for Evolving Digital Libraries
NASA Astrophysics Data System (ADS)
Clark, D.; Miller, S. P.; Peckman, U.; Smith, J.; Aerni, S.; Helly, J.; Sutton, D.; Chase, A.
2003-12-01
What began two years ago as an effort to stabilize the Scripps Institution of Oceanography (SIO) data archives from more than 700 cruises going back 50 years, has now become the operational fully-searchable "SIOExplorer" digital library, complete with thousands of historic photographs, images, maps, full text documents, binary data files, and 3D visualization experiences, totaling nearly 2 terabytes of digital content. Coping with data diversity and complexity has proven to be more challenging than dealing with large volumes of digital data. SIOExplorer has been built with scalability in mind, so that the addition of new data types and entire new collections may be accomplished with ease. It is a federated system, currently interoperating with three independent data-publishing authorities, each responsible for their own quality control, metadata specifications, and content selection. The IT architecture implemented at the San Diego Supercomputer Center (SDSC) streamlines the integration of additional projects in other disciplines with a suite of metadata management and collection building tools for "arbitrary digital objects." Metadata are automatically harvested from data files into domain-specific metadata blocks, and mapped into various specification standards as needed. Metadata can be browsed and objects can be viewed onscreen or downloaded for further analysis, with automatic proprietary-hold request management.
Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.
Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai
2018-06-01
Landmark retrieval is to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matches. However, the visual content of social images is of large diversity in many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation among the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm to leverage the multimodal contents of social images for landmark retrieval, which integrates feature refinement and a landmark classifier over multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classification model combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach as compared to existing methods.
Sentiment analysis in twitter data using data analytic techniques for predictive modelling
NASA Astrophysics Data System (ADS)
Razia Sulthana, A.; Jaithunbi, A. K.; Sai Ramesh, L.
2018-04-01
Sentiment analysis refers to the natural language processing task of determining whether a piece of text contains subjective information and what kind of subjective information it expresses. The subjective information represents the attitude behind the text: positive, negative or neutral. Understanding the opinions behind user-generated content automatically is of great concern. We analyze a large volume of tweets as big data, classifying the polarity of words, sentences and entire documents. We use linear regression for modelling the relationship between a scalar dependent variable Y and one or more explanatory variables (or independent variables) denoted X. We conduct a series of experiments to test the performance of the system.
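A minimal illustration of the modelling step: ordinary least squares fitting a scalar sentiment score Y against explanatory features X on synthetic data.

```python
# Ordinary least squares via the normal equations (numpy lstsq).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                    # e.g. per-tweet features
true_w = np.array([0.8, -0.3, 0.1])
y = X @ true_w + 0.05 * rng.normal(size=200)     # synthetic sentiment score

Xb = np.c_[np.ones(len(X)), X]                   # add intercept column
w = np.linalg.lstsq(Xb, y, rcond=None)[0]        # OLS estimate
print(w)                                         # ~[0, 0.8, -0.3, 0.1]
```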
Automatic portion estimation and visual refinement in mobile dietary assessment
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2011-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198
Streamlining the supply chain.
Neumann, Lydon
2003-07-01
Effective management of the supply chain requires attention to: Product management--formulary development and maintenance, compliance, clinical involvement, standardization, and demand-matching. Sourcing and contracting--vendor consolidation, GPO portfolio management, price leveling, content management, and direct contracting. Purchasing and payment-cycle--automatic placement, web enablement, centralization, evaluated receipts settlement, and invoice matching. Inventory and distribution management--"unofficial" and "official" locations, vendor-managed inventory, automatic replenishment, and freight management.
Parsing Citations in Biomedical Articles Using Conditional Random Fields
Zhang, Qing; Cao, Yong-Gang; Yu, Hong
2011-01-01
Citations are used ubiquitously in biomedical full-text articles and play an important role in representing both the rhetorical structure and the semantic content of the articles. As a result, text mining systems will benefit significantly from a tool that automatically extracts the content of a citation. In this study, we applied Conditional Random Fields (CRFs), a supervised machine-learning algorithm, to automatically parse a citation into its fields (e.g., Author, Title, Journal, and Year). With a subset of HTML-format open-access PubMed Central articles, we report an overall 97.95% F1-score. The citation parser can be accessed at: http://www.cs.uwm.edu/~qing/projects/cithit/index.html. PMID:21419403
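As a rough sketch of how such a parser can be assembled, the example below trains a CRF to tag citation tokens with field labels using the sklearn-crfsuite package. The library choice, the feature set, and the single toy training citation are all illustrative assumptions; the abstract does not name the implementation used.

import sklearn_crfsuite

def token_features(tokens, i):
    tok = tokens[i]
    return {"lower": tok.lower(), "is_digit": tok.isdigit(),
            "is_title": tok.istitle(), "has_comma": "," in tok,
            "rel_pos": i / len(tokens)}   # position helps separate fields

# one toy training citation, tokenized, with per-token field labels
tokens = ["Zhang", "Q", ",", "Cao", "YG", ",", "Yu", "H", ".",
          "Parsing", "citations", ".", "J", "Biomed", "Inform", ".", "2011", "."]
labels = ["AUTHOR"] * 9 + ["TITLE"] * 3 + ["JOURNAL"] * 4 + ["YEAR"] * 2

X = [[token_features(tokens, i) for i in range(len(tokens))]]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, [labels])
print(crf.predict(X)[0])

In practice the model is trained on many labeled citations, and the features are richer (neighboring tokens, punctuation patterns, gazetteers of journal names).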
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
Multimodal Excitatory Interfaces with Automatic Content Classification
NASA Astrophysics Data System (ADS)
Williamson, John; Murray-Smith, Roderick
We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.
Social and Collaborative Interactions for Educational Content Enrichment in ULEs
ERIC Educational Resources Information Center
Araújo, Rafael D.; Brant-Ribeiro, Taffarel; Mendonça, Igor E. S.; Mendes, Miller M.; Dorça, Fabiano A.; Cattelan, Renan G.
2017-01-01
This article presents a social and collaborative model for content enrichment in Ubiquitous Learning Environments. Designed as a loosely coupled software architecture, the proposed model was implemented and integrated into the Classroom eXperience, a multimedia capture platform for educational environments. After automatically recording a lecture…
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Image fusion has recently come to play a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular disease and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of a brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme applies different fusion methods to the high- and low-frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy defined on the sample correlation of the curvelet transform coefficients. In the fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients; for the low-frequency coefficients, a maximum selection rule based on a local energy criterion is applied to improve visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common, basic fusion algorithms.
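The two fusion rules can be sketched independently of the curvelet machinery. Below, a and b are corresponding coefficient subbands from the two source images; the correlation threshold, window size, and weighting scheme are placeholders rather than the paper's tuned values.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high(a, b, corr_thresh=0.6):
    # weighted averaging when the subbands are strongly correlated,
    # maximum-absolute selection otherwise (a simplified reading of the
    # sample-correlation content selection strategy)
    corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    if corr > corr_thresh:
        w = np.abs(a) / (np.abs(a) + np.abs(b) + 1e-12)
        return w * a + (1.0 - w) * b
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_low(a, b, win=3):
    # maximum selection driven by local energy in a win-by-win window
    ea = uniform_filter(a ** 2, size=win)
    eb = uniform_filter(b ** 2, size=win)
    return np.where(ea >= eb, a, b)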
The roots of stereotype threat: when automatic associations disrupt girls' math performance.
Galdi, Silvia; Cadinu, Mara; Tomasetto, Carlo
2014-01-01
Although stereotype awareness is a prerequisite for stereotype threat effects (Steele & Aronson, 1995), research showed girls' deficit under stereotype threat before the emergence of math-gender stereotype awareness, and in the absence of stereotype endorsement. In a study including 240 six-year-old children, this paradox was addressed by testing whether automatic associations trigger stereotype threat in young girls. Whereas no indicators were found that children endorsed the math-gender stereotype, girls, but not boys, showed automatic associations consistent with the stereotype. Moreover, results showed that girls' automatic associations varied as a function of a manipulation regarding the stereotype content. Importantly, girls' math performance decreased in a stereotype-consistent, relative to a stereotype-inconsistent, condition and automatic associations mediated the relation between stereotype threat and performance. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Clothes Dryer Automatic Termination Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
TeGrotenhuis, Ward E.
Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.
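The mechanics of over-drying are easy to see in a toy simulation: an automatic cycle stops at the target RMC, while a longer timed cycle keeps heating already-dry clothes. All constants below (drying rate, power draw, timer overrun) are invented for illustration.

target_rmc = 0.05            # stop when remaining moisture content hits 5%
power_kw = 5.0               # assumed heater-plus-motor draw
rmc, minutes = 0.60, 0
while rmc > target_rmc:      # assumed exponential drying, 3% loss per minute
    rmc *= 0.97
    minutes += 1

auto_kwh = power_kw * minutes / 60.0
timed_kwh = power_kw * (minutes + 15) / 60.0   # timer runs 15 min too long
print(f"dry at {minutes} min; over-drying wastes "
      f"{(timed_kwh - auto_kwh) / timed_kwh:.0%} of the timed cycle's energy")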
Spectral mapping of soil organic matter
NASA Technical Reports Server (NTRS)
Kristof, S. J.; Baumgardner, M. F.; Johannsen, C. J.
1974-01-01
Multispectral remote sensing data were examined for use in the mapping of soil organic matter content. Computer-implemented pattern recognition techniques were used to analyze data collected in May 1969 and May 1970 by an airborne multispectral scanner over a 40-km flightline. Two fields within the flightline were selected for intensive study. Approximately 400 surface soil samples from these fields were obtained for organic matter analysis. The analytical data were used as training sets for computer-implemented analysis of the spectral data. It was found that within the geographical limitations included in this study, multispectral data and automatic data processing techniques could be used very effectively to delineate and map surface soils areas containing different levels of soil organic matter.
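Restated in modern terms, the pipeline is supervised per-pixel classification of multispectral band values against lab-analyzed training samples. The sketch below uses a random forest as a stand-in (the 1969-70 study used the pattern recognition software of its day), and the reflectance values and labels are fabricated.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# rows: per-pixel reflectance in four scanner bands; darker soils tend to
# carry more organic matter
X = np.array([[0.12, 0.18, 0.25, 0.30],
              [0.10, 0.15, 0.22, 0.27],
              [0.06, 0.09, 0.14, 0.18],
              [0.05, 0.08, 0.12, 0.16]])
y = ["low", "low", "high", "high"]    # organic-matter level from lab analysis

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.07, 0.10, 0.15, 0.19]]))   # -> ['high']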
Audio-guided audiovisual data segmentation, indexing, and retrieval
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1998-12-01
While current approaches for video segmentation and indexing are mostly focused on visual information, audio signals may actually play a primary role in video content parsing. In this paper, we present an approach for automatic segmentation, indexing, and retrieval of audiovisual data, based on audio content analysis. The accompanying audio signal of audiovisual data is first segmented and classified into basic types, i.e., speech, music, environmental sound, and silence. This coarse-level segmentation and indexing step is based upon morphological and statistical analysis of several short-term features of the audio signals. Then, environmental sounds are classified into finer classes, such as applause, explosions, bird sounds, etc. This fine-level classification and indexing step is based upon time-frequency analysis of audio signals and the use of the hidden Markov model as the classifier. On top of this archiving scheme, an audiovisual data retrieval system is proposed. Experimental results show that the proposed approach has an accuracy rate higher than 90 percent for the coarse-level classification, and higher than 85 percent for the fine-level classification. Examples of audiovisual data segmentation and retrieval are also provided.
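The coarse level of the scheme rests on cheap short-term features. A minimal sketch, with placeholder thresholds standing in for the paper's morphological and statistical analysis:

import numpy as np

def short_term_features(x, sr, frame_ms=20):
    n = int(sr * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)                     # loudness proxy
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)
    return energy, zcr

def coarse_label(energy, zcr, e_sil=1e-4, z_speech=0.1):
    labels = []
    for e, z in zip(energy, zcr):
        if e < e_sil:
            labels.append("silence")
        elif z > z_speech:        # speech alternates voiced/unvoiced segments
            labels.append("speech")
        else:
            labels.append("music/environmental")
    return labels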
Application of automatic image analysis in wood science
Charles W. McMillin
1982-01-01
In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...
NASA Astrophysics Data System (ADS)
Sa, Qila; Wang, Zhihui
2018-03-01
At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using a video's own features to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm: frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is realized.
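One plausible rendering of the pipeline, with frame histograms standing in for the unspecified visual features and a simple rule standing in for the adaptive threshold update:

import numpy as np
from sklearn.cluster import KMeans

def shot_boundaries(hists):
    # hists: (n_frames, n_bins) per-frame colour histograms
    d = np.abs(np.diff(hists, axis=0)).sum(axis=1)   # frame-to-frame change
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(d.reshape(-1, 1))
    big = int(km.cluster_centers_.argmax())          # "significant change" cluster
    cand = np.where(km.labels_ == big)[0]
    t_high = d[cand].mean() + d[cand].std()          # placeholder adaptive
    t_low = 0.5 * t_high                             # dual thresholds
    abrupt = [int(i) for i in cand if d[i] >= t_high]
    gradual = [int(i) for i in cand if t_low <= d[i] < t_high]
    return abrupt, gradual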
ESO/ST-ECF Data Analysis Workshop, 5th, Garching, Germany, Apr. 26, 27, 1993, Proceedings
NASA Astrophysics Data System (ADS)
Grosbol, Preben; de Ruijsscher, Resy
1993-01-01
Various papers on astronomical data analysis are presented. Individual topics addressed include: surface photometry of early-type galaxies, wavelet transform and adaptive filtering, package for surface photometry of galaxies, calibration of large-field mosaics, surface photometry of galaxies with HST, wavefront-supported image deconvolution, seeing effects on elliptical galaxies, multiple algorithms deconvolution program, enhancement of Skylab X-ray images, MIDAS procedures for the image analysis of E-S0 galaxies, photometric data reductions under MIDAS, crowded field photometry with deconvolved images, the DENIS Deep Near Infrared Survey. Also discussed are: analysis of astronomical time series, detection of low-amplitude stellar pulsations, new SOT method for frequency analysis, chaotic attractor reconstruction and applications to variable stars, reconstructing a 1D signal from irregular samples, automatic analysis for time series with large gaps, prospects for content-based image retrieval, redshift survey in the South Galactic Pole Region.
intelligentCAPTURE 1.0 Adds Tables of Content to Library Catalogues and Improves Retrieval.
ERIC Educational Resources Information Center
Hauer, Manfred; Simedy, Walton
2002-01-01
Describes an online library catalog developed for an Austrian scientific library that includes tables of contents in addition to the standard bibliographic information in order to increase relevance for searchers. Discusses the technology involved, including OCR (Optical Character Recognition) and automatic indexing techniques; weighted…
Integrating personalized medical test contents with XML and XSL-FO.
Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas
2011-03-01
In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how organizational challenges of realizing faculty-wide personalized tests were addressed by implementation of a specialized software module to automatically generate test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Using an open-source print formatter, the software module consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly, the module permitted individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale.
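Step (2) is straightforward to sketch with the standard library; the element names and the per-student shuffling below are hypothetical, since the paper's schema is not reproduced here. The resulting XML intermediate would then be rendered to paginated PDF through an XSL-FO processor such as Apache FOP.

import random
import xml.etree.ElementTree as ET

def build_test_sheet(student_id, questions, seed):
    random.Random(seed).shuffle(questions)      # individual item order
    root = ET.Element("testsheet", student=student_id)
    for q in questions:
        item = ET.SubElement(root, "item", id=q["id"])
        ET.SubElement(item, "stem").text = q["stem"]
        for choice in q["choices"]:
            ET.SubElement(item, "choice").text = choice
    return ET.tostring(root, encoding="unicode")

qs = [{"id": "q1", "stem": "Which organ ...?", "choices": ["Liver", "Lung"]}]
print(build_test_sheet("1234567", qs, seed=42))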
Doan, Son; Ritchart, Amanda; Perry, Nicholas; Chaparro, Juan D; Conway, Mike
2017-06-13
Stress is a contributing factor to many major health problems in the United States, such as heart disease, depression, and autoimmune diseases. Relaxation is often recommended in mental health treatment as a frontline strategy to reduce stress, thereby improving health conditions. Twitter is a microblog platform that allows users to post their own personal messages (tweets), including their expressions about feelings and actions related to stress and stress management (e.g., relaxing). While Twitter is increasingly used as a source of data for understanding mental health from a population perspective, the specific issue of stress, as manifested on Twitter, has not yet been the focus of any systematic study. The objective of our study was to understand how people express their feelings of stress and relaxation through Twitter messages. In addition, we aimed at investigating automated natural language processing methods to (1) classify stress versus nonstress and relaxation versus nonrelaxation tweets, and (2) identify first-hand experience (that is, who is the experiencer) in stress and relaxation tweets. We first performed a qualitative content analysis of 1326 and 781 tweets containing the keywords "stress" and "relax," respectively. We then investigated the use of machine learning algorithms, in particular naive Bayes and support vector machines, to automatically classify tweets as stress versus nonstress and relaxation versus nonrelaxation. Finally, we applied these classifiers to sample datasets drawn from 4 cities in the United States (Los Angeles, New York, San Diego, and San Francisco) obtained from Twitter's streaming application programming interface, with the goal of evaluating the extent of any correlation between our automatic classification of tweets and results from public stress surveys. Content analysis showed that the most frequent topic of stress tweets was education, followed by work and social relationships. The most frequent topic of relaxation tweets was rest & vacation, followed by nature and water. When we applied the classifiers to the cities dataset, the proportion of stress tweets in New York and San Diego was substantially higher than that in Los Angeles and San Francisco. In addition, we found that characteristic expressions of stress and relaxation varied for each city based on its geolocation. This content analysis and infodemiology study revealed that Twitter, when used in conjunction with natural language processing techniques, is a useful data source for understanding stress and stress management strategies, and can potentially supplement infrequently collected survey-based stress data. ©Son Doan, Amanda Ritchart, Nicholas Perry, Juan D Chaparro, Mike Conway. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 13.06.2017.
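A minimal version of the classification step, using a naive Bayes pipeline on toy tweets (the study also evaluated support vector machines; the training data here is invented):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["finals week is killing me, so much stress",
          "deadlines at work are stressing me out",
          "stress fracture update from last night's game",  # keyword, no stress
          "relaxing on the beach all afternoon"]
labels = ["stress", "stress", "nonstress", "nonstress"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(tweets, labels)
print(clf.predict(["so stressed about my exam tomorrow"]))   # -> ['stress']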
Analysis of the QRS complex for apnea-bradycardia characterization in preterm infants
Altuve, Miguel; Carrault, Guy; Cruz, Julio; Beuchée, Alain; Pladys, Patrick; Hernandez, Alfredo I.
2009-01-01
This work presents an analysis of the information content of new features derived from the electrocardiogram (ECG) for the characterization of apnea-bradycardia events in preterm infants. Automatic beat detection and segmentation methods have been adapted to the ECG signals of preterm infants through the application of two evolutionary algorithms. ECG data acquired from 32 preterm infants with persistent apnea-bradycardia have been used for quantitative evaluation. The adaptation procedure led to an improved sensitivity and positive predictive value, and a reduced jitter, for the detection of the R-wave, QRS onset, QRS offset, and iso-electric level. Additionally, time series representing the RR interval, R-wave amplitude, and QRS duration were automatically extracted for periods at rest and before, during, and after apnea-bradycardia episodes. Significant variations (p<0.05) were observed for all time series when comparing the difference between values at rest and values just before the bradycardia event with the difference between values at rest and values during the bradycardia event. These results reveal changes in R-wave amplitude and QRS duration at the onset and termination of apnea-bradycardia episodes, which could be potentially useful for the early detection and characterization of these episodes. PMID:19963984
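Once the R-wave and QRS boundaries are detected, the three beat-to-beat series follow directly; the detection times below are fabricated to mimic a slowing rhythm at bradycardia onset:

import numpy as np

r_peaks = np.array([0.00, 0.42, 0.86, 1.35, 1.95])   # R-wave times (s)
r_amp = np.array([1.10, 1.00, 0.90, 0.85, 0.80])     # R-wave amplitudes (mV)
qrs_on = r_peaks - 0.04                              # QRS onsets (s)
qrs_off = r_peaks + 0.04                             # QRS offsets (s)

rr = np.diff(r_peaks)          # RR-interval series: [0.42 0.44 0.49 0.60]
qrs_dur = qrs_off - qrs_on     # QRS-duration series
print(rr, r_amp, qrs_dur)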
Machine learning and radiology.
Wang, Shijun; Summers, Ronald M
2012-07-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.
Automatically assisting human memory: a SenseCam browser.
Doherty, Aiden R; Moulin, Chris J A; Smeaton, Alan F
2011-10-01
SenseCams have many potential applications as tools for lifelogging, including the possibility of use as a memory rehabilitation tool. Given that a SenseCam can log hundreds of thousands of images per year, it is critical that these be presented to the viewer in a manner that supports the aims of memory rehabilitation. In this article we report a software browser constructed with the aim of using the characteristics of memory to organise SenseCam images into a form that makes the wealth of information stored on SenseCam more accessible. To enable a large amount of visual information to be easily and quickly assimilated by a user, we apply a series of automatic content analysis techniques to structure the images into "events", suggest their relative importance, and select representative images for each. This minimises effort when browsing and searching. We provide anecdotes on use of such a system and emphasise the need for SenseCam images to be meaningfully sorted using such a browser.
Automatic gender detection of dream reports: A promising approach.
Wong, Christina; Amini, Reza; De Koninck, Joseph
2016-08-01
A computer program was developed in an attempt to differentiate the dreams of males from females. Hypothesized gender predictors were based on previous literature concerning both dream content and written language features. Dream reports from home-collected dream diaries of 100 male (144 dreams) and 100 female (144 dreams) adolescent Anglophones were matched for equal length. They were first scored with the Hall and Van de Castle (HVDC) scales and quantified using DreamSAT. Two male and two female undergraduate students were asked to read all dreams and predict the dreamer's gender; they averaged a pairwise percent-correct gender prediction of 75.8% (κ=0.516), while the computer program's automatic analysis achieved an accuracy of 74.5% (κ=0.492), both higher than the 50% chance level (κ=0.00). The prediction levels were maintained when dreams containing obvious gender identifiers were eliminated, and integration of the HVDC scales did not improve prediction. Copyright © 2016 Elsevier Inc. All rights reserved.
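The reported kappas follow from the percent agreements under the usual two-category chance level of $p_e = 0.5$ (an assumption on our part, but consistent with the numbers given):

\[
\kappa = \frac{p_o - p_e}{1 - p_e} = \frac{0.758 - 0.5}{1 - 0.5} = 0.516
\]

and the program's 74.5% gives $(0.745 - 0.5)/0.5 = 0.490$, matching the reported κ = 0.492 up to rounding of the percent-correct figure.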
Content-aware automatic cropping for consumer photos
NASA Astrophysics Data System (ADS)
Tang, Hao; Tretter, Daniel; Lin, Qian
2013-03-01
Consumer photos are typically authored once, but need to be retargeted for reuse in various situations. These include printing a photo on different size paper, changing the size and aspect ratio of an embedded photo to accommodate the dynamic content layout of web pages or documents, adapting a large photo for browsing on small displays such as mobile phone screens, and improving the aesthetic quality of a photo that was badly composed at the capture time. In this paper, we propose a novel, effective, and comprehensive content-aware automatic cropping (hereafter referred to as "autocrop") method for consumer photos to achieve the above purposes. Our autocrop method combines the state-of-the-art context-aware saliency detection algorithm, which aims to infer the likely intent of the photographer, and the "branch-and-bound" efficient subwindow search optimization technique, which seeks to locate the globally optimal cropping rectangle in a fast manner. Unlike most current autocrop methods, which can only crop a photo into an arbitrary rectangle, our autocrop method can automatically crop a photo into either a rectangle of arbitrary dimensions or a rectangle of the desired aspect ratio specified by the user. The aggressiveness of the cropping operation may be either automatically determined by the method or manually indicated by the user with ease. In addition, our autocrop method is extended to support the cropping of a photo into non-rectangular shapes such as polygons of any number of sides. It may also be potentially extended to return multiple cropping suggestions, which will enable the creation of new photos to enrich the original photo collections. Our experimental results show that the proposed autocrop method in this paper can generate high-quality crops for consumer photos of various types.
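The optimization step can be sketched with an integral image, which makes every candidate window's saliency sum an O(1) lookup. The exhaustive scan below is a simple stand-in for the branch-and-bound efficient subwindow search the paper employs, and the frac knob loosely corresponds to the cropping aggressiveness the abstract mentions:

import numpy as np

def best_crop(saliency, aspect, frac=0.9, step=8):
    """Smallest crop of the given aspect ratio (w/h) that captures at
    least `frac` of the total saliency."""
    H, W = saliency.shape
    ii = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    total = ii[-1, -1]
    for h in range(step, H + 1, step):          # smallest windows first
        w = int(round(h * aspect))
        if not 1 <= w <= W:
            continue
        for y in range(0, H - h + 1, step):
            for x in range(0, W - w + 1, step):
                s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
                if s >= frac * total:
                    return (x, y, w, h)
    return (0, 0, W, H)                         # fall back to the full photo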
Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M.
2017-01-01
Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, and sparse signal decomposition and reconstruction. The noise detection and identification is performed based on a moving average filter, the first-order difference, and temporal features such as the number of turning points, maximum absolute amplitude, zero crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noise such as baseline wander, power-line interference, muscle artefacts, and their combinations without distorting the morphological content of the local waves of the ECG signal. PMID:28529758
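The detection stage leans on simple temporal statistics. A sketch of the kind of features involved (the feature list comes from the abstract; the implementations and the heuristic interpretations are ours):

import numpy as np

def noise_features(x):
    d1 = np.diff(x)                                 # first-order difference
    turning = int(np.sum(d1[:-1] * d1[1:] < 0))     # number of turning points
    zero_cross = int(np.sum(x[:-1] * x[1:] < 0))    # zero crossings
    max_abs = float(np.max(np.abs(x)))              # max absolute amplitude
    x0 = x - x.mean()
    ac1 = float(x0[:-1] @ x0[1:] / (x0 @ x0 + 1e-12))  # lag-1 autocorrelation
    return {"turning_points": turning, "zero_crossings": zero_cross,
            "max_abs": max_abs, "autocorr_lag1": ac1}

# heuristically: baseline wander -> few zero crossings, autocorrelation near 1;
# muscle artefact -> many turning points, low autocorrelation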
ERIC Educational Resources Information Center
Kuhn, Stephanie A. Contrucci; Triggs, Mandy
2009-01-01
Self-injurious behavior (SIB) that occurs at high rates across all conditions of a functional analysis can suggest automatic or multiple functions. In the current study, we conducted a functional analysis for 1 individual with SIB. Results indicated that SIB was, at least in part, maintained by automatic reinforcement. Further analyses using…
Big data sharing and analysis to advance research in post-traumatic epilepsy.
Duncan, Dominique; Vespa, Paul; Pitkanen, Asla; Braimah, Adebayo; Lapinlampi, Nina; Toga, Arthur W
2018-06-01
We describe the infrastructure and functionality of a centralized preclinical and clinical data repository and analytic platform that supports importing heterogeneous multi-modal data, automatically and manually linking data across modalities and sites, and searching content. We have developed and applied innovative image and electrophysiology processing methods to identify candidate biomarkers from MRI, EEG, and multi-modal data. Based on heterogeneous biomarkers, we present novel analytic tools designed to study epileptogenesis in animal models and humans, with the goal of tracking the probability of developing epilepsy over time. Copyright © 2017. Published by Elsevier Inc.
Inhibitory mechanism of the matching heuristic in syllogistic reasoning.
Tse, Ping Ping; Moreno Ríos, Sergio; García-Madruga, Juan Antonio; Bajo Molina, María Teresa
2014-11-01
A number of heuristic-based hypotheses have been proposed to explain how people solve syllogisms with automatic processes. In particular, the matching heuristic employs the congruency of the quantifiers in a syllogism—by matching the quantifier of the conclusion with those of the two premises. When the heuristic leads to an invalid conclusion, successful solving of these conflict problems requires the inhibition of automatic heuristic processing. Accordingly, if the automatic processing were based on processing the set of quantifiers, no semantic contents would be inhibited. The mental model theory, however, suggests that people reason using mental models, which always involves semantic processing. Therefore, whatever inhibition occurs in the processing implies the inhibition of the semantic contents. We manipulated the validity of the syllogism and the congruency of the quantifier of its conclusion with those of the two premises according to the matching heuristic. A subsequent lexical decision task (LDT) with related words in the conclusion was used to test any inhibition of the semantic contents after each syllogistic evaluation trial. In the LDT, the facilitation effect of semantic priming diminished after correctly solved conflict syllogisms (match-invalid or mismatch-valid), but was intact after no-conflict syllogisms. The results suggest the involvement of an inhibitory mechanism of semantic contents in syllogistic reasoning when there is a conflict between the output of the syntactic heuristic and actual validity. Our results do not support a uniquely syntactic process of syllogistic reasoning but fit with the predictions based on mental model theory. Copyright © 2014 Elsevier B.V. All rights reserved.
Automated content and quality assessment of full-motion-video for the generation of meta data
NASA Astrophysics Data System (ADS)
Harguess, Josh
2015-05-01
Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
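A first cut at the motion-based tags takes a few lines with OpenCV's Farneback optical flow; the thresholds below are placeholders, and the large-occlusion test is omitted:

import cv2
import numpy as np

def tag_clip(frames_gray, low=0.1, high=8.0):
    """Label a clip 'no-motion', 'fast-camera-motion', or 'usable' from
    the mean optical-flow magnitude between consecutive frames."""
    mags = []
    for prev, nxt in zip(frames_gray, frames_gray[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(float(np.linalg.norm(flow, axis=2).mean()))
    mean_mag = float(np.mean(mags))
    if mean_mag < low:
        return "no-motion"
    if mean_mag > high:
        return "fast-camera-motion"
    return "usable"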
Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images
NASA Astrophysics Data System (ADS)
Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan
2012-02-01
Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.
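The regression-forest idea can be miniaturized as follows: each voxel contributes appearance features, the forest regresses that voxel's offset to the structure centre, and averaging the per-voxel votes localizes the structure. Everything below (the features and the linear ground truth) is synthetic, not the paper's setup:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 10))      # e.g. mean intensities of displaced
                                        # cuboid probes around each voxel
offsets = feats[:, :3] * 5.0            # synthetic voxel-to-centre offsets (mm)

rrf = RandomForestRegressor(n_estimators=100, random_state=0).fit(feats, offsets)

positions = rng.uniform(0, 200, size=(500, 3))   # voxel positions (mm)
votes = positions + rrf.predict(feats)           # each voxel casts a vote
print(votes.mean(axis=0))                        # aggregated centre estimate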
Tidal Analysis and Arrival Process Mining Using Automatic Identification System (AIS) Data
2017-01-01
The data were stored in files organized by location and processed using the Python programming language (van Rossum and Drake 2001) and the Pandas data analysis library. (Coastal Inlets Research Program technical report ERDC/CHL TR-17-2, Brandan M. Scully, Coastal and Hydraulics Laboratory, January 2017.)
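In that spirit, a pandas sketch of arrival mining from an AIS extract; the file name, the column names, and the "speed drops below 0.5 kn" arrival rule are all assumptions for illustration:

import pandas as pd

ais = pd.read_csv("ais_extract.csv", parse_dates=["timestamp"])
ais = ais.sort_values(["mmsi", "timestamp"])

slow = ais[ais["sog"] < 0.5]                        # near-stationary reports
arrivals = slow.groupby("mmsi")["timestamp"].min()  # first slow fix per vessel
arrivals_per_day = arrivals.dt.floor("D").value_counts().sort_index()
print(arrivals_per_day.head())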
Linguistically informed digital fingerprints for text
NASA Astrophysics Data System (ADS)
Uzuner, Özlem
2006-02-01
Digital fingerprinting, watermarking, and tracking technologies have gained importance in the recent years in response to growing problems such as digital copyright infringement. While fingerprints and watermarks can be generated in many different ways, use of natural language processing for these purposes has so far been limited. Measuring similarity of literary works for automatic copyright infringement detection requires identifying and comparing creative expression of content in documents. In this paper, we present a linguistic approach to automatically fingerprinting novels based on their expression of content. We use natural language processing techniques to generate "expression fingerprints". These fingerprints consist of both syntactic and semantic elements of language, i.e., syntactic and semantic elements of expression. Our experiments indicate that syntactic and semantic elements of expression enable accurate identification of novels and their paraphrases, providing a significant improvement over techniques used in text classification literature for automatic copy recognition. We show that these elements of expression can be used to fingerprint, label, or watermark works; they represent features that are essential to the character of works and that remain fairly consistent in the works even when works are paraphrased. These features can be directly extracted from the contents of the works on demand and can be used to recognize works that would not be correctly identified either in the absence of pre-existing labels or by verbatim-copy detectors.
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as the key bottleneck for applying such models in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameter values; a detailed technical route based on RSA and MCS is presented. A software module for automatic identification of water pipe network model parameters was developed. Finally, taking a typical water pipe network as a case, a case study of automatic model parameter identification was conducted and satisfactory results were achieved.
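The MCS step reduces to sampling candidate parameter sets, scoring each against SCADA observations, and keeping the best-fitting fraction. The sketch below is generic, with a placeholder hydraulic solver (e.g. an EPANET wrapper) and an assumed Hazen-Williams roughness range:

import numpy as np

def identify(simulate, observed_pressure, n_pipes, n_samples=5000, keep=0.01):
    rng = np.random.default_rng(0)
    C = rng.uniform(60, 150, size=(n_samples, n_pipes))   # Hazen-Williams C
    rmse = np.array([np.sqrt(np.mean((simulate(c) - observed_pressure) ** 2))
                     for c in C])
    best = C[np.argsort(rmse)[: max(1, int(keep * n_samples))]]
    # the per-parameter spread of the retained samples also flags sensitive
    # parameters, in the spirit of RSA's behavioural/non-behavioural split
    return best.mean(axis=0), best.std(axis=0)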
School Fire Protection: Contents Count
ERIC Educational Resources Information Center
American School and University, 1976
1976-01-01
The heart of a fire protection system is the sprinkler system. National Fire Protection Association (NFPA) statistics show that automatic sprinklers dramatically reduce fire damage and loss of life. (Author)
A content-based retrieval of mammographic masses using the curvelet descriptor
NASA Astrophysics Data System (ADS)
Narváez, Fabian; Díaz, Gloria; Gómez, Francisco; Romero, Eduardo
2012-03-01
Computer-aided diagnosis (CAD) using content-based image retrieval (CBIR) strategies has become an important research area. This paper presents a retrieval strategy that automatically recovers mammographic masses from a virtual repository of mammograms. Unlike other approaches, we do not attempt to segment masses; instead, we characterize regions previously selected by an expert. These regions are first curvelet transformed and further characterized by approximating the marginal curvelet subband distributions with a generalized Gaussian density (GGD). The content-based retrieval strategy searches for similar regions in a database using the Kullback-Leibler divergence as the similarity measure between distributions. The effectiveness of the proposed descriptor was assessed by comparing the automatically assigned label with ground truth available in the DDSM database. A total of 380 masses with different shapes, sizes, and margins were used for evaluation, resulting in a mean average precision rate of 89.3% and a recall rate of 75.2% for the retrieval task.
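For GGDs the Kullback-Leibler divergence has a closed form (Do and Vetterli, 2002), so the similarity measure costs a few gamma-function evaluations per subband rather than a numerical integral:

from math import gamma, log

def kld_ggd(a1, b1, a2, b2):
    """KL divergence between GGDs with scales a1, a2 and shapes b1, b2;
    retrieval ranks candidates by the sum of this quantity over subbands."""
    return (log((b1 * a2 * gamma(1 / b2)) / (b2 * a1 * gamma(1 / b1)))
            + (a1 / a2) ** b2 * gamma((b2 + 1) / b1) / gamma(1 / b1)
            - 1 / b1)

print(kld_ggd(1.0, 2.0, 1.0, 2.0))   # identical Gaussians -> 0.0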
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Background: Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMAs) is often the prerequisite for quantitative analysis; however, overlapping cells usually pose significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level-set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature, and measured the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results: The method was tested with 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on GPU; the parallel implementation is 22 times faster than its sequential C/C++ implementation. Conclusion: The proposed algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is proven to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139
Reflective and Non-conscious Responses to Exercise Images
Cope, Kathryn; Vandelanotte, Corneel; Short, Camille E.; Conroy, David E.; Rhodes, Ryan E.; Jackson, Ben; Dimmock, James A.; Rebar, Amanda L.
2018-01-01
Images portraying exercise are commonly used to promote exercise behavior and to measure automatic associations of exercise (e.g., via implicit association tests). The effectiveness of these promotion efforts and the validity of measurement techniques partially rely on the untested assumption that the images being used are perceived by the general public as portrayals of exercise that is pleasant and motivating. The aim of this study was to investigate how content of images impacted people's automatic and reflective evaluations of exercise images. Participants (N = 90) completed a response time categorization task (similar to the implicit association test) to capture how automatically people perceived each image as relevant to Exercise or Not exercise. Participants also self-reported their evaluations of the images using visual analog scales with the anchors: Exercise/Not exercise, Does not motivate me to exercise/Motivates me to exercise, Pleasant/Unpleasant, and Energizing/Deactivating. People tended to more strongly automatically associate images with exercise if the images were of an outdoor setting, presented sport (as opposed to active labor or gym-based) activities, and included young (as opposed to middle-aged) adults. People tended to reflectively find images of young adults more motivating and relevant to exercise than images of older adults. The content of exercise images is an often overlooked source of systematic variability that may impact measurement validity and intervention effectiveness. PMID:29375419
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.
Frieauff, W; Martus, H J; Suter, W; Elhajouji, A
2013-01-01
The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and a specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure as well as flow cytometric approaches have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis, where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as cytotoxicity parameter, as well as for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The possibility to analyse 24 slides within 65 h by automatic analysis over the weekend and the high reproducibility of the results make automatic image processing a powerful tool for the micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS which is supporting various assays at Novartis.
Delineating Subtypes of Self-Injurious Behavior Maintained by Automatic Reinforcement
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.
2016-01-01
Self-injurious behavior (SIB) is maintained by automatic reinforcement in roughly 25% of cases. Automatically reinforced SIB typically has been considered a single functional category, and is less understood than socially reinforced SIB. Subtyping automatically reinforced SIB into functional categories has the potential to guide the development of more targeted interventions and increase our understanding of its biological underpinnings. The current study involved an analysis of 39 individuals with automatically reinforced SIB and a comparison group of 13 individuals with socially reinforced SIB. Automatically reinforced SIB was categorized into 3 subtypes based on patterns of responding in the functional analysis and the presence of self-restraint. These response features were selected as the basis for subtyping on the premise that they could reflect functional properties of SIB unique to each subtype. Analysis of treatment data revealed important differences across subtypes and provides preliminary support to warrant additional research on this proposed subtyping model. PMID:26223959
Structural scene analysis and content-based image retrieval applied to bone age assessment
NASA Astrophysics Data System (ADS)
Fischer, Benedikt; Brosig, André; Deserno, Thomas M.; Ott, Bastian; Günther, Rolf W.
2009-02-01
Radiological bone age assessment is based on global or local image regions of interest (ROIs), such as the epiphyseal regions or the area of the carpal bones. Usually, these regions are compared to a standardized reference, and a score determining skeletal maturity is calculated. For computer-assisted diagnosis, automatic ROI extraction has so far been done with heuristic approaches. In this work, we apply a high-level scene analysis approach to knowledge-based ROI segmentation. Based on a set of 100 reference images from the IRMA database, a so-called structural prototype (SP) is trained. In this graph-based structure, the 14 phalanges and 5 metacarpal bones are represented by nodes, with associated location, shape, and texture parameters modeled by Gaussians. Accordingly, the Gaussians describing the relative positions, relative orientations, and other relative parameters between two nodes are associated with the edges. Thereafter, segmentation of a hand radiograph proceeds in several steps: (i) a multi-scale region merging scheme is applied to extract visually prominent regions; (ii) graph/sub-graph matching to the SP robustly identifies a subset of the 19 bones; (iii) the SP is registered to the current image for complete scene reconstruction; and (iv) the epiphyseal regions are extracted from the reconstructed scene. The evaluation is based on 137 images of Caucasian males from the USC hand atlas. Overall, an error rate of 32% is achieved; for the 6 middle distal and medial/distal epiphyses, 23% of all extractions need adjustments. On average, 9.58 of the 14 epiphyseal regions were extracted successfully per image. This is promising for further use in content-based image retrieval (CBIR) and CBIR-based automatic bone age assessment.
Study of the cerrado vegetation in the Federal District area from orbital data. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Aoki, H.; Dossantos, J. R.
1980-01-01
The physiognomic units of cerrado in the Distrito Federal (DF) area were studied through visual and automatic analysis of products provided by the LANDSAT Multispectral Scanner System (MSS). The visual analysis of the black-and-white multispectral images, at the 1:250,000 scale, was based on texture and tonal patterns. The automatic analysis of the computer-compatible tapes (CCTs) was performed with the IMAGE-100 system. The following conclusions were obtained: (1) the delimitation of cerrado vegetation forms can be made by both visual and automatic analysis; (2) in the visual analysis, the principal parameter used to discriminate the cerrado forms was the tonal pattern, independently of season, and channel 5 gave the best information; (3) in the automatic analysis, the data of the four MSS channels can be used to discriminate the cerrado forms; and (4) in the automatic analysis, combinations of the four channels gave more information for separating cerrado units when soil types were considered.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
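The TOST procedure itself is compact: equivalence within a margin δ is declared when both one-sided tests reject. A sketch for paired dose metrics, with an invented 2 Gy margin and toy EUD values:

import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    d = np.asarray(x) - np.asarray(y)
    se = d.std(ddof=1) / np.sqrt(len(d))
    p_lower = 1 - stats.t.cdf((d.mean() + margin) / se, len(d) - 1)  # H0: diff <= -margin
    p_upper = stats.t.cdf((d.mean() - margin) / se, len(d) - 1)      # H0: diff >= +margin
    return max(p_lower, p_upper)       # equivalence if below alpha

manual = [34.1, 28.7, 41.0, 22.3, 30.5]   # toy manual EUDs (Gy)
auto = [33.8, 29.1, 40.6, 22.9, 30.2]     # toy automatic EUDs (Gy)
print(tost_paired(manual, auto, margin=2.0))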
Ontology-based content analysis of US patent applications from 2001-2010.
Weber, Lutz; Böhme, Timo; Irmer, Matthias
2013-01-01
Ontology-based semantic text analysis methods allow knowledge relationships and data to be extracted automatically from text documents. In this review, we have applied these technologies to the systematic analysis of pharmaceutical patents. Hierarchical concepts from the knowledge domains of chemical compounds, diseases and proteins were used to annotate full-text US patent applications that deal with pharmacological activities of chemical compounds and were filed in the years 2001-2010. Compounds claimed in these applications have been classified into their respective compound classes to review the distribution of scaffold types or general compound classes, such as natural products, in a time-dependent manner. Similarly, the target proteins and claimed utility of the compounds have been classified and the most relevant were extracted. The method presented allows the discovery of the main areas of innovation as well as emerging fields of patenting activity, providing a broad statistical basis for competitor analysis and decision-making efforts.
Tumor Content Chart-Assisted HER2/CEP17 Digital PCR Analysis of Gastric Cancer Biopsy Specimens.
Matsusaka, Keisuke; Ishikawa, Shumpei; Nakayama, Atsuhito; Ushiku, Tetsuo; Nishimoto, Aiko; Urabe, Masayuki; Kaneko, Nobuyuki; Kunita, Akiko; Kaneda, Atsushi; Aburatani, Hiroyuki; Fujishiro, Mitsuhiro; Seto, Yasuyuki; Fukayama, Masashi
2016-01-01
Evaluating HER2 gene amplification is an essential component of therapeutic decision-making for advanced or metastatic gastric cancer. A simple method that is applicable to small, formalin-fixed, paraffin-embedded biopsy specimens is desirable as an adjunct to or as a substitute for currently used HER2 immunohistochemistry and in situ hybridization protocols. In this study, we developed a microfluidics-based digital PCR method for determining HER2 and chromosome 17 centromere (CEP17) copy numbers and estimating the tumor content ratio (TCR). The HER2/CEP17 ratio is determined by three variables (TCR and the absolute copy numbers of HER2 and CEP17) obtained by examining tumor cells; only the ratio of the latter two can be obtained by digital PCR using the whole specimen without purifying tumor cells. TCR was determined by semi-automatic image analysis. We developed a Tumor Content chart, a plane of rectangular coordinates consisting of HER2/CEP17 digital PCR data and TCR that delineates amplified, non-amplified, and equivocal areas. By applying this method, 44 clinical gastric cancer biopsy samples were classified as amplified (n = 13), non-amplified (n = 25), or equivocal (n = 6). By comparison, 11 samples were positive, 11 were negative, and 22 were equivocal by immunohistochemistry. Thus, our novel method reduced the number of equivocal samples from 22 to 6, thereby obviating the need for confirmation by fluorescence or dual-probe in situ hybridization in < 30% of cases. Tumor Content chart-assisted digital PCR analysis is also applicable to multiple sites in surgically resected tissues. These results indicate that this analysis is a useful alternative to HER2 immunohistochemistry in gastric cancers that can serve as a basis for the automated evaluation of HER2 status.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported and the promising potential of psychologically based usability experiments is stressed.
Papenmeier, Frank; Huff, Markus
2010-02-01
Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
Do the Contents of Working Memory Capture Attention? Yes, but Cognitive Control Matters
ERIC Educational Resources Information Center
Han, Suk Won; Kim, Min-Shik
2009-01-01
There has been a controversy on whether working memory can guide attentional selection. Some researchers have reported that the contents of working memory guide attention automatically in visual search (D. Soto, D. Heinke, G. W. Humphreys, & M. J. Blanco, 2005). On the other hand, G.F. Woodman and S. J. Luck (2007) reported that they could not…
NASA Astrophysics Data System (ADS)
Du, Hongbo; Al-Jubouri, Hanan; Sellahewa, Harin
2014-05-01
Content-based image retrieval is an automatic process of retrieving images according to their visual contents instead of textual annotations. It has many areas of application, from automatic image annotation and archiving, image classification and categorization, to homeland security and law enforcement. The key issues affecting the performance of such retrieval systems include sensible image features that can effectively capture the right amount of visual content and suitable similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches, methods and techniques have been developed as a result of very intensive research in the past two decades. Among the many existing approaches is a cluster-based approach, in which clustering methods are used to group local feature descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against those of the stored images. This paper serves as a review of works in this area. The paper first summarizes the existing work reported in the literature and then presents the authors' own investigations in this field. The paper intends to highlight not only achievements made by recent research but also challenges and difficulties still remaining in this area.
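As a concrete illustration of the cluster-based approach sketched in this abstract, the fragment below builds a small visual vocabulary with k-means over pooled local descriptors, represents each image as a normalized cluster-occurrence histogram, and ranks stored images by cosine similarity. The descriptor source and the vocabulary size k are assumptions; any local feature (e.g., SIFT-like descriptors) could feed it.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=64, seed=0):
    # all_descriptors: (N, D) local features pooled over the whole corpus
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors)

def bow_histogram(descriptors, vocab):
    words = vocab.predict(descriptors)  # assign each descriptor to a cluster
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def rank_images(query_descriptors, db_histograms, vocab):
    q = bow_histogram(query_descriptors, vocab)
    sims = db_histograms @ q / (
        np.linalg.norm(db_histograms, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)            # most similar stored images first
```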
MPEG content summarization based on compressed domain feature analysis
NASA Astrophysics Data System (ADS)
Sugano, Masaru; Nakajima, Yasuyuki; Yanagihara, Hiromasa
2003-11-01
This paper addresses automatic summarization of MPEG audiovisual content in the compressed domain. By analyzing semantically important low-level and mid-level audiovisual features, our method universally summarizes MPEG-1/-2 content in the form of a digest or a highlight. The former is a shortened version of the original, while the latter is an aggregation of important or interesting events. In our proposal, first, the incoming MPEG stream is segmented into shots and the above features are derived from each shot. Then the features are adaptively evaluated in an integrated manner, and finally the qualified shots are aggregated into a summary. Since all the processes are performed entirely in the compressed domain, summarization is achieved at very low computational cost. The experimental results show that news highlights and sports highlights in TV baseball games can be successfully extracted according to simple shot transition models. As for digest extraction, subjective evaluation proves that meaningful shots are extracted from content without a priori knowledge, even if it contains multiple genres of programs. Our method also has the advantage of generating an MPEG-7 based description, such as summary and audiovisual segments, in the course of summarization.
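The digest/highlight distinction above boils down to scoring shots and aggregating the best ones. The sketch below is an illustrative reconstruction, not the authors' code: the shot-level feature values and combination weights are assumptions, and a real system would derive them from the compressed-domain analysis the paper describes.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float          # seconds
    end: float
    motion: float         # normalized motion activity in [0, 1]
    audio_energy: float   # normalized audio energy in [0, 1]
    has_caption: bool     # mid-level cue, e.g., overlay text detected

def relevance(shot, w_motion=0.4, w_audio=0.4, w_caption=0.2):
    # Adaptive, integrated evaluation reduced to a fixed weighted sum here
    return (w_motion * shot.motion + w_audio * shot.audio_energy
            + w_caption * float(shot.has_caption))

def highlight(shots, max_seconds=60.0):
    picked, used = [], 0.0
    for s in sorted(shots, key=relevance, reverse=True):
        duration = s.end - s.start
        if used + duration <= max_seconds:
            picked.append(s)
            used += duration
    return sorted(picked, key=lambda s: s.start)  # restore temporal order
```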
Heymann, Michael; Degani, Asaf
2007-04-01
We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.
Science information systems: Archive, access, and retrieval
NASA Technical Reports Server (NTRS)
Campbell, William J.
1991-01-01
The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.
Systems and methods for automatically identifying and linking names in digital resources
Parker, Charles T.; Lyons, Catherine M.; Roston, Gerald P.; Garrity, George M.
2017-06-06
The present invention provides systems and methods for automatically identifying name-like strings in digital resources, matching these name-like strings against a set of names held in an expertly curated database, and, for those name-like strings found in said database, enhancing the content by associating additional matter with the name, wherein said matter includes information about the names that is held within said database and pointers to other digital resources which include the same name and its synonyms.
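A toy version of the identify-match-enhance pipeline claimed above might look as follows; the regular expression, the sample curated entries, and the HTML-link enhancement are hypothetical stand-ins for the patented system's components.

```python
import re

CURATED = {  # hypothetical curated database entries
    "escherichia coli": {"canonical": "Escherichia coli",
                         "url": "https://example.org/names/e-coli"},
}
SYNONYMS = {"e. coli": "escherichia coli"}  # synonym -> canonical key

# Name-like strings: "Genus species" or abbreviated "G. species"
NAME_LIKE = re.compile(r"\b[A-Z][a-z]+ [a-z]+\b|\b[A-Z]\. [a-z]+\b")

def link_names(text):
    def enhance(match):
        key = SYNONYMS.get(match.group(0).lower(), match.group(0).lower())
        entry = CURATED.get(key)
        if entry is None:
            return match.group(0)   # candidate not in the curated database: leave as-is
        return f'<a href="{entry["url"]}">{match.group(0)}</a>'
    return NAME_LIKE.sub(enhance, text)

print(link_names("Cultures of Escherichia coli and E. coli strains grew."))
```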
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper provides a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method enables dynamic test requirements to be implemented in dynamic models, so that dynamic test demand tracking can be generated easily. It can automatically produce standardized test requirements and test documentation, addressing problems such as inconsistent and incomplete document-related content, and improves efficiency.
Towards an intelligent framework for multimodal affective data analysis.
Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin
2015-03-01
An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. To cope with this growth of multimodal data, there is an urgent need to develop an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multimodal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
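A minimal sketch of the feature-level fusion idea, under stated assumptions: the per-modality feature dimensions, the synthetic data, and the SVM classifier are all illustrative; the paper's agent uses its own ensemble feature extractors for each modality.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
text_feats = rng.normal(size=(n, 50))    # e.g., concept/sentiment scores
audio_feats = rng.normal(size=(n, 30))   # e.g., prosodic statistics
video_feats = rng.normal(size=(n, 40))   # e.g., facial expression features
labels = rng.integers(0, 2, size=n)      # affective class per clip

# Tri-modal fusion by concatenation into one joint feature vector
X = np.hstack([text_feats, audio_feats, video_feats])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```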
Automatic Content Creation for Games to Train Students Distinguishing Similar Chinese Characters
NASA Astrophysics Data System (ADS)
Lai, Kwong-Hung; Leung, Howard; Tang, Jeff K. T.
In learning Chinese, many students often have the problem of mixing up similar characters. This can cause misunderstanding and miscommunication in daily life. It is thus important for students learning the Chinese language to be able to distinguish similar characters and understand their proper usage. In this paper, we propose a game-style framework in which the game content for identifying similar Chinese characters in idioms and words is created automatically. Our prior work on analyzing students' Chinese handwriting can be applied as the similarity measure for Chinese characters. We extend this work by adding a radical extraction component to speed up the search process. Experimental results show that the proposed method is more accurate and faster at finding similar Chinese characters than the baseline method, which does not consider radical information.
Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert
2014-06-01
The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
Novikova, Anna; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S
2016-12-30
In the present study the applicability of multispectral UV imaging in combination with multivariate image analysis for surface evaluation of MUPS tablets was investigated with respect to the differentiation of the API pellets from the excipients matrix, estimation of the drug content as well as pellet distribution, and influence of the coating material and tablet thickness on the predictive model. Different formulations consisting of coated drug pellets with two coating polymers (Aquacoat ® ECD and Eudragit ® NE 30 D) at three coating levels each were compressed to MUPS tablets with various amounts of coated pellets and different tablet thicknesses. The coated drug pellets were clearly distinguishable from the excipients matrix using a partial least squares approach regardless of the coating layer thickness and coating material used. Furthermore, the number of the detected drug pellets on the tablet surface allowed an estimation of the true drug content in the respective MUPS tablet. In addition, the pellet distribution in the MUPS formulations could be estimated by UV image analysis of the tablet surface. In conclusion, this study revealed that UV imaging in combination with multivariate image analysis is a promising approach for the automatic quality control of MUPS tablets during the manufacturing process. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa
We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy, and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.
Varela, P; Silva, A; da Silva, F; da Graça, S; Manso, M E; Conway, G D
2010-10-01
The spectrogram is one of the best-known time-frequency distributions suitable to analyze signals whose energy varies both in time and frequency. In reflectometry, it has been used to obtain the frequency content of FM-CW signals for density profile inversion and also to study plasma density fluctuations from swept and fixed frequency data. Being implemented via the short-time Fourier transform, the spectrogram is limited in resolution, and for that reason several methods have been developed to overcome this problem. Among those, we focus on the reassigned spectrogram technique that is both easily automated and computationally efficient requiring only the calculation of two additional spectrograms. In each time-frequency window, the technique reallocates the spectrogram coordinates to the region that most contributes to the signal energy. The application to ASDEX Upgrade reflectometry data results in better energy concentration and improved localization of the spectral content of the reflected signals. When combined with the automatic (data driven) window length spectrogram, this technique provides improved profile accuracy, in particular, in regions where frequency content varies most rapidly such as the edge pedestal shoulder.
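For readers who want to experiment with reassignment, the snippet below applies it to a synthetic chirp, assuming librosa >= 0.7 (which provides `reassigned_spectrogram`); the reflectometry signals and the data-driven window selection of the paper are not reproduced here.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0.0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * (500 * t + 1500 * t**2))  # linear chirp, 500 -> 3500 Hz

# Each STFT bin's energy is reallocated to its instantaneous time/frequency,
# which concentrates energy along the true chirp ridge.
freqs, times, mags = librosa.reassigned_spectrogram(y=y, sr=sr, n_fft=512)

strong = mags > np.median(mags)   # keep only bins with above-median energy
print("mean reassigned frequency of strong bins: %.1f Hz"
      % np.nanmean(freqs[strong]))
```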
Trend of Narratives in the Age of Misinformation.
Bessi, Alessandro; Zollo, Fabiana; Del Vicario, Michela; Scala, Antonio; Caldarelli, Guido; Quattrociocchi, Walter
2015-01-01
Social media enabled a direct path from producer to consumer of content, changing the way users get informed, debate, and shape their worldviews. Such disintermediation might weaken consensus on socially relevant issues in favor of rumors, mistrust, or conspiracy thinking, e.g., chem-trails inducing global warming, the link between vaccines and autism, or the New World Order conspiracy. Previous studies pointed out that consumers of conspiracy-like content are likely to aggregate in homophile clusters, i.e., echo chambers. Along this path we study, by means of a thorough quantitative analysis, how different topics are consumed inside the conspiracy echo chamber on the Italian Facebook. Through a semi-automatic topic extraction strategy, we show that the most consumed contents semantically refer to four specific categories: environment, diet, health, and geopolitics. We find similar consumption patterns by comparing users' activity (likes and comments) on posts belonging to these different semantic categories. Finally, we model users' mobility across the distinct topics, finding that the more active a user is, the more likely he is to span all categories. Once inside a conspiracy narrative, users tend to embrace the overall corpus.
Text Mining to Support Gene Ontology Curation and Vice Versa.
Ruch, Patrick
2017-01-01
In this chapter, we explain how text mining can support the curation of molecular biology databases dealing with protein functions. We also show how curated data can play a disruptive role in the development of text mining methods. We review a decade of efforts to improve the automatic assignment of Gene Ontology (GO) descriptors, the reference ontology for the characterization of genes and gene products. To illustrate the high potential of this approach, we compare the performances of an automatic text categorizer and show a large improvement of +225% in both precision and recall on benchmarked data. We argue that automatic text categorization functions can ultimately be embedded into a Question-Answering (QA) system to answer questions related to protein functions. Because GO descriptors can be relatively long and specific, traditional QA systems cannot answer such questions. A new type of QA system, so-called Deep QA, which uses machine learning methods trained with curated contents, is thus emerging. Finally, future advances in text mining instruments are directly dependent on the availability of high-quality annotated contents at every curation step. Database workflows must start recording explicitly all the data they curate and ideally also some of the data they do not curate.
Method for automatic measurement of second language speaking proficiency
NASA Astrophysics Data System (ADS)
Bernstein, Jared; Balogh, Jennifer
2005-04-01
Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.
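The parametric combination step can be sketched as a regression of base measures onto human listener judgments. Everything below is synthetic and illustrative: the three base measures, their weights, and the "human" scores are assumptions, and the operational tests additionally use IRT scaling.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
content = rng.uniform(0, 1, n)       # fraction of expected lexical content produced
latency = rng.uniform(0.2, 3.0, n)   # response latency in seconds
fluency = rng.uniform(0, 1, n)       # pause/rate-based fluency measure

# Synthetic stand-in for human listener judgments the machine score should fit
human = 4 * content - 0.5 * latency + 2 * fluency + rng.normal(0, 0.3, n)

X = np.column_stack([content, latency, fluency])
model = LinearRegression().fit(X[:200], human[:200])
print("fit to held-out human scores, R^2 = %.3f" % model.score(X[200:], human[200:]))
```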
Automatic summarization of soccer highlights using audio-visual descriptors.
Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc
2015-01-01
Automatic summary generation for sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then adequately combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
A new user-assisted segmentation and tracking technique for an object-based video editing system
NASA Astrophysics Data System (ADS)
Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark
2004-03-01
This paper presents a semi-automatic segmentation method that can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the user-guided and selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications, such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.
Yavuzer, Yasemin; Karataş, Zeynep
2013-01-01
This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and the Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations between the adolescents' automatic thoughts, physical aggression, and anger. According to the regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role of anger showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Providing adolescents with anger management skills training is very important for the prevention of physical aggression. Such training programs should include components related to the development of an awareness of dysfunctional and anger-triggering automatic thoughts, and how to change them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using the Gaussian function. Furthermore, the shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The machine then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
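The temporal-differencing step for moving object detection can be sketched with OpenCV as below; the threshold, blur kernel, and minimum region area are assumptions, and the paper's Gaussian localization and OMEGA shape filter are not reproduced.

```python
import cv2

def motion_regions(prev_frame, frame, thresh=25, min_area=100):
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g0)                   # temporal difference
    diff = cv2.GaussianBlur(diff, (5, 5), 0)     # suppress sensor noise
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of sufficiently large moving regions
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]
```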
Intelligent keyframe extraction for video printing
NASA Astrophysics Data System (ADS)
Zhang, Tong
2004-10-01
Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
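One of the cues described above, accumulated color-histogram difference, is easy to sketch; the change threshold and bin counts are assumptions, and the full system fuses several more cues (camera motion, object tracking, faces, audio events) before clustering the candidates.

```python
import cv2

def candidate_keyframes(video_path, change_threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_hist, accumulated, idx = [], None, 0.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if prev_hist is None:
            keyframes.append(idx)            # always keep the first frame
        else:
            accumulated += cv2.norm(hist, prev_hist, cv2.NORM_L1)
            if accumulated > change_threshold:
                keyframes.append(idx)        # enough visual change: new candidate
                accumulated = 0.0
        prev_hist, idx = hist, idx + 1
    cap.release()
    return keyframes
```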
Using New Media to Reach Broad Audiences
NASA Astrophysics Data System (ADS)
Gay, P. L.
2008-06-01
The International Year of Astronomy New Media Working Group (IYA NMWG) has a singular mission: To flood the Internet with ways to learn about astronomy, interact with astronomers and astronomy content, and socially network with astronomy. Within each of these areas, we seek to build lasting programs and partnerships that will continue beyond 2009. Our weapon of choice is New Media. It is often easiest to define New Media by what it is not. Television, radio, print and their online redistribution of content are not New Media. Many forms of New Media start as user provided content and content infrastructures that answer that individual's creative whim in a way that is adopted by a broader audience. Classic examples include Blogs and Podcasts. This media is typically distributed through content specific websites and RSS feeds, which allow syndication. RSS aggregators (iTunes has audio and video aggregation abilities) allow subscribers to have content delivered to their computers automatically when they connect to the Internet. RSS technology is also being used in such creative ways as allowing automatically updating Google-maps that show the location of someone with an intelligent GPS system, and in sharing 100 word microblogs from anyone (Twitters) through a single feed. In this poster, we outline how the IYA NMWG plans to use New Media to reach target primary audiences of astronomy enthusiasts, image lovers, and amateur astronomers, as well as secondary audiences, including: science fiction fans, online gamers, and skeptics.
2012-01-01
Background The twelve-item Self-Report Habit Index (SRHI) is the most popular measure of energy-balance related habits. This measure characterises habit by automatic activation, behavioural frequency, and relevance to self-identity. Previous empirical research suggests that the SRHI may be abbreviated with no losses in reliability or predictive utility. Drawing on recent theorising suggesting that automaticity is the ‘active ingredient’ of habit-behaviour relationships, we tested whether an automaticity-specific SRHI subscale could capture habit-based behaviour patterns in self-report data. Methods A content validity task was undertaken to identify a subset of automaticity indicators within the SRHI. The reliability, convergent validity and predictive validity of the automaticity item subset was subsequently tested in secondary analyses of all previous SRHI applications, identified via systematic review, and in primary analyses of four raw datasets relating to energy‐balance relevant behaviours (inactive travel, active travel, snacking, and alcohol consumption). Results A four-item automaticity subscale (the ‘Self-Report Behavioural Automaticity Index’; ‘SRBAI’) was found to be reliable and sensitive to two hypothesised effects of habit on behaviour: a habit-behaviour correlation, and a moderating effect of habit on the intention-behaviour relationship. Conclusion The SRBAI offers a parsimonious measure that adequately captures habitual behaviour patterns. The SRBAI may be of particular utility in predicting future behaviour and in studies tracking habit formation or disruption. PMID:22935297
Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro
2010-04-01
The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only covers company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using a data mining approach. The AutoMealRecord data were examined to determine whether they could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) from dietary preference was assessed with multiple linear regression analyses. In the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross-validation. This shows that BMI was sufficiently predictable from the AutoMealRecord data. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool. Copyright 2010 Elsevier Inc. All rights reserved.
OKCARS : Oklahoma Collision Analysis and Response System.
DOT National Transportation Integrated Search
2012-10-01
By continuously monitoring traffic intersections to automatically detect that a collision or near-collision has occurred, automatically call for assistance, and automatically forewarn oncoming traffic, our OKCARS has the capability to effectively ...
NASA Astrophysics Data System (ADS)
Valdes-Abellan, Javier; Jiménez-Martínez, Joaquin; Candela, Lucila
2013-04-01
For monitoring the vadose zone, different strategies can be chosen, depending on the objectives and scale of observation. The effects of non-conventional water use on the vadose zone might produce impacts in porous media which could lead to changes in soil hydraulic properties, among others. Controlling these possible effects requires an accurate monitoring strategy that tracks the volumetric water content, θ, and soil pressure, h, along the studied profile. According to the available literature, different monitoring systems have been operated independently; however, comparative studies between different techniques have received less attention. An experimental plot of 9 x 5 m² was set up with automatic and non-automatic sensors to monitor θ and h down to 1.5 m depth. The non-automatic system consisted of ten Jet Fill tensiometers at 30, 45, 60, 90 and 120 cm (Soil Moisture®) and a polycarbonate access tube of 44 mm (i.d.) for soil moisture measurements with a TRIME FM TDR portable probe (IMKO®). Vertical installation was carefully performed; measurements with this system were manual, twice a week for θ and three times per week for h. The automatic system consisted of five 5TE sensors (Decagon Devices®) installed at 20, 40, 60, 90 and 120 cm for θ measurements and one MPS1 sensor (Decagon Devices®) at 60 cm depth for h. Installation took place laterally, in a 40-50 cm long hole bored into the side of an excavated trench. All automatic sensors recorded hourly and stored their readings in a data logger. Boundary conditions were controlled with a volume meter and a meteorological station. ET was modelled with the Penman-Monteith equation. Soil characterization included bulk density, gravimetric water content, grain size distribution, saturated hydraulic conductivity and soil water retention curves determined following laboratory standards. Soil mineralogy was determined by X-ray diffractometry. Unsaturated soil hydraulic parameters were model-fitted with the SWRC-fit code and with ROSETTA based on soil textural fractions. Simulation of water flow using the automatic and non-automatic data was carried out with HYDRUS-1D independently. Good agreement between the collected automatic and non-automatic data and the modelled results can be recognized. The general trend was captured, except for outlier values, as expected. Slight differences were found between the hydraulic properties obtained from laboratory determinations and from inverse modelling of the two approaches. Differences of up to 14% in the flux through the lower boundary were detected between the two strategies. According to the results, automatic sensors have higher resolution and are therefore more appropriate for detecting subtle changes in soil hydraulic properties. Nevertheless, if the aim of the research is to monitor the general trend of water dynamics, no significant differences were observed between the two systems.
Vancleef, Linda M G; Peters, Madelon L; Gilissen, Susan M P; De Jong, Peter J
2007-07-01
Three fundamental fears are assumed to underlie psychopathology: Anxiety Sensitivity (AS), Injury/illness sensitivity (IS), and Fear of Negative Evaluation (FNE). Both AS and IS may form risk factors for the development and exacerbation of chronic pain. The current research examines the relation between these fears and automatic threat appraisal for pain-related stimuli. Study 1 (n=48) additionally examined content-specific associations of AS and FNE with the automatic threat appraisal of, respectively, panic and social evaluative cues. Study 2 (n=60) additionally focused on the association of IS and AS with the engagement in health protecting behavior, and the use of health care services. Both studies found evidence for an automatic threat appraisal of aversive stimuli. Study 2 demonstrated a positive association between the automatic threat appraisal for pain-related stimuli and individuals' IS levels. IS was found to be the single best predictor of the tendency to engage in health protecting behavior, whereas AS was the single best predictor of the reported use of health care services. This study contributes to the field of knowledge on putative risk factors for chronic pain. Results demonstrate an automatic threat appraisal toward pain-related stimuli that is related to vulnerability traits for pain. This automatic threat appraisal might initiate relatively spontaneous (nonstrategic) pain-maintaining behavioral responses.
Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers
NASA Astrophysics Data System (ADS)
Tikhonov, Anatoly Fedorovich; Drozdov, Anatoly
2018-03-01
The article describes the factors affecting concrete mixture quality related to the moisture content of aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality management tools at all stages of the technological process. It is established that unaccounted-for moisture in aggregates adversely affects the concrete mixture homogeneity and, accordingly, the strength of building structures. A new control method and an automatic control system for concrete mixture homogeneity in the technological process of mixing components are proposed, in which the tasks of producing the concrete mixture are performed by an automatic control system for the kneading-and-mixing machinery with operational automatic control of homogeneity. Theoretical underpinnings of the control of mixture homogeneity are presented, which relate homogeneity to a change in the frequency of vibrodynamic vibrations of the mixer body. The structure of the technical means of the automatic control system for regulating the supply of water is determined depending on the change in the concrete mixture homogeneity during the continuous mixing of components. The following technical means for establishing automatic control were chosen: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To identify the quality indicator of automatic control, the system offers a structure flowchart with transfer functions that determine the ACS operation in the transient dynamic mode.
Automatic imitation: A meta-analysis.
Cracco, Emiel; Bardi, Lara; Desmet, Charlotte; Genschow, Oliver; Rigoni, Davide; De Coster, Lize; Radkova, Ina; Deschrijver, Eliane; Brass, Marcel
2018-05-01
Automatic imitation is the finding that movement execution is facilitated by compatible and impeded by incompatible observed movements. In the past 15 years, automatic imitation has been studied to understand the relation between perception and action in social interaction. Although research on this topic started in cognitive science, interest quickly spread to related disciplines such as social psychology, clinical psychology, and neuroscience. However, important theoretical questions have remained unanswered. Therefore, in the present meta-analysis, we evaluated seven key questions on automatic imitation. The results, based on 161 studies containing 226 experiments, revealed an overall effect size of g_z = 0.95, 95% CI [0.88, 1.02]. Moderator analyses identified automatic imitation as a flexible, largely automatic process that is driven by movement and effector compatibility, but is also influenced by spatial compatibility. Automatic imitation was found to be stronger for forced choice tasks than for simple response tasks, for human agents than for nonhuman agents, and for goalless actions than for goal-directed actions. However, it was not modulated by more subtle factors such as animacy beliefs, motion profiles, or visual perspective. Finally, there was no evidence for a relation between automatic imitation and either empathy or autism. Among other things, these findings point toward actor-imitator similarity as a crucial modulator of automatic imitation and challenge the view that imitative tendencies are an indicator of social functioning. The current meta-analysis has important theoretical implications and sheds light on longstanding controversies in the literature on automatic imitation and related domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Aberl, A; Coelhan, M
2013-01-01
Sulfites are routinely added as preservatives and antioxidants in wine production. By law, the total sulfur dioxide content in wine is restricted and therefore must be monitored. Currently, the method of choice for determining the total content of sulfur dioxide in wine is the optimised Monier-Williams method, which is time consuming and laborious. The headspace gas chromatographic method described in this study offers a fast and reliable alternative method for the detection and quantification of the sulfur dioxide content in wine. The analysis was performed using an automatic headspace injection sampler, coupled with a gas chromatograph and an electron capture detector. The method is based on the formation of gaseous sulfur dioxide subsequent to acidification and heating of the sample. In addition to free sulfur dioxide, reversibly bound sulfur dioxide in carbonyl compounds, such as acetaldehyde, was also measured with this method. A total of 20 wine samples produced using diverse grape varieties and vintages of varied provenance were analysed using the new method. For reference and comparison purposes, 10 of the results obtained by the proposed method were compared with those acquired by the optimised Monier-Williams method. Overall, the results from the headspace analysis showed good correlation (R = 0.9985) when compared with the conventional method. This new method requires minimal sample preparation and is simple to perform, and the analysis can also be completed within a short period of time.
AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.
Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia
2017-03-14
Some applications, especially those clinical applications requiring high accuracy of sequencing data, usually have to face the troubles caused by unavoidable sequencing errors. Several tools have been proposed to profile sequencing quality, but few of them can quantify or correct sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Different from most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and furthermore it provides a novel function to correct wrong bases in the overlapping regions. Another new feature is to detect and visualise sequencing bubbles, which are commonly found on the flowcell lanes and may cause sequencing errors. Besides normal per-cycle quality and base content plotting, AfterQC also provides features like polyX (a long sub-sequence of the same base X) filtering, automatic trimming and K-MER based strand bias profiling. For each single FastQ file or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates sequencer bubble effects, trims reads at front and tail, detects sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support; it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder of FastQ files to be processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate sequencing errors in pair-end sequencing data to provide much cleaner outputs, and consequently help to reduce false-positive variants, especially for low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and requires no arguments in most cases.
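The overlap analysis at the core of AfterQC can be illustrated with a simplified sketch (this is not AfterQC's actual code): reverse-complement read 2, scan for the best overlap offset, and inside the overlap correct disagreeing bases toward the mate with the higher Phred quality. Real implementations use seeded search instead of naive scanning.

```python
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_overlap(r1, r2rc, min_len=10, min_identity=0.9):
    best = None
    for off in range(len(r1) - min_len + 1):
        n = min(len(r1) - off, len(r2rc))
        matches = sum(a == b for a, b in zip(r1[off:off + n], r2rc[:n]))
        if matches / n >= min_identity and (best is None or matches > best[1]):
            best = (off, matches)
    return best[0] if best else None

def correct_pair(r1, q1, r2, q2):
    r2rc, q2r = revcomp(r2), q2[::-1]   # mirror qualities alongside the bases
    off = find_overlap(r1, r2rc)
    if off is None:
        return r1, r2                   # no confident overlap: leave untouched
    r1, r2rc = list(r1), list(r2rc)
    for i in range(min(len(r1) - off, len(r2rc))):
        if r1[off + i] != r2rc[i]:      # disagreement inside the overlap
            if q1[off + i] >= q2r[i]:   # Phred chars compare by ASCII value
                r2rc[i] = r1[off + i]
            else:
                r1[off + i] = r2rc[i]
    return "".join(r1), revcomp("".join(r2rc))
```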
Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis
NASA Astrophysics Data System (ADS)
Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.
2013-11-01
Nowadays, during international negotiations on separating disputed areas, only manual adjustment is applied to the match between the delimitation line and the real terrain, which not only consumes much time and labor, but also cannot ensure high precision. Concerning that, this paper mainly explores the automatic match between them and studies its general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, the cost layer is acquired through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Module Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from many different aspects, including delimitation laws, two-sided benefits and so on. Consequently, the conclusion is reached that the method of automatic match is feasible and effective.
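The core routing step can be sketched with scikit-image, assuming its graph module is acceptable as a stand-in for the GIS implementation; the toy cost raster below plays the role of the cost layer derived from the delimitation rules and terrain-feature lines.

```python
import numpy as np
from skimage.graph import route_through_array

# Toy cost surface: cheap cells along row 2 (a "valley"), expensive elsewhere
cost = np.full((5, 7), 10.0)
cost[2, :] = 1.0

# Least-cost path between the two endpoints of the boundary segment
path, total_cost = route_through_array(
    cost, start=(0, 0), end=(4, 6), fully_connected=True, geometric=True)
print("matched line cells:", path)
print("total cost: %.2f" % total_cost)
```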
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelaia II, Thomas A.
2015-06-30
There is a need for software that allows a tour guide to present different tracks of slides and then return to the default slide show automatically upon completion. A mobile solution is needed for trade shows. DiTour is an iPad/iPhone app that pulls presentation content from a website, stores it on the device and presents it on a connected display. A tour guide can select a track to present and it will automatically return to the default track after a timeout. It offers a mobile solution which is ideal for trade shows.
QBIC project: querying images by content, using color, texture, and shape
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Barber, Ron; Equitz, Will; Flickner, Myron D.; Glasman, Eduardo H.; Petkovic, Dragutin; Yanker, Peter; Faloutsos, Christos; Taubin, Gabriel
1993-04-01
In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medical (`Give me other images that contain a tumor with a texture like this one'), photo-journalism (`Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip art images. In this paper we present the main algorithms for color, texture, shape, and sketch query that we use, show example query results, and discuss future directions.
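A single QBIC-style attribute, global color, can be sketched as follows: images are indexed by normalized joint RGB histograms, and a query ranks the database by histogram distance rather than exact match. The bin count and the L1 distance are simplifying assumptions; QBIC itself used a quadratic-form distance that models cross-bin color similarity.

```python
import numpy as np

def color_histogram(image_rgb, bins=4):
    # image_rgb: (H, W, 3) uint8 array; joint RGB histogram, normalized to sum 1
    h, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                          bins=(bins, bins, bins), range=[(0, 256)] * 3)
    h = h.ravel()
    return h / h.sum()

def query_by_color(query_img, database_imgs):
    q = color_histogram(query_img)
    dists = [np.abs(q - color_histogram(img)).sum() for img in database_imgs]
    return np.argsort(dists)   # indices of the most similar images first
```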
Report of the Fifth Biennial Conference on Chemical Education: Content.
ERIC Educational Resources Information Center
Journal of Chemical Education, 1979
1979-01-01
Two papers are reported on, one dealing with a course on inorganic reaction mechanisms for college seniors, and the other on the set-up necessary to provide students with an automatic potentiometric titration facility. (BB)
Technical data on new engineering products
NASA Astrophysics Data System (ADS)
1985-02-01
New grades of permanently magnetic materials; automatic digital radiolocator; bench winder; analog induction gauge; programmable pulse generator; portable defibrillators; pipe welders; two-component electromagnetic log; sulphur content analyzer; peristaltic pumps; function generators; welding manipulator; and tensiometer are described.
Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia
2007-10-11
The detection of ethical issues on web sites aims at the selection of information helpful to the reader and is an important concern in medical informatics. Indeed, with the ever-increasing volume of online health information, coupled with its uneven reliability and quality, the public should be made aware of the quality of the information available online. In order to address this issue, we propose methods for the automatic detection of statements related to ethical principles such as those of the HONcode. For the detection of these statements, we combine two kinds of heterogeneous information, content-based categorizations and URL-based categorizations, through the application of machine learning algorithms. Our objective is to observe the quality of categorization through URLs for web pages where categorization through content has proven not to be precise enough. The results obtained indicate that only some of the principles were better processed.
On the automaticity of response inhibition in individuals with alcoholism.
Noël, Xavier; Brevers, Damien; Hanak, Catherine; Kornreich, Charles; Verbanck, Paul; Verbruggen, Frederick
2016-06-01
Response inhibition is usually considered a hallmark of executive control. However, recent work indicates that stop performance can become associatively mediated ('automatic') over practice. This study investigated automatic response inhibition in sober, recently detoxified individuals with alcoholism. We administered a modified stop-signal task to forty recently detoxified alcoholics and forty healthy participants; it consisted of a training phase in which a subset of the stimuli was consistently associated with stopping or going, and a test phase in which this mapping was reversed. In the training phase, stop performance improved for the consistent stop stimuli compared with control stimuli that were not associated with going or stopping. In the test phase, go performance tended to be impaired for old stop stimuli. Combined, these findings support the automatic inhibition hypothesis. Importantly, performance was similar in both groups, which indicates that automatic inhibitory control develops normally in individuals with alcoholism. This finding is specific to individuals with alcoholism without other psychiatric disorders, which is rather atypical and prevents generalization. Personalized stimuli with stronger affective content should be used in future studies. These results advance our understanding of behavioral inhibition in individuals with alcoholism. Furthermore, intact automatic inhibitory control may be an important element of successful cognitive remediation of addictive behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic attention to emotional stimuli: neural correlates.
Carretié, Luis; Hinojosa, José A; Martín-Loeches, Manuel; Mercado, Francisco; Tapia, Manuel
2004-08-01
We investigated the capability of emotional and nonemotional visual stimulation to capture automatic attention, an aspect of the interaction between cognitive and emotional processes that has received scant attention from researchers. Event-related potentials were recorded from 37 subjects using a 60-electrode array, and were submitted to temporal and spatial principal component analyses to detect and quantify the main components, and to source localization software (LORETA) to determine their spatial origin. Stimuli capturing automatic attention were of three types: emotionally positive, emotionally negative, and nonemotional pictures. Results suggest that initially (P1: 105 msec after stimulus), automatic attention is captured by negative pictures, and not by positive or nonemotional ones. Later (P2: 180 msec), automatic attention remains captured by negative pictures, but also by positive ones. Finally (N2: 240 msec), attention is captured only by positive and nonemotional stimuli. Anatomically, this sequence is characterized by decreasing activation of the visual association cortex (VAC) and by the growing involvement, from dorsal to ventral areas, of the anterior cingulate cortex (ACC). Analyses suggest that the ACC and not the VAC is responsible for experimental effects described above. Intensity, latency, and location of neural activity related to automatic attention thus depend clearly on the stimulus emotional content and on its associated biological importance. Copyright 2004 Wiley-Liss, Inc.
Batmaz, Sedat; Ahmet Yuncu, Ozgur; Kocbiyik, Sibel
2015-01-01
Background: Beck’s theory of emotional disorder suggests that negative automatic thoughts (NATs) and the underlying schemata affect one’s way of interpreting situations and result in maladaptive coping strategies. Depending on their content and meaning, NATs are associated with specific emotions, and since they are usually quite brief, patients are often more aware of the emotion they feel. This relationship between cognition and emotion, therefore, is thought to form the background of the cognitive content specificity hypothesis. Researchers focusing on this hypothesis have suggested that instruments like the cognition checklist (CCL) might be an alternative to make a diagnostic distinction between depression and anxiety. Objectives: The aim of the present study was to assess the psychometric properties of the Turkish version of the CCL in a psychiatric outpatient sample. Patients and Methods: A total of 425 psychiatric outpatients 18 years of age and older were recruited. After a structured diagnostic interview, the participants completed the hospital anxiety depression scale (HADS), the automatic thoughts questionnaire (ATQ), and the CCL. An exploratory factor analysis was performed, followed by an oblique rotation. The internal consistency, test-retest reliability, and concurrent and discriminant validity analyses were undertaken. Results: The internal consistency of the CCL was excellent (Cronbach’s α = 0.95). The test-retest correlation coefficients were satisfactory (r = 0.80, P < 0.001 for CCL-D, and r = 0.79, P < 0.001 for CCL-A). The exploratory factor analysis revealed that a two-factor solution best fit the data. This bidimensional factor structure explained 51.27 % of the variance of the scale. The first factor consisted of items related to anxious cognitions, and the second factor of depressive cognitions. The CCL subscales significantly correlated with the ATQ (rs 0.44 for the CCL-D, and 0.32 for the CCL-A) as well as the other measures of mood severity (all Ps < 0.01). To a great extent, all items of the CCL were able to distinguish the clinical and non-clinical groups, suggesting the scale has high discriminating validity. Conclusions: The current study has provided evidence that the Turkish version of the CCL is a reliable and valid instrument to assess NATs in a clinical outpatient sample. PMID:26834808
Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong
2017-01-01
Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor intensive and time consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and smoothing and segmentation, which are necessary for functional motion analysis prior to registration, were not provided by the previous software. We developed software to track the hyoid bone motion semi-automatically. It works even when the hyoid bone is masked by the mandible and has been validated in dysphagia patients with stroke. In addition, we added functions for semi-automatic smoothing and segmentation. A total of 30 patients' data were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between the manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value < 0.0001). Relative errors between automatic and manual tracking in terms of the x-axis, y-axis, and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase, and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the ability to automatically indicate when to switch from automatic to manual mode in extreme cases, and calibration without attaching a radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which benefits further statistical analyses such as functional classification and prognostication for dysphagia. Therefore, this software could provide researchers in the field of dysphagia with a convenient, useful, all-in-one platform for analyzing hyoid bone motion. Further development of our method to track other swallowing-related structures or objects, such as the epiglottis and bolus, and to carry out 2D curve registration may be needed for a more comprehensive functional data analysis of dysphagia with big data.
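The validation statistics reported above are straightforward to reproduce for any pair of trajectories. Below is a small Python sketch, assuming each trajectory is an N x 2 array of (x, y) positions; the function name and array layout are hypothetical.

```python
# Minimal sketch: agreement between manual and automatic trajectories.
import numpy as np
from scipy.stats import pearsonr

def validate(manual, automatic):
    rx, px = pearsonr(manual[:, 0], automatic[:, 0])   # x-axis agreement
    ry, py = pearsonr(manual[:, 1], automatic[:, 1])   # y-axis agreement

    def excursion(traj):
        # 2D range of motion: diagonal of the trajectory's bounding box
        return np.linalg.norm(traj.max(axis=0) - traj.min(axis=0))

    # relative error of the 2D excursion range, as a percentage
    rel_err = abs(excursion(automatic) - excursion(manual)) / excursion(manual) * 100
    return (rx, px), (ry, py), rel_err
```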
Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S
2016-07-30
Monitoring of tablet quality attributes in the direct vicinity of the production process requires analytical techniques that allow fast, non-destructive, and accurate tablet characterization. The overall objective of this study was to investigate the applicability of multispectral UV imaging as a reliable, rapid technique for estimation of the tablet API content and tablet hardness, as well as determination of tablet intactness and the tablet surface density profile. One of the aims was to establish an image analysis approach based on multivariate image analysis and pattern recognition to evaluate the potential of UV imaging for automated quality control of tablets with respect to their intactness and surface density profile. Various tablets of different composition and different quality regarding their API content, radial tensile strength, intactness, and surface density profile were prepared using an eccentric as well as a rotary tablet press at compression pressures from 20 MPa up to 410 MPa. It was found that UV imaging can provide relevant information on both chemical and physical tablet attributes. The tablet API content and radial tensile strength could be estimated by UV imaging combined with partial least squares analysis. Furthermore, an image analysis routine was developed and successfully applied to the UV images that provided qualitative information on physical tablet surface properties such as intactness and surface density profiles, as well as quantitative information on variations in the surface density. In conclusion, this study demonstrates that UV imaging combined with image analysis is an effective and non-destructive method to determine chemical and physical quality attributes of tablets and is a promising approach for (near) real-time monitoring of the tablet compaction process and formulation optimization purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
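The core chemometric step, regressing a reference property on spectral image features with partial least squares, can be sketched in a few lines. The file names, data shapes, and number of latent variables below are assumptions for illustration.

```python
# Hedged sketch: PLS regression of API content on UV-image spectra,
# analogous to the study's "UV imaging combined with partial least squares".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

X = np.load("tablet_uv_spectra.npy")   # hypothetical: (n_tablets, n_wavelengths)
y = np.load("api_content.npy")         # hypothetical: reference API content (%)

pls = PLSRegression(n_components=5)    # number of latent variables is tunable
print(cross_val_score(pls, X, y, scoring="r2", cv=5))  # cross-validated fit
pls.fit(X, y)
predicted = pls.predict(X)             # predicted API content per tablet
```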
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification, with high linear correlation coefficients between the right and left breast of each pair. When compared with the gold standard using %FGV from chemical analysis, Pearson's r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the post-mortem study suggested that breast tissue can be characterized in terms of water, lipid, and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. Of the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
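For readers unfamiliar with FCM, the following numpy sketch implements the standard two-class fuzzy c-means update on a 1-D array of voxel intensities. It illustrates the generic algorithm under simplified assumptions, not the study's implementation.

```python
# Minimal fuzzy c-means (FCM) sketch for two-class intensity segmentation
# (e.g., adipose vs. fibroglandular voxels); illustrative only.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, tol=1e-5):
    """x: 1-D array of voxel intensities; returns cluster centers, memberships."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=x.size)      # (N, c) random memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)         # fuzzy-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        u_new = w / w.sum(axis=1, keepdims=True)    # standard FCM update
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# %FGV from the memberships via hard assignment (dense_class is hypothetical):
# labels = u.argmax(axis=1); pct_fgv = 100 * (labels == dense_class).mean()
```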
Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia
2017-12-01
Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution for the resulting issues could be the use of deep learning techniques for automatic analysis of radiological images. In this article, we wanted to test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. For deep learning image analysis software, we used the ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks. The first one classified images into hemopericardium/not hemopericardium, and the second one segmented the blood content. We randomly selected 50% of the data for training and 50% for validation. This process was repeated 20 times. The best performing classification network classified all cases of hemopericardium from the validation images correctly, with only a few false positives. The best performing segmentation network tended to underestimate the amount of blood in the pericardium, as did most of the networks. This is the first study to show that deep learning has potential for automated image analysis of radiological images in forensic medicine.
Active Learning Strategies for Phenotypic Profiling of High-Content Screens.
Smith, Kevin; Horvath, Peter
2014-06-01
High-content screening is a powerful method to discover new drugs and carry out basic biological research. Increasingly, high-content screens have come to rely on supervised machine learning (SML) to perform automatic phenotypic classification as an essential step of the analysis. However, this comes at a cost, namely, the labeled examples required to train the predictive model. Classification performance increases with the number of labeled examples, and because labeling examples demands time from an expert, the training process represents a significant time investment. Active learning strategies attempt to overcome this bottleneck by presenting the most relevant examples to the annotator, thereby achieving high accuracy while minimizing the cost of obtaining labeled data. In this article, we investigate the impact of active learning on single-cell-based phenotype recognition, using data from three large-scale RNA interference high-content screens representing diverse phenotypic profiling problems. We consider several combinations of active learning strategies and popular SML methods. Our results show that active learning significantly reduces the time cost and can be used to reveal the same phenotypic targets identified using SML. We also identify combinations of active learning strategies and SML methods which perform better than others on the phenotypic profiling problems we studied. © 2014 Society for Laboratory Automation and Screening.
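To make the active learning loop concrete, here is a sketch of one common strategy, margin-based uncertainty sampling, paired with a random forest as the SML method. The paper evaluates several strategy/classifier combinations; this specific pairing and its parameters are illustrative assumptions.

```python
# Hedged sketch: uncertainty sampling loop for phenotype classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling_loop(X, y_oracle, n_init=50, n_queries=200):
    """X: cell feature matrix; y_oracle: labels revealed only when queried."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), n_init, replace=False))
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(n_queries):
        clf.fit(X[labeled], y_oracle[labeled])          # train on labeled pool
        proba = clf.predict_proba(X)                    # posterior over phenotypes
        srt = np.sort(proba, axis=1)
        margin = srt[:, -1] - srt[:, -2]                # small margin = ambiguous
        margin[labeled] = np.inf                        # never re-query
        labeled.append(int(margin.argmin()))            # annotate the most ambiguous cell
    return clf, labeled
```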
Representation-based user interfaces for the audiovisual library of the year 2000
NASA Astrophysics Data System (ADS)
Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique
1995-03-01
The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of some audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators to existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the document contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of the program aiming at developing, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.
Chen, C; Xiang, J Y; Hu, W; Xie, Y B; Wang, T J; Cui, J W; Xu, Y; Liu, Z; Xiang, H; Xie, Q
2015-11-01
The aims were to screen and identify safe micro-organisms used during Douchi fermentation and to verify the feasibility of producing high-quality Douchi with these identified micro-organisms. PCR-denaturing gradient gel electrophoresis (DGGE) and an automatic amino-acid analyser were used to investigate the microbial diversity and free amino acid (FAA) content of 10 commercial Douchi samples. The correlations between microbial communities and FAAs were analysed statistically, and ten strains with significant positive correlations were identified. An experiment on Douchi fermentation with the identified strains was then carried out, and the nutritional composition of the resulting Douchi was analysed. Results showed that FAA levels and the relative content of isoflavone aglycones in the verification Douchi samples were generally higher than those in the commercial Douchi samples. Our study indicated that fungi, yeasts, Bacillus and lactic acid bacteria are the key players in Douchi fermentation, and that with identified probiotic micro-organisms participating in fermentation, a higher quality Douchi product was produced. This is the first report to analyse and confirm the key micro-organisms during Douchi fermentation by statistical analysis. This work proves fermentation micro-organisms to be the key influencing factor of Douchi quality and demonstrates the feasibility of fermenting Douchi using identified starter micro-organisms. © 2015 The Society for Applied Microbiology.
Trinkaus, Hans L; Gaisser, Andrea E
2010-09-01
Nearly 30,000 individual inquiries are answered annually by the telephone cancer information service (CIS, KID) of the German Cancer Research Center (DKFZ). The aim was to develop a tool for evaluating these calls, and to support the complete counseling process interactively. A novel software tool is introduced, based on a structure similar to a music score. Treating the interaction as a "duet", guided by the CIS counselor, the essential contents of the dialogue are extracted automatically. For this, "trained speech recognition" is applied to the (known) counselor's part, and "keyword spotting" is used on the (unknown) client's part to pick out specific items from the "word streams". The outcomes fill an abstract score representing the dialogue. Pilot tests performed on a prototype of SACA (Software Assisted Call Analysis) resulted in a basic proof of concept: Demographic data as well as information regarding the situation of the caller could be identified. The study encourages following up on the vision of an integrated SACA tool for supporting calls online and performing statistics on its knowledge database offline. Further research perspectives are to check SACA's potential in comparison with established interaction analysis systems like RIAS. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.
MPEG-7-based description infrastructure for an audiovisual content analysis and retrieval system
NASA Astrophysics Data System (ADS)
Bailer, Werner; Schallauer, Peter; Hausenblas, Michael; Thallinger, Georg
2005-01-01
We present a case study of establishing a description infrastructure for an audiovisual content-analysis and retrieval system. The description infrastructure consists of an internal metadata model and an access tool for using it. Based on an analysis of requirements, we selected, out of a set of candidates, MPEG-7 as the basis of our metadata model. The openness and generality of MPEG-7 allow it to be used in a broad range of applications, but they also increase complexity and hinder interoperability. Profiling has been proposed as a solution, with a focus on selecting and constraining description tools. Semantic constraints are currently only described in textual form; conformance in terms of semantics thus cannot be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach to formalize the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints. We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.
Using ProMED-Mail and MedWorm blogs for cross-domain pattern analysis in epidemic intelligence.
Stewart, Avaré; Denecke, Kerstin
2010-01-01
In this work we motivate the use of medical blog user-generated content for gathering facts about disease reporting events to support biosurveillance investigation. Given the characteristics of blogs, the extraction of such events is made more difficult by noise and data abundance. We address the problem of automatically inferring disease reporting event extraction patterns in this noisier setting. The sublanguage used in outbreak reports is exploited to align with the sequences of disease reporting sentences in blogs. Based on our Cross-Domain Pattern Analysis Framework, experimental results show that phrase-level sequences tend to produce more overlap across the domains than word-level sequences. The cross-domain alignment process is effective at filtering noisy sequences from blogs and extracting good candidate sequence patterns from an abundance of text.
Analysis of Patent Databases Using VxInsight
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOYACK,KEVIN W.; WYLIE,BRIAN N.; DAVIDSON,GEORGE S.
2000-12-12
We present the application of a new knowledge visualization tool, VxInsight, to the mapping and analysis of patent databases. Patent data are mined and placed in a database, relationships between the patents are identified, primarily using the citation and classification structures, and the patents are then clustered using a proprietary force-directed placement algorithm. Related patents cluster together to produce a 3-D landscape view of the tens of thousands of patents. The user can navigate the landscape by zooming into or out of regions of interest. Querying the underlying database places a colored marker on each patent matching the query. Automatically generated labels, showing landscape content, update continually upon zooming. Optionally, citation links between patents may be shown on the landscape. The combination of these features enables powerful analyses of patent databases.
The UniProtKB guide to the human proteome
Breuza, Lionel; Poux, Sylvain; Estreicher, Anne; Famiglietti, Maria Livia; Magrane, Michele; Tognolli, Michael; Bridge, Alan; Baratin, Delphine; Redaschi, Nicole
2016-01-01
Advances in high-throughput and advanced technologies allow researchers to routinely perform whole genome and proteome analysis. For this purpose, they need high-quality resources providing comprehensive gene and protein sets for their organisms of interest. Using the example of the human proteome, we will describe the content of a complete proteome in the UniProt Knowledgebase (UniProtKB). We will show how manual expert curation of UniProtKB/Swiss-Prot is complemented by expert-driven automatic annotation to build a comprehensive, high-quality and traceable resource. We will also illustrate how the complexity of the human proteome is captured and structured in UniProtKB. Database URL: www.uniprot.org PMID:26896845
Automatic detection of artifacts in converted S3D video
NASA Astrophysics Data System (ADS)
Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail
2014-03-01
In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
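As an illustration of the edge-sharpness comparison described above, here is a simplified OpenCV sketch that scores mean gradient magnitude along detected edges in each view and reports their ratio. It is a toy stand-in for the paper's method: it reuses one edge mask for both views and ignores disparity compensation.

```python
# Illustrative sketch (not the authors' algorithm): a consistently large
# sharpness ratio between views flags potential edge-sharpness mismatch.
import cv2
import numpy as np

def edge_sharpness(gray, edge_mask):
    """Mean gradient magnitude on edge pixels: higher = sharper edges."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag[edge_mask > 0].mean()

def sharpness_mismatch(left_bgr, right_bgr):
    gl = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gl, 100, 200)   # simplification: one mask for both views
    return edge_sharpness(gl, edges) / edge_sharpness(gr, edges)
```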
NASA Astrophysics Data System (ADS)
Garcia-Estringana, P.; Latron, J.; Molina, A. J.; Llorens, P.
2012-04-01
Rainfall partitioning fluxes (throughfall and stemflow) have a large degree of temporal and spatial variability and may consequently lead to significant changes in the volume and composition of water that reaches the understory and the soil. The objective of this work is to study the effect of rainfall partitioning on the seasonal and spatial variability of the soil water content in a Mediterranean downy oak forest (Quercus pubescens) located in the Vallcebre research catchments (42° 12'N, 1° 49'E). The monitoring design, started in July 2011, consists of a set of 20 automatic rain recorders and 40 automatic soil moisture probes located below the canopy. One hundred hemispheric photographs of the canopy were used to place the instruments at representative locations (in terms of canopy cover) within the plot. Bulk rainfall, stemflow, and meteorological conditions above the forest cover are also automatically recorded. Canopy cover, in leaf and leafless periods, as well as biometric characteristics of the plot, are also regularly measured. This work presents the first results describing throughfall and soil moisture spatial variability during both the leaf and leafless periods. The main drivers of throughfall variability, such as canopy structure and meteorological conditions, are also analysed.
NASA Astrophysics Data System (ADS)
Biel, C.; Molina, A.; Aranda, X.; Llorens, P.; Savé, R.
2012-04-01
Tree plantation for wood production has been proposed to mitigate CO2-related climate change. Although these agroforestry systems can help maintain agriculture in areas located between rainfed crops and secondary forests, water scarcity in a Mediterranean climate could restrict their growth, and their presence will affect the water balance. Tree plantation management (species, plant density, irrigation, etc.) can hence be used to modify the water balance, improving water availability and buffering the water cycle. Soil water content and meteorological data are widely used in agroforestry systems as indicators of vegetation water use, and consequently to define water management. However, the available information on ecohydrological processes in this kind of ecosystem is scarce. The present work studies how the temporal and spatial variation of soil water content is affected by transpiration and interception loss fluxes in a Mediterranean rainfed plantation of cherry tree (Prunus avium) located in Caldes de Montbui (northeast Spain). From May to December 2011, rainfall partitioning, canopy transpiration, soil water content, and meteorological parameters were continuously recorded. Rainfall partitioning was measured in 6 trees, with 6 automatic rain recorders for throughfall and 1 automatic rain recorder for stemflow per tree. Transpiration was monitored in 12 nearby trees by means of heat pulse sap flow sensors. Soil water content was also measured at three different depths under selected trees and at two depths between rows without tree cover influence. This work presents the relationships between rainfall partitioning, transpiration, and soil water content evolution under the tree canopy. The effect of tree cover on soil water content dynamics is also analyzed.
Review of automatic detection of pig behaviours by using image analysis
NASA Astrophysics Data System (ADS)
Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Kong, Fantao
2017-06-01
Automatic detection of lying, moving, feeding, drinking, and aggressive behaviours of pigs by means of image analysis can reduce the observation workload of staff. It would help staff detect diseases or injuries of pigs early during breeding and improve the management efficiency of the swine industry. This study describes the progress of pig behaviour detection based on image analysis and advances in image segmentation of the pig body, segmentation of adhering pigs, and extraction of pig behaviour characteristic parameters. Challenges in achieving automatic detection of pig behaviours are summarized.
Call Me Guru: User Categories and Large-Scale Behavior in YouTube
NASA Astrophysics Data System (ADS)
Biel, Joan-Isaac; Gatica-Perez, Daniel
While existing studies on YouTube's massive user-generated video content have mostly focused on the analysis of videos, their characteristics, and network properties, little attention has been paid to the analysis of users' long-term behavior as it relates to the roles they self-define and (explicitly or not) play in the site. In this chapter, we present a statistical analysis of aggregated user behavior in YouTube from the perspective of user categories, a feature that allows people to ascribe to popular roles and to potentially reach certain communities. Using a sample of 270,000 users, we found that a high level of interaction and participation is concentrated on a relatively small, yet significant, group of users, following recognizable patterns of personal and social involvement. Based on our analysis, we also show that by using simple behavioral features from user profiles, people can be automatically classified according to their category with accuracy rates of up to 73%.
TRECVID: the utility of a content-based video retrieval evaluation
NASA Astrophysics Data System (ADS)
Hauptmann, Alexander G.
2006-01-01
TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in information retrieval from digital video. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies, including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts. Evaluations done in the context of the TRECVID benchmarks show that, generally, speech transcripts and annotations provide the single most important clue for successful retrieval. However, automatically finding the individual images is still a tremendous and unsolved challenge. The evaluations repeatedly found that none of the multimedia analysis and retrieval techniques provides a significant benefit over retrieval using only textual information such as automatic speech recognition transcripts or closed captions. In interactive systems, we do find significant differences among the top systems, indicating that interfaces can make a huge difference for effective video/image search. For interactive tasks, efficient interfaces require few key clicks but display large numbers of images for visual inspection by the user. The text search generally finds the right context region in the video, but to select specific relevant images we need good interfaces for easily browsing the storyboard pictures. In general, TRECVID has motivated the video retrieval community to be honest about what we don't know how to do well (sometimes through painful failures), and has focused us on the actual task of video retrieval, as opposed to flashy demos based on technological capabilities.
Automatic Analysis of Critical Incident Reports: Requirements and Use Cases.
Denecke, Kerstin
2016-01-01
Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The full potential of these sources of experiential knowledge often remains unrealized, since retrieval and analysis are difficult and time-consuming, and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we describe how faceted search could offer intuitive retrieval of critical incident reports and how text mining could support the analysis of relations among events. Realising an automated analysis requires natural language processing. We therefore analyse the language of critical incident reports and derive requirements for automatic processing methods. We learned that there is huge potential for automatic analysis of incident reports, but there are still challenges to be solved.
Automatic analysis of microscopic images of red blood cell aggregates
NASA Astrophysics Data System (ADS)
Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.
2015-06-01
Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted as a routine in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient, and economical, and at the same time independent of the user performing the analysis, ensuring repeatability.
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.
2017-01-01
Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344
Slyter, Leonard L.
1975-01-01
An artificial rumen continuous culture with pH control, automated input of water-soluble and water-insoluble substrates, controlled mixing of contents, and a collection system for gas is described. PMID:16350029
A method of lead determination in human teeth by energy dispersive X-ray fluorescence (EDXRF).
Sargentini-Maier, M L; Frank, R M; Leroy, M J; Turlot, J C
1988-12-01
A systematic sampling procedure was combined with a method of energy dispersive X-ray fluorescence (EDXRF) to study lead content and its variations in human teeth. On serial ground sections made, with a special diamond rotating disk, on unembedded permanent teeth of inhabitants of Strasbourg, two series of 500-micron punch biopsies were taken systematically in 5 directions from the tooth surface to the inner pulpal dentine with a micro-punching unit. In addition, pooled fragments of enamel and dentine were prepared for each tooth. On each punched fragment or pooled sample, lead content was determined after dissolution in ultrapure nitric acid, on a 4-micron thick polypropylene film, and irradiation with a Siemens EDXRF prototype with direct sample excitation by a high power X-ray tube with a molybdenum anode. Fluorescence was detected by a Si(Li) detector, and calcium was used as an internal standard. This technique allowed a rapid, automatic, multielemental, and non-destructive analysis of microsamples with good detection limits.
Tackling action-based video abstraction of animated movies for video browsing
NASA Astrophysics Data System (ADS)
Ionescu, Bogdan; Ott, Laurent; Lambert, Patrick; Coquin, Didier; Pacureanu, Alexandra; Buzuloiu, Vasile
2010-07-01
We address the issue of producing automatic video abstracts in the context of the video indexing of animated movies. For a quick browse of a movie's visual content, we propose a storyboard-like summary, which follows the movie's events by retaining one key frame for each specific scene. To capture the shot's visual activity, we use histograms of cumulative interframe distances, and the key frames are selected according to the distribution of the histogram's modes. For a preview of the movie's exciting action parts, we propose a trailer-like video highlight, whose aim is to show only the most interesting parts of the movie. Our method is based on a relatively standard approach, i.e., highlighting action through the analysis of the movie's rhythm and visual activity information. To suit every type of movie content, including predominantly static movies or movies without exciting parts, the concept of action depends on the movie's average rhythm. The efficiency of our approach is confirmed through several end-user studies.
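A toy numpy sketch of the key-frame idea described above: accumulate interframe distances, histogram the cumulative activity, and keep the frame nearest the dominant mode. The distance measure and mode-based selection are simplified assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch: pick one key frame per shot from cumulative
# interframe distances (simplified stand-in for the paper's method).
import numpy as np

def interframe_distances(frames):
    """frames: list of grayscale arrays; mean absolute difference per step."""
    return np.array([np.abs(frames[i + 1].astype(float) - frames[i]).mean()
                     for i in range(len(frames) - 1)])

def key_frame_index(frames, bins=16):
    d = np.cumsum(interframe_distances(frames))   # cumulative visual activity
    hist, edges = np.histogram(d, bins=bins)
    mode = hist.argmax()                          # dominant activity level
    center = 0.5 * (edges[mode] + edges[mode + 1])
    return int(np.abs(d - center).argmin()) + 1   # frame at that activity level
```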
Paveley, Ross A.; Mansour, Nuha R.; Hallyburton, Irene; Bleicher, Leo S.; Benn, Alex E.; Mikic, Ivana; Guidi, Alessandra; Gilbert, Ian H.; Hopkins, Andrew L.; Bickle, Quentin D.
2012-01-01
Sole reliance on one drug, Praziquantel, for treatment and control of schistosomiasis raises concerns about development of widespread resistance, prompting renewed interest in the discovery of new anthelmintics. To discover new leads we designed an automated label-free, high content-based, high throughput screen (HTS) to assess drug-induced effects on in vitro cultured larvae (schistosomula) using bright-field imaging. Automatic image analysis and Bayesian prediction models define morphological damage, hit/non-hit prediction and larval phenotype characterization. Motility was also assessed from time-lapse images. In screening a 10,041 compound library the HTS correctly detected 99.8% of the hits scored visually. A proportion of these larval hits were also active in an adult worm ex-vivo screen and are the subject of ongoing studies. The method allows, for the first time, screening of large compound collections against schistosomes and the methods are adaptable to other whole organism and cell-based screening by morphology and motility phenotyping. PMID:22860151
Online, automatic, ionospheric maps: IRI-PLAS-MAP
NASA Astrophysics Data System (ADS)
Arikan, F.; Sezen, U.; Gulyaeva, T. L.; Cilibas, O.
2015-04-01
Global and regional behavior of the ionosphere is an important component of space weather. The peak height and critical frequency of the ionospheric layer of maximum ionization, namely hmF2 and foF2, and the total number of electrons on a ray path, the Total Electron Content (TEC), are the most investigated and monitored ionospheric quantities for capturing and observing ionospheric variability. Typically, ionospheric models such as the International Reference Ionosphere (IRI) can provide the electron density profile, the critical parameters of the ionospheric layers, and the ionospheric electron content for a given location, date, and time. Yet the IRI model offers only the foF2 STORM option for reflecting the dynamics of ionospheric, plasmaspheric, and geomagnetic storms. Global Ionospheric Maps (GIM) of the global TEC distribution, estimated from ground-based GPS stations, are provided by IGS analysis centers and can capture the actual dynamics of the ionosphere and plasmasphere, but this service is not available for other ionospheric observables. In this study, a unique and original space weather service, IRI-PLAS-MAP, is introduced, available from http://www.ionolab.org
Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L
2018-01-01
The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without and with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
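A minimal sketch of the ranking step described: a normalized intensity histogram from the segmented liver region, and database scans ordered by a weighted distance. The HU range, bin count, and L1 weighting below are illustrative assumptions; segmentation and weight learning are assumed to happen elsewhere.

```python
# Hedged sketch of histogram features plus weighted ranking for M-CBIR.
import numpy as np

def liver_features(intensities, bins=32, lo=-100, hi=300):
    """Normalized intensity histogram of voxels inside the liver mask (HU)."""
    hist, _ = np.histogram(intensities, bins=bins, range=(lo, hi))
    return hist / max(hist.sum(), 1)

def rank_database(query_feat, db_feats, weights):
    """Smaller weighted L1 distance = more similar lesion annotation profile."""
    dists = [(weights * np.abs(query_feat - f)).sum() for f in db_feats]
    return np.argsort(dists)                  # most similar scans first
```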
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egorov, Oleg; O'Hara, Matthew J.; Grate, Jay W.
An automated fluidic instrument is described that rapidly determines the total 99Tc content of aged nuclear waste samples, where the matrix is chemically and radiologically complex and the existing speciation of the 99Tc is variable. The monitor links microwave-assisted sample preparation with an automated anion exchange column separation and detection using a flow-through solid scintillator detector. The sample preparation steps acidify the sample, decompose organics, and convert all Tc species to the pertechnetate anion. The column-based anion exchange procedure separates the pertechnetate from the complex sample matrix, so that radiometric detection can provide accurate measurement of 99Tc. We developed a preprogrammed spike addition procedure to automatically determine matrix-matched calibration. The overall measurement efficiency that is determined simultaneously provides a self-diagnostic parameter for the radiochemical separation and overall instrument function. Continuous, automated operation was demonstrated over the course of 54 h, which resulted in the analysis of 215 samples plus 54 hourly spike-addition samples, with consistent overall measurement efficiency for the operation of the monitor. A sample can be processed and measured automatically in just 12.5 min with a detection limit of 23.5 Bq/mL of 99Tc in low activity waste (0.495 mL sample volume), with better than 10% RSD precision at concentrations above the quantification limit. This rapid automated analysis method was developed to support nuclear waste processing operations planned for the Hanford nuclear site.
Simultenious binary hash and features learning for image retrieval
NASA Astrophysics Data System (ADS)
Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.
2016-05-01
Content-based image retrieval systems have many applications in the modern world. The most important is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task. The main issue is the need to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to a hash-value space while trying to preserve as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two kinds of neural networks: convolutional neural networks for image description and an autoencoder for feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.
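A minimal PyTorch sketch of the autoencoder half of such a pipeline: a feature vector (e.g., from a CNN) is mapped to a low-dimensional code whose sign gives the binary hash, while a reconstruction loss encourages the code to preserve image content. The architecture, dimensions, and single loss term are simplified assumptions, not the authors' network.

```python
# Hedged sketch: autoencoder mapping CNN features to binary hash codes.
import torch
import torch.nn as nn

class HashAutoencoder(nn.Module):
    def __init__(self, dim_in=4096, dim_code=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 512), nn.ReLU(),
                                     nn.Linear(512, dim_code), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(dim_code, 512), nn.ReLU(),
                                     nn.Linear(512, dim_in))

    def forward(self, x):
        code = self.encoder(x)               # values in (-1, 1) via tanh
        return code, self.decoder(code)

    def hash(self, x):
        return self.encoder(x) > 0           # binary hash code per image

model = HashAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 4096)                     # stand-in CNN feature vectors
code, recon = model(x)
loss = nn.functional.mse_loss(recon, x)      # reconstruction preserves content
opt.zero_grad(); loss.backward(); opt.step()
```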
NASA Astrophysics Data System (ADS)
Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.
2011-02-01
In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieval of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g., objects made of steel are more likely to be buildings). They are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single-material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped gather more information from the images, since it could assign multiple classes to an image. A framework for the I2T methodology is presented.
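A sketch of the feature extraction and classifier setup described (Gabor texture statistics plus RGB histograms feeding an SVM), using scikit-image and scikit-learn. The filter frequencies, orientations, bin counts, and RBF kernel are illustrative choices, not the study's exact parameters.

```python
# Hedged sketch: Gabor texture + RGB histogram features with an SVM.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def material_features(rgb):
    """rgb: H x W x 3 uint8 image; returns a texture + color feature vector."""
    gray = rgb.mean(axis=2) / 255.0
    tex = []
    for freq in (0.1, 0.2, 0.4):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(gray, frequency=freq, theta=theta)
            tex += [real.mean(), real.var()]           # filter response statistics
    color = [np.histogram(rgb[..., c], bins=16, range=(0, 256))[0]
             for c in range(3)]
    color = np.concatenate(color) / rgb[..., 0].size   # normalized histograms
    return np.concatenate([tex, color])

# single multi-class SVM over the six material classes (one-vs-rest)
clf = SVC(kernel="rbf", decision_function_shape="ovr")
# clf.fit(np.stack([material_features(im) for im in train_imgs]), train_labels)
```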
Trust, control strategies and allocation of function in human-machine systems.
Lee, J; Moray, N
1992-10-01
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
Recommending images of user interests from the biomedical literature
NASA Astrophysics Data System (ADS)
Clukey, Steven; Xu, Songhua
2013-03-01
Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate through the content corpus and conveniently locate materials of their interests. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time and energy consuming. A better system would be one that can automatically deliver relevant content to a researcher without having the end user manually manifest one's search intent and interests via search queries. Such a computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to the person's interests accordingly. The technology can greatly improve a researcher's ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests among the vast amount of the biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.
A framework for automatic information quality ranking of diabetes websites.
Belen Sağlam, Rahime; Taskaya Temizel, Tugba
2015-01-01
Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance, and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives, and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average (p < 0.001), which is greater than that of the other automated methods proposed in the literature (average r score of 0.33).
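Purely as an illustration of how a lexical resource like SentiWordNet can be combined with a simple relevance signal, here is a hedged Python sketch scoring a page by term-frequency relevance and average word objectivity. The term list, weighting, and formula are invented for this example and are not the authors' framework. (Requires the NLTK 'punkt', 'wordnet', and 'sentiwordnet' data.)

```python
# Illustrative only: combine topical relevance with a SentiWordNet-based
# neutrality signal into one ranking score.
from collections import Counter
from nltk.corpus import sentiwordnet as swn
from nltk.tokenize import word_tokenize

DIABETES_TERMS = {"diabetes", "insulin", "glucose", "glycemic", "hba1c"}  # hypothetical

def objectivity(word):
    """Mean SentiWordNet objectivity across the word's synsets (1 = neutral)."""
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 1.0
    return sum(s.obj_score() for s in synsets) / len(synsets)

def score_page(text, alpha=0.7):
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    counts = Counter(tokens)
    relevance = sum(counts[t] for t in DIABETES_TERMS) / max(len(tokens), 1)
    neutrality = sum(objectivity(t) for t in set(tokens)) / max(len(set(tokens)), 1)
    return alpha * relevance + (1 - alpha) * neutrality   # higher = better
```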
Comparison of automatic control systems
NASA Technical Reports Server (NTRS)
Oppelt, W
1941-01-01
This report deals with a reciprocal comparison of an automatic pressure control, an automatic rpm control, an automatic temperature control, and an automatic directional control. It shows the difference between the "faultproof" regulator and the actual regulator which is subject to faults, and develops this difference as far as possible in a parallel manner with regard to the control systems under consideration. Such an analysis affords, particularly in its extension to the faults of the actual regulator, a deep insight into the mechanism of the regulator process.
1986-08-01
THE SCIENCE OF AND ADVANCED TECHNOLOGY FOR COST-EFFECTIVE MANUFACTURE OF HIGH PRECISION ENGINEERING PRODUCTS. ONR Contract No. 83K0385, Final Report Vol. 5: AUTOMATIC... Drawing #: 03116-6233. Raw material: 500 mm diameter and 3000 mm length. Material: alloy steel, high carbon content, quenched to min. 45 Rc.
Automatic indexing of compound words based on mutual information for Korean text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan Koo Kim; Yoo Kun Cho
In this paper, we present an automatic indexing technique for compound words suitable to an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present the decomposition rules applicable to consecutive nouns to extract the full content of a text. Finally, we propose a measure to estimate the usefulness of a term, mutual information, to calculate the degree of word association of compound words, based on information-theoretic notions. By applying this method, our system raised the precision rate for compound words from 72% to 87%.
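The mutual information measure referred to here is, in its standard form, the log ratio of a word pair's joint probability to the product of its marginal probabilities. A minimal sketch from raw corpus counts follows; the paper's exact estimator may differ.

```python
# Standard pointwise mutual information for a candidate compound (x, y).
import math

def mutual_information(count_xy, count_x, count_y, n_pairs, n_words):
    """I(x;y) = log2( P(x,y) / (P(x) * P(y)) ) from raw frequencies."""
    p_xy = count_xy / n_pairs          # joint: x immediately followed by y
    p_x = count_x / n_words
    p_y = count_y / n_words
    return math.log2(p_xy / (p_x * p_y))

# Keep a noun pair as one compound indexing term only if its MI exceeds a
# threshold; otherwise decompose it into its component nouns.
```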
Trend of Narratives in the Age of Misinformation
Bessi, Alessandro; Zollo, Fabiana; Del Vicario, Michela; Scala, Antonio; Caldarelli, Guido; Quattrociocchi, Walter
2015-01-01
Social media enabled a direct path from producer to consumer of contents changing the way users get informed, debate, and shape their worldviews. Such a disintermediation might weaken consensus on social relevant issues in favor of rumors, mistrust, or conspiracy thinking—e.g., chem-trails inducing global warming, the link between vaccines and autism, or the New World Order conspiracy. Previous studies pointed out that consumers of conspiracy-like content are likely to aggregate in homophile clusters—i.e., echo-chambers. Along this path we study, by means of a thorough quantitative analysis, how different topics are consumed inside the conspiracy echo-chamber in the Italian Facebook. Through a semi-automatic topic extraction strategy, we show that the most consumed contents semantically refer to four specific categories: environment, diet, health, and geopolitics. We find similar consumption patterns by comparing users activity (likes and comments) on posts belonging to these different semantic categories. Finally, we model users mobility across the distinct topics finding that the more a user is active, the more he is likely to span on all categories. Once inside a conspiracy narrative users tend to embrace the overall corpus. PMID:26275043
Automatically augmenting lifelog events using pervasively generated content from millions of people.
Doherty, Aiden R; Smeaton, Alan F
2010-01-01
In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.
Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis
Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng
2018-01-01
99mTc-TRODAT-1 is a type of drug that can bind to dopamine transporters in living organisms and is often used in SPECT imaging to observe changes in dopamine activity uptake in the striatum. It is therefore widely used in studies on the clinical diagnosis of Parkinson's disease (PD) and movement-related disorders. Conventional 99mTc-TRODAT-1 SPECT image evaluation relies mainly on visual inspection or manual selection of ROIs for semiquantitative analysis to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective opinions of observers, which leads to human error, and they suffer from long duration, increased effort, and low reproducibility. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in the SPM software and two self-generated MRI-based and HMPAO-based TRODAT-1 templates) for the semiquantitative analysis of striatal phantom and clinical images. The results of automatic analysis with the three templates were compared with results from conventional manual analysis to examine the feasibility of automatic analysis and the effects of drug templates on automatic semiquantitative results. The comparison showed that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874
Headway Deviation Effects on Bus Passenger Loads : Analysis of Tri-Met's Archived AVL-APC Data
DOT National Transportation Integrated Search
2003-01-01
In this paper we empirically analyze the relationship between transit service headway deviations and passenger loads, using archived data from Tri-Met's automatic vehicle location and automatic passenger counter systems. The analysis employs two-stage...
Social Aspects of Photobooks: Improving Photobook Authoring from Large-Scale Multimedia Analysis
NASA Astrophysics Data System (ADS)
Sandhaus, Philipp; Boll, Susanne
With photo albums we aim to capture personal events such as weddings, vacations, and parties of family and friends. By arranging photo prints, captions and paper souvenirs such as tickets over the pages of a photobook, we tell a story to capture and share our memories. The photo memories captured in such a photobook tell us much about the content and the relevance of the photos for the user. The way in which we select photos and arrange them in the photo album reveals a lot about the events, persons and places in the photos: captions describe content; the closeness and arrangement of photos express relations between photos and their content, and especially the social relations of the author and the persons present in the album. Nowadays the process of photo album authoring has become digital: photos and texts can be arranged and laid out with the help of authoring tools in a digital photo album, which can be printed as a physical photobook. In this chapter we present results of the analysis of a large repository of digitally mastered photobooks to learn about their social aspects. We explore to what degree a social aspect can be identified and how expressive and vivid different classes of photobooks are. The photobooks are anonymized, real-world photobooks from customers of our industry partner CeWe Color. The knowledge gained from this social photobook analysis is meant both to better understand how people author their photobooks and to improve the automatic selection and layout of photobooks.
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings obtained by projecting a reference grating onto an object under test are subjected to digital analysis. The electronic analysis procedures of deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation are performed. Based on the digital moire methods, we have developed a practical measurement system, with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
Lacson, Ronilda C; Barzilay, Regina; Long, William J
2006-10-01
Spoken medical dialogue is a valuable source of information for patients and caregivers. This work presents a first step towards automatic analysis and summarization of spoken medical dialogue. We first abstract a dialogue into a sequence of semantic categories using linguistic and contextual features integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We then describe and implement a summarizer that utilizes this automatically induced structure. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans. In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naïve summarizer (p<0.05). This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.
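As a hedged illustration of the classification step (not the authors' exact feature set or model), a lexical TF-IDF classifier can map dialogue turns to semantic categories; the utterances and labels below are invented examples.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    turns = ["how often should I take it", "the pain started last night",
             "take one tablet twice daily", "do I need another appointment"]
    labels = ["medication", "symptom", "medication", "logistics"]

    # Fit a simple lexical classifier over the labeled turns.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(turns, labels)
    print(clf.predict(["my chest hurts when I walk"]))  # expect 'symptom'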
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
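The patent gives no code; the following is only an illustrative sketch of compiling per-step data and summing it into process metrics. All field and metric names are assumptions, not the patented engine's actual schema.

    from dataclasses import dataclass

    @dataclass
    class ProcessStep:
        name: str
        cycle_time_s: float   # value-adding time at this step
        queue_time_s: float   # waiting time before this step
        defect_rate: float    # fraction scrapped at this step

    def process_metrics(steps):
        total = sum(s.cycle_time_s + s.queue_time_s for s in steps)
        value_added = sum(s.cycle_time_s for s in steps)
        yield_rate = 1.0
        for s in steps:
            yield_rate *= (1.0 - s.defect_rate)  # rolled throughput yield
        return {"lead_time_s": total,
                "value_added_ratio": value_added / total,
                "rolled_throughput_yield": yield_rate}

    steps = [ProcessStep("cut", 30, 300, 0.01), ProcessStep("weld", 60, 600, 0.02)]
    print(process_metrics(steps))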
WCE video segmentation using textons
NASA Astrophysics Data System (ADS)
Gallo, Giovanni; Granata, Eliana
2010-03-01
Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine non-invasively. Medical specialists look for significant events in a WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour. This limits WCE usage. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine and colon would therefore be of great advantage. In this paper we propose to use textons for the automatic discrimination of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The experiments were conducted on ten video segments extracted from WCE videos, in which the significant events had been previously labelled by experts. Results show that the proposed method can eliminate up to 70% of the frames from further investigation, so that the doctors' direct analysis can be concentrated on eventful frames only. A graphical tool showing sudden changes in texton frequencies for each frame is also proposed as a visual aid for finding clinically relevant segments of the video.
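A sketch of the per-frame features the paper names (hue, saturation, value, and Gabor-bank responses) using scikit-image; the bank's orientations and frequencies here are assumptions.

    import numpy as np
    from skimage.color import rgb2hsv, rgb2gray
    from skimage.filters import gabor

    def frame_features(rgb_frame: np.ndarray) -> np.ndarray:
        hsv = rgb2hsv(rgb_frame)
        feats = [hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()]
        gray = rgb2gray(rgb_frame)
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            for freq in (0.1, 0.3):
                real, _ = gabor(gray, frequency=freq, theta=theta)
                feats.append(np.abs(real).mean())  # mean filter-response energy
        return np.array(feats)

    frame = np.random.rand(64, 64, 3)  # stand-in for a WCE video frame
    print(frame_features(frame).shape)  # 3 HSV means + 8 Gabor energies = (11,)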
[Fat and fatty acids chosen in chocolates content].
Tarkowski, Andrzej; Kowalczyk, Magdalena
2007-01-01
The objective of the present work was to compare the content of fat and chosen fatty acids in chocolates available on the national market. Fat and fatty acid content was investigated in 14 milk chocolates, divided into three groups: without supplements, with supplements, and with stuffing. Crude fat content in the chocolates was determined on a Soxhlet automatic apparatus. Saturated and unsaturated fatty acid content was determined using a gas chromatographic method. Fat and fatty acid contents differed between chocolates. The highest crude fat content was found in chocolates with stuffing (31.8%) and without supplements (28.9%). The sum of saturated fatty acids in fat (above 62%) was highest, with low variation, in the chocolates without supplements. Among the saturated and unsaturated fatty acids, palmitic, stearic, oleic and linoleic acids dominated, depending on the kind of chocolate. Nut supplements in chocolates influenced the high oleic and linoleic acid levels.
Methods for automatically analyzing humpback song units.
Rickwood, Peter; Taylor, Andrew
2008-03-01
This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals and unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in the automated analysis not only of humpbacks but also of other bioacoustic settings where automated analysis is desirable.
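A hedged sketch of the unsupervised step: cluster song-unit feature vectors and pick the number of categories automatically, here via the silhouette score, one common heuristic; the paper's own criterion may differ, and the features below are synthetic.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Three synthetic "unit types" in a 5-D feature space.
    features = np.vstack([rng.normal(m, 0.3, (40, 5)) for m in (0, 3, 6)])

    best_k, best_s = None, -1.0
    for k in range(2, 8):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        s = silhouette_score(features, labels)
        if s > best_s:
            best_k, best_s = k, s
    print(f"chosen number of signal categories: {best_k}")  # expect 3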
Creating a medical dictionary using word alignment: the influence of sources and resources.
Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Ahlfeldt, Hans
2007-11-23
Automatic word alignment of parallel texts with the same content in different languages is, among other things, used to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH, and on how the terminology systems and types of resources influence the quality. We automatically word aligned the terminology systems using static resources (like dictionaries), statistical resources (like statistically derived dictionaries), and training resources, which were generated from manual word alignment. We varied which parts of the terminology systems we used to generate the resources, which parts we word aligned, and which types of resources we used in the alignment process, to explore the influence the different terminology systems and resources have on recall and precision. After the analysis, we used the best configuration of the automatic word alignment to generate candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. The results indicate that more resources and resource types give better results, but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10, and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static, statistical and training resources, and noticeably better results than a union of static and statistical resources alone. The verified English-Swedish dictionary contains 24,000 term pairs in base form. In summary, more resources give better results in the automatic word alignment, but some resources give only small improvements. The most important type of resource is training resources, and the most general resources were generated from ICD-10.
NASA Astrophysics Data System (ADS)
Leith, Alex P.; Ratan, Rabindra A.; Wohn, Donghee Yvette
2016-08-01
Given the diversity and complexity of education game mechanisms and topics, this article contributes to a theoretical understanding of how game mechanisms "map" to educational topics through inquiry-based learning. Namely, the article examines the presence of evolution through natural selection (ENS) in digital games. ENS is a fundamentally important and widely misunderstood theory. This analysis of ENS portrayal in digital games provides insight into the use of games in teaching ENS. Systematic database search results were coded for the three principles of ENS: phenotypic variation, differential fitness, and fitness heritability. Though thousands of games use the term evolution, few presented elements of evolution, and even fewer contained all principles of ENS. Games developed to specifically teach evolution were difficult to find through Web searches. These overall deficiencies in ENS games reflect the inherent incompatibility between game control elements and the automatic process of ENS.
19 CFR 360.103 - Automatic issuance of import licenses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 3 2010-04-01 2010-04-01 false Automatic issuance of import licenses. 360.103 Section 360.103 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.103 Automatic issuance of import licenses. (a) In general. Steel import...
19 CFR 360.103 - Automatic issuance of import licenses.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false Automatic issuance of import licenses. 360.103 Section 360.103 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.103 Automatic issuance of import licenses. (a) In general. Steel import...
19 CFR 360.103 - Automatic issuance of import licenses.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false Automatic issuance of import licenses. 360.103 Section 360.103 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.103 Automatic issuance of import licenses. (a) In general. Steel import...
19 CFR 360.103 - Automatic issuance of import licenses.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 3 2012-04-01 2012-04-01 false Automatic issuance of import licenses. 360.103 Section 360.103 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.103 Automatic issuance of import licenses. (a) In general. Steel import...
19 CFR 360.103 - Automatic issuance of import licenses.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 3 2011-04-01 2011-04-01 false Automatic issuance of import licenses. 360.103 Section 360.103 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.103 Automatic issuance of import licenses. (a) In general. Steel import...
Automatic Thesaurus Generation for an Electronic Community System.
ERIC Educational Resources Information Center
Chen, Hsinchun; And Others
1995-01-01
This research reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used include term filtering, automatic indexing, and cluster analysis. The Worm Community System, used by molecular biologists studying the nematode worm C. elegans, was used as the testbed for this research.…
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
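The idea can be illustrated with a small interval class: carry [lo, hi] bounds through a formula instead of propagating standard errors. This is a generic sketch, not INTLAB; note that repeated variables make interval bounds conservative, but the bounds remain guaranteed.

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)
        def __mul__(self, o):
            ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
            return Interval(min(ps), max(ps))
        def __repr__(self):
            return f"[{self.lo:.4g}, {self.hi:.4g}]"

    # Example: z = x * y + x with x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2.
    x, y = Interval(1.9, 2.1), Interval(2.8, 3.2)
    print(x * y + x)  # guaranteed bounds on z, no derivative bookkeeping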
Moderators of the Relationship between Implicit and Explicit Evaluation
Nosek, Brian A.
2005-01-01
Automatic and controlled modes of evaluation sometimes provide conflicting reports of the quality of social objects. This paper presents evidence for four moderators of the relationship between automatic (implicit) and controlled (explicit) evaluations. Implicit and explicit preferences were measured for a variety of object pairs using a large sample. The average correlation was r = .36, and 52 of the 57 object pairs showed a significant positive correlation. Results of multilevel modeling analyses suggested that: (a) implicit and explicit preferences are related, (b) the relationship varies as a function of the objects assessed, and (c) at least four variables moderate the relationship – self-presentation, evaluative strength, dimensionality, and distinctiveness. The variables moderated implicit-explicit correspondence across individuals and accounted for much of the observed variation across content domains. The resulting model of the relationship between automatic and controlled evaluative processes is grounded in personal experience with the targets of evaluation. PMID:16316292
Film grain synthesis and its application to re-graining
NASA Astrophysics Data System (ADS)
Schallauer, Peter; Mörzinger, Roland
2006-01-01
Digital film restoration and special effects compositing increasingly require automatic procedures for movie re-graining, since missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain which is perceptually similar to a given grain template, which has high spatial and temporal variation, and which can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesizes artificial grain based on an input grain template and composites it with the original image content. Due to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented: one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.
Automatic Invocation Linking for Collaborative Web-Based Corpora
NASA Astrophysics Data System (ADS)
Gardner, James; Krowne, Aaron; Xiong, Li
Collaborative online encyclopedias or knowledge bases such as Wikipedia and PlanetMath are becoming increasingly popular because of their open access, comprehensive and interlinked content, rapid and continual updates, and community interactivity. To understand a particular concept in these knowledge bases, a reader needs to learn about related and underlying concepts. In this chapter, we introduce the problem of invocation linking for collaborative encyclopedias or knowledge bases, review the state of the art for invocation linking including the popular linking system of Wikipedia, discuss the problems and challenges of automatic linking, and present the NNexus approach, an abstraction and generalization of the automatic linking system used by PlanetMath.org. The chapter emphasizes both research problems and practical design issues through discussion of real-world scenarios, and hence is suitable both for researchers in web intelligence and for practitioners looking to adopt the techniques.
Mogol, Burçe Ataç; Gökmen, Vural
2014-05-01
Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed for extracting mean colour or featured colour information from digital images of foods. These types of information may be of particular importance, as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies, and porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in decision-making where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
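A short sketch of the mean-colour feature discussed above: convert an RGB image to CIE L*a*b* with scikit-image and take the mean a* (red-green) channel. The stand-in image is illustrative, not real product data.

    import numpy as np
    from skimage.color import rgb2lab

    def mean_a_star(rgb: np.ndarray) -> float:
        """rgb: float image in [0, 1], shape (H, W, 3)."""
        lab = rgb2lab(rgb)
        return float(lab[..., 1].mean())  # channel 1 is a*

    chip = np.random.rand(100, 100, 3)  # stand-in for a potato-chip photo
    print(f"mean a* = {mean_a_star(chip):.2f}")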
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
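A compact, self-contained sketch of the optimization step: a basic particle swarm fitting the parameters of a simple quadratic-phase chirp by least squares. The chirp model and PSO settings are simplified stand-ins for the paper's polynomial-phase SEP model and its Matlab implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 400)
    true_p = np.array([1.5, 5.0, 3.0])  # amplitude A, frequency f0, chirp rate k

    def chirp(p):
        return p[0] * np.cos(2 * np.pi * (p[1] * t + p[2] * t ** 2))

    signal = chirp(true_p) + rng.normal(0, 0.1, t.size)

    def cost(p):
        return np.mean((chirp(p) - signal) ** 2)

    lo, hi = np.array([0.0, 0.0, 0.0]), np.array([3.0, 10.0, 6.0])
    pos = rng.uniform(lo, hi, (30, 3))       # 30 particles in parameter space
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    for _ in range(200):
        g = pbest[pcost.argmin()]            # swarm's global best
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]

    print("estimated A, f0, k:", pbest[pcost.argmin()].round(2))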
Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster
NASA Astrophysics Data System (ADS)
Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song
2015-02-01
The failure of the rectangular clutch spring of an automatic slack adjuster directly affects the adjuster's operation. We establish a structural mechanics model of the automatic slack adjuster's rectangular clutch spring based on its working principle and mechanical structure. We then load this model into the ANSYS Workbench FEA system to predict the fatigue life of the rectangular clutch spring. FEA results show that the fatigue life of the rectangular clutch spring is 2.0403×10⁵ cycles under braking loads. In addition, fatigue tests of 20 automatic slack adjusters were carried out on a fatigue test bench to verify the structural mechanics model. The experimental results show that the mean fatigue life of the rectangular clutch spring is 1.9101×10⁵ cycles, which agrees with the finite element analysis results obtained with the ANSYS Workbench FEA system.
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, and television channels. The necessity to wear glasses is, however, often considered an obstacle which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best-performing proposals.
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
Invited review: technical solutions for analysis of milk constituents and abnormal milk.
Brandt, M; Haeussermann, A; Hartung, E
2010-02-01
Information about constituents of milk and visual alterations can be used for management support in improving mastitis detection, monitoring fertility and reproduction, and adapting individual diets. Numerous sensors that gather this information are either currently available or in development. Nevertheless, there is still a need to adapt these sensors to special requirements of on-farm utilization such as robustness, calibration and maintenance, costs, operating cycle duration, and high sensitivity and specificity. This paper provides an overview of available sensors, ongoing research, and areas of application for analysis of milk constituents. Currently, the recognition of abnormal milk and the control of udder health is achieved mainly by recording electrical conductivity and changes in milk color. Further indicators of inflammation were recently investigated either to satisfy the high specificity necessary for automatic separation of milk or to create reliable alarm lists. Likewise, milk composition, especially fat:protein ratio, milk urea nitrogen content, and concentration of ketone bodies, provides suitable information about energy and protein supply, roughage fraction in the diet, and metabolic imbalances in dairy cows. In this regard, future prospects are to use frequent on-farm measurements of milk constituents for short-term automatic nutritional management. Finally, measuring progesterone concentration in milk helps farmers detect ovulation, pregnancy, and infertility. Monitoring systems for on-farm or on-line analysis of milk composition are mostly based on infrared spectroscopy, optical methods, biosensors, or sensor arrays. Their calibration and maintenance requirements have to be checked thoroughly before they can be regularly implemented on dairy farms. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Computer program for analysis of hemodynamic response to head-up tilt test
NASA Astrophysics Data System (ADS)
Świątek, Eliza; Cybulski, Gerard; Koźluk, Edward; Piątkowska, Agnieszka; Niewiadomski, Wiktor
2014-11-01
The aim of this work was to create a computer program, written in the MATLAB environment, that enables the visualization and analysis of hemodynamic parameters recorded during a passive tilt test using the CNS Task Force Monitor System. The application was created to help assess the relationship between the values and dynamics of changes of selected parameters and the risk of orthostatic syncope. The signal analysis included: R-R intervals (RRI), heart rate (HR), systolic blood pressure (sBP), diastolic blood pressure (dBP), mean blood pressure (mBP), stroke volume (SV), stroke index (SI), cardiac output (CO), cardiac index (CI), total peripheral resistance (TPR), total peripheral resistance index (TPRI), ventricular ejection time (LVET) and thoracic fluid content (TFC). The program enables the user to visualize waveforms for a selected parameter and to perform smoothing with selected moving-average parameters. It allows one to construct a graph of means for any range, and a Poincaré plot for a selected time range. The program automatically determines the average value of a parameter before the tilt, its minimum and maximum values immediately after the change of position, and the times of their occurrence; automatically detected points can be corrected manually. For the R-R interval, it determines the acceleration index (AI) and the brake index (BI). Calculated values can be saved to an XLS file with a name specified by the user. The application has a user-friendly graphical interface and can run on a computer that has no MATLAB software.
Arnemann, Philip-Helge; Hessler, Michael; Kampmeier, Tim; Morelli, Andrea; Van Aken, Hugo Karel; Westphal, Martin; Rehberg, Sebastian; Ertmer, Christian
2016-12-01
Life-threatening diseases of critically ill patients are known to derange microcirculation. Automatic analysis of microcirculation would provide a bedside diagnostic tool for microcirculatory disorders and allow immediate therapeutic decisions based upon microcirculation analysis. After induction of general anaesthesia and instrumentation for haemodynamic monitoring, haemorrhagic shock was induced in ten female sheep by stepwise blood withdrawal of 3 × 10 mL per kilogram body weight. Before and after the induction of haemorrhagic shock, haemodynamic variables, samples for blood gas analysis, and videos of conjunctival microcirculation were obtained by incident dark field illumination microscopy. Microcirculatory videos were analysed (1) manually with AVA software version 3.2 by an experienced user and (2) automatically by AVA software version 4.2 for total vessel density (TVD), perfused vessel density (PVD) and proportion of perfused vessels (PPV). Correlation between the two analysis methods was examined by intraclass correlation coefficient and Bland-Altman analysis. The induction of haemorrhagic shock decreased the mean arterial pressure (from 87 ± 11 to 40 ± 7 mmHg; p < 0.001), stroke volume index (from 38 ± 14 to 20 ± 5 mL·m⁻²; p = 0.001) and cardiac index (from 2.9 ± 0.9 to 1.8 ± 0.5 L·min⁻¹·m⁻²; p < 0.001), and increased the heart rate (from 72 ± 9 to 87 ± 11 bpm; p < 0.001) and lactate concentration (from 0.9 ± 0.3 to 2.0 ± 0.6 mmol·L⁻¹; p = 0.001). Manual analysis showed no change in TVD (17.8 ± 4.2 to 17.8 ± 3.8 mm·mm⁻²; p = 0.993), whereas PVD (from 15.6 ± 4.6 to 11.5 ± 6.5 mm·mm⁻²; p = 0.041) and PPV (from 85.9 ± 11.8 to 62.7 ± 29.6%; p = 0.017) decreased significantly. Automatic analysis was not able to identify these changes. Correlation analysis showed a poor correlation between the analysis methods and a wide spread of values in Bland-Altman analysis. As characteristic changes in microcirculation during ovine haemorrhagic shock were not detected by automatic analysis, and correlation between automatic and manual analyses (the current gold standard) was poor, the use of the investigated software for automatic analysis of microcirculation cannot be recommended in its current version, at least in the investigated model. Further improvements in automatic vessel detection are needed before its routine use.
Automatic video segmentation and indexing
NASA Astrophysics Data System (ADS)
Chahir, Youssef; Chen, Liming
1999-08-01
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process, yet effective management of digital video requires robust indexing techniques. The main purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which specifies shot similarities and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute demonstrate the effectiveness of shot detection, the content characterization of shots and the scene constitution.
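A sketch of the shot-boundary idea with OpenCV: compare colour histograms of consecutive frames and flag a cut when the distance spikes. The threshold and file name are illustrative, and the paper's additional block-based check is omitted.

    import cv2

    def shot_boundaries(path: str, threshold: float = 0.5):
        cap = cv2.VideoCapture(path)
        cuts, prev_hist, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Bhattacharyya distance: near 0 = similar, near 1 = very different
                d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                if d > threshold:
                    cuts.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return cuts

    print(shot_boundaries("news.mp4"))  # frame indices of detected cuts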
Time Series Analysis of Technology Trends based on the Internet Resources
NASA Astrophysics Data System (ADS)
Kobayashi, Shin-Ichi; Shirai, Yasuyuki; Hiyane, Kazuo; Kumeno, Fumihiro; Inujima, Hiroshi; Yamauchi, Noriyoshi
Information technology has become increasingly important in recent years for the development of our society, bringing changes to everything in it with incredible speed. Hence, when we investigate R & D themes or plan business strategies in IT, we must understand the overall situation around the target technology area, not just the technology itself. In particular, it is crucial to understand the overall situation as a time series, in order to anticipate what will happen in the near future in the target area. For this purpose, we developed a method to automatically generate multiple-phased trend maps based on Internet content. Furthermore, we introduced quantitative indicators to analyze possible changes in the near future. Our evaluation of this method yielded successful and interesting results.
Effectiveness of an automatic tracking software in underwater motion analysis.
Magalhaes, Fabrício A; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia
2013-01-01
Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers' positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker's coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the developed automatic tracking software can be used as a valid and useful tool for underwater motion analysis. Key points: (1) The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. (2) An important feature of automatic tracking software is to require limited human intervention and supervision, thus allowing short processing times. (3) When tracking underwater movements, the degree of automation of the tracking procedure is influenced by the capability of the algorithm to overcome difficulties linked to the small target size, the low image quality and the presence of background clutter. (4) The newly developed feature-tracking algorithm has shown good automatic tracking effectiveness in underwater motion analysis, with a significantly smaller percentage of required manual interventions when compared to a commercial software.
Automatic emotional expression analysis from eye area
NASA Astrophysics Data System (ADS)
Akkoç, Betül; Arslan, Ahmet
2015-02-01
Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.
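A sketch of the feature step under stated assumptions: a 2-D discrete wavelet transform (PyWavelets) of the cropped eye region, with subband statistics that would feed the neural network. The wavelet choice and the statistics are assumptions, not the study's exact parameters.

    import numpy as np
    import pywt

    def eye_dwt_features(eye_gray: np.ndarray) -> np.ndarray:
        cA, (cH, cV, cD) = pywt.dwt2(eye_gray, "db2")
        return np.array([band.std() for band in (cA, cH, cV, cD)])

    eye = np.random.rand(32, 64)  # stand-in for an automatically cropped eye area
    print(eye_dwt_features(eye))  # 4 subband statistics for the ANN input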
Towards automatic music transcription: note extraction based on independent subspace analysis
NASA Astrophysics Data System (ADS)
Wellhausen, Jens; Hoynck, Michael
2005-01-01
Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is an interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.
A hierarchical structure for automatic meshing and adaptive FEM analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Saxena, Mukul; Perucchio, Renato
1987-01-01
A new algorithm is discussed for automatically generating, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work). Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized, and results are presented from an experimental closed-loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively. The implementation for 3-D work is briefly discussed.
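A toy sketch of the 2-D hierarchy described above: each quadtree cell splits into four children until a refinement predicate is satisfied, giving a spatially addressable structure suitable for local remeshing. The refinement criterion below is an assumption for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class QuadCell:
        x: float
        y: float
        size: float
        depth: int
        children: list = field(default_factory=list)

        def refine(self, needs_split, max_depth=6):
            if self.depth < max_depth and needs_split(self):
                h = self.size / 2
                self.children = [QuadCell(self.x + dx, self.y + dy, h, self.depth + 1)
                                 for dx in (0, h) for dy in (0, h)]
                for c in self.children:
                    c.refine(needs_split, max_depth)

    # Refine only near an assumed model feature at the origin.
    root = QuadCell(0.0, 0.0, 1.0, 0)
    root.refine(lambda c: (c.x ** 2 + c.y ** 2) ** 0.5 < 0.3)

    def count(c):
        return 1 + sum(count(ch) for ch in c.children)

    print("cells in hierarchy:", count(root))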
Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław
2014-06-05
"SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.
1975-01-01
An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
Klapsing, Philipp; Herrmann, Peter; Quintel, Michael; Moerer, Onnen
2017-12-01
Quantitative lung computed tomographic (CT) analysis yields objective data regarding lung aeration but is currently not used in clinical routine, primarily because of the labor-intensive process of manual CT segmentation. Automatic lung segmentation could help to shorten processing times significantly. In this study, we assessed bias and precision of lung CT analysis using automatic segmentation compared with manual segmentation. In this monocentric clinical study, 10 mechanically ventilated patients with mild to moderate acute respiratory distress syndrome were included who had received lung CT scans at 5- and 45-mbar airway pressure during a prior study. Lung segmentations were performed both automatically using a computerized algorithm and manually. Automatic segmentation yielded similar lung volumes compared with manual segmentation, with clinically minor differences both at 5 and 45 mbar. At 5 mbar, results were as follows: overdistended lung 49.58 mL (manual, SD 77.37 mL) and 50.41 mL (automatic, SD 77.3 mL), P = .028; normally aerated lung 2142.17 mL (manual, SD 1131.48 mL) and 2156.68 mL (automatic, SD 1134.53 mL), P = .1038; and poorly aerated lung 631.68 mL (manual, SD 196.76 mL) and 646.32 mL (automatic, SD 169.63 mL), P = .3794. At 45 mbar, values were as follows: overdistended lung 612.85 mL (manual, SD 449.55 mL) and 615.49 mL (automatic, SD 451.03 mL), P = .078; normally aerated lung 3890.12 mL (manual, SD 1134.14 mL) and 3907.65 mL (automatic, SD 1133.62 mL), P = .027; and poorly aerated lung 413.35 mL (manual, SD 57.66 mL) and 469.58 mL (automatic, SD 70.14 mL), P = .007. Bland-Altman analyses revealed the following mean biases and limits of agreement at 5 mbar for automatic vs manual segmentation: overdistended lung +0.848 mL (±2.062 mL), normally aerated +14.51 mL (±49.71 mL), and poorly aerated +14.64 mL (±98.16 mL). At 45 mbar, results were as follows: overdistended +2.639 mL (±8.231 mL), normally aerated 17.53 mL (±41.41 mL), and poorly aerated 56.23 mL (±100.67 mL). Automatic single CT image and whole lung segmentation were faster than manual segmentation (0.17 vs 125.35 seconds [P < .0001] and 10.46 vs 7739.45 seconds [P < .0001]). Automatic lung CT segmentation allows fast analysis of aerated lung regions. A reduction of processing times by more than 99% allows the use of quantitative CT at the bedside. Copyright © 2016 Elsevier Inc. All rights reserved.
Setting up Conditions for Negotiation in Science
ERIC Educational Resources Information Center
Yoon, Sae Yeol; Bennett, William; Mendez, Claudia Aguirre; Hand, Brian
2010-01-01
When using an argument based inquiry approach like the Science Writing Heuristic (SWH) approach, argumentation between peers and with a teacher will provide great opportunities for students to experience negotiation of meaning in relation to science content. However, students do not automatically engage in dialogue and argumentation with…
Podcasting the Sciences: A Practical Overview
ERIC Educational Resources Information Center
Barsky, Eugene; Lindstrom, Kevin
2008-01-01
University science education has been undergoing a great amount of change since the commercialization of the Internet a decade ago. Mobile technologies in science education can encompass more than the proximal teaching and learning environment. Podcasting, for example, allows audio content from user-selected feeds to be automatically downloaded to…
10 CFR 50.34 - Contents of applications; technical information.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., and a program to ensure that the results of these studies are factored into the final design of the... assessment study to determine the optimum automatic depressurization system (ADS) design modifications that... capability for containment purging/venting designed to minimize the purging time consistent with ALARA...
10 CFR 50.34 - Contents of applications; technical information.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., and a program to ensure that the results of these studies are factored into the final design of the... assessment study to determine the optimum automatic depressurization system (ADS) design modifications that... capability for containment purging/venting designed to minimize the purging time consistent with ALARA...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
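The ROC step can be sketched with scikit-learn on synthetic stand-ins for the 293-patient cohort; this is not CGITA or QPS code, and the data below are invented.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    stenosis = rng.integers(0, 2, 293)                      # 1 = >70% stenosis
    heterogeneity = rng.normal(0, 1, 293) + 1.2 * stenosis  # texture index

    auc = roc_auc_score(stenosis, heterogeneity)
    fpr, tpr, thr = roc_curve(stenosis, heterogeneity)
    best = np.argmax(tpr - fpr)  # Youden's J picks an operating point
    print(f"AUC = {auc:.2f}, sensitivity = {tpr[best]:.2f}, "
          f"specificity = {1 - fpr[best]:.2f} at threshold {thr[best]:.2f}")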
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis automatically constructs an ideal predictor and then yields the predicted results for the submitted query samples. Moreover, multiprocessing is adopted to enhance computational speed by about 6-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
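Pse-Analysis itself should be obtained from the URL above. The following is only a generic scikit-learn sketch of the five automated steps (feature extraction, parameter selection, training, cross-validation, evaluation), not the package's actual API; the sequences and labels are toy data.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    seqs = ["ACGTACGT", "TTGGCCAA", "ACGTTTTT", "GGCCGGCC"]  # toy benchmark
    labels = [1, 0, 1, 0]

    # (1) k-mer features; (2)-(4) parameter search with cross-validation
    kmers = CountVectorizer(analyzer="char", ngram_range=(3, 3))
    X = kmers.fit_transform(seqs)
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=2)
    search.fit(X, labels)

    # (5) evaluation and prediction for a query sequence
    print("best C:", search.best_params_, "cv accuracy:", search.best_score_)
    print("query prediction:", search.predict(kmers.transform(["ACGTACGA"])))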
NASA Technical Reports Server (NTRS)
Hou, Gene
1998-01-01
Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large memory space. This project applies an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirements, computational efficiency, and accuracy.
New Tools to Convert PDF Math Contents into Accessible e-Books Efficiently.
Suzuki, Masakazu; Terada, Yugo; Kanahori, Toshihiro; Yamaguchi, Katsuhito
2015-01-01
New features of our math-OCR software for converting PDF math content into accessible e-books are shown. The method for recognizing PDF has been thoroughly improved. In addition, content in any selected area of a PDF file, including math formulas, can be cut and pasted into a document in various accessible formats; it is automatically recognized and converted into text and accessible math formulas in the process. Combining this with our authoring tool for technical documents, one can easily produce accessible e-books in various formats such as DAISY, accessible EPUB3, DAISY-like HTML5, Microsoft Word with math objects and so on. These contents are useful for various print-disabled students, ranging from the blind to the dyslexic.
Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook
2013-10-01
The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of radiologists is adapted to identify salient regions. Then this saliency information is used as an automatic label for accurate segmentation of brain lesions to determine the scientific value of each image. The qualitative and quantitative results show that the rankings generated by the proposed method are closer to the rankings created by radiologists. Copyright © 2013 Elsevier Ltd. All rights reserved.
Advanced Concepts of Naval Engineering Maintenance Training. Volume 2. Appendix F
1976-05-01
maintenance instruction, the Hagan Automatic Boiler Control (ABC) course. These job requirements also included the tasks, skills, and knowledges for all... [OCR residue omitted: table-of-contents fragments from NAVTRAEQUIPCEN 74-C-0151-1, Volume II, Appendix F, "Hagan Automatic Boiler Controls Systems".]
Comparison of histomorphometrical data obtained with two different image analysis methods.
Ballerini, Lucia; Franke-Stenport, Victoria; Borgefors, Gunilla; Johansson, Carina B
2007-08-01
A common way to determine tissue acceptance of biomaterials is to perform histomorphometrical analysis on histologically stained sections from retrieved samples with surrounding tissue, using various methods. The time- and money-consuming methods and techniques used are often "in-house standards". We address light microscopic investigations of bone tissue reactions on un-decalcified cut and ground sections of threaded implants. In order to screen sections and generate results faster, the aim of this pilot project was to compare results generated with the in-house standard visual image analysis tool (i.e., quantifications and judgements done by the naked eye) with a custom-made automatic image analysis program. The histomorphometrical bone area measurements revealed no significant differences between the methods, but the results for bony contacts varied significantly. The raw results were in relative agreement, i.e., the values from the two methods were proportional to each other: low bony contact values in the visual method corresponded to low values with the automatic method. With similar-resolution images and further improvements of the automatic method, this difference should become insignificant. A great advantage of the new automatic image analysis method is that it is time-saving: analysis time can be significantly reduced.
Stewart, Brandon D; Payne, B Keith
2008-10-01
The evidence for whether intentional control strategies can reduce automatic stereotyping is mixed. Therefore, the authors tested the utility of implementation intentions--specific plans linking a behavioral opportunity to a specific response--in reducing automatic bias. In three experiments, automatic stereotyping was reduced when participants made an intention to think specific counterstereotypical thoughts whenever they encountered a Black individual. The authors used two implicit tasks and process dissociation analysis, which allowed them to separate contributions of automatic and controlled thinking to task performance. Of importance, the reduction in stereotyping was driven by a change in automatic stereotyping and not controlled thinking. This benefit was acquired with little practice and generalized to novel faces. Thus, implementation intentions may be an effective and efficient means for controlling automatic aspects of thought.
Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2014-11-01
The advent of wireless capsule endoscopy (WCE) has revolutionized the diagnostic approach to small-bowel disease. However, the task of reviewing WCE video sequences is laborious and time-consuming; software tools offering automated video analysis would enable a timelier and potentially a more accurate diagnosis. To assess the validity of innovative, automatic lesion-detection software in WCE. A color feature-based pattern recognition methodology was devised and applied to the aforementioned image group. This study was performed at the Royal Infirmary of Edinburgh, United Kingdom, and the Technological Educational Institute of Central Greece, Lamia, Greece. A total of 137 deidentified WCE single images, 77 showing pathology and 60 normal images. The proposed methodology, unlike state-of-the-art approaches, is capable of detecting several different types of lesions. The average performance, in terms of the area under the receiver-operating characteristic curve, reached 89.2 ± 0.9%. The best average performance was obtained for angiectasias (97.5 ± 2.4%) and nodular lymphangiectasias (96.3 ± 3.6%). Single expert for annotation of pathologies, single type of WCE model, use of single images instead of entire WCE videos. A simple, yet effective, approach allowing automatic detection of all types of abnormalities in capsule endoscopy is presented. Based on color pattern recognition, it outperforms previous state-of-the-art approaches. Moreover, it is robust in the presence of luminal contents and is capable of detecting even very small lesions. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq) performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection. PMID:27258018
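The "high frequency content" feature mentioned above can be illustrated with a band-power sketch. The 4-12 Hz band, the sampling rate, the threshold-free comparison, and the synthetic signals are assumptions for illustration, not the authors' parameters.

```python
# Band power in an assumed tremor band from the Welch spectrum of an
# accelerometer segment; high values suggest a tremor-containing segment.
import numpy as np
from scipy.signal import welch

def tremor_band_power(segment, fs=100.0, band=(4.0, 12.0)):
    freqs, psd = welch(segment, fs=fs, nperseg=min(256, len(segment)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

rng = np.random.default_rng(0)
t = np.arange(0, 3.0, 0.01)                   # 3 s at 100 Hz
tremor = 0.5 * np.sin(2 * np.pi * 6.0 * t)    # simulated 6 Hz tremor
noise = 0.1 * rng.standard_normal(t.size)
print(tremor_band_power(tremor + noise))      # high band power
print(tremor_band_power(noise))               # low band power
```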
Ictal semiology in hippocampal versus extrahippocampal temporal lobe epilepsy.
Gil-Nagel, A; Risinger, M W
1997-01-01
We have analysed retrospectively the clinical features and electroencephalograms in 35 patients with complex partial seizures of temporal lobe origin who were seizure-free after epilepsy surgery. Two groups were differentiated for statistical analysis: 16 patients had hippocampal temporal lobe seizures (HTS) and 19 patients had extrahippocampal temporal lobe seizures (ETS) associated with a small tumour of the lateral or inferior temporal cortex. All patients in the HTS group had ictal onset verified with intracranial recordings (depth or subdural electrodes). In the ETS group, extrahippocampal onset was verified with intracranial recordings in eight patients and assumed, because of failure of a previous amygdalohippocampectomy, in one patient. Historical information, ictal semiology and ictal EEG of typical seizures were analysed in each patient. The occurrence of early and late oral automatisms and dystonic posturing of an upper extremity was analysed separately. A prior history of febrile convulsions was obtained in 13 HTS patients (81.3%) but in none with ETS (P < 0.0001, Fisher's exact test). An epigastric aura preceded seizures in five patients with HTS (31.3%) and none with ETS (P = 0.0135, Fisher's exact test), while an aura with experiential content was recalled by nine patients with ETS (47.4%) and none with HTS (P = 0.0015, Fisher's exact test). Early oral automatisms occurred in 11 patients with HTS (68.8%) and in two with ETS (10.5%) (P = 0.0005, Fisher's exact test). Early motor involvement of the contralateral upper extremity without oral automatisms occurred in three patients with HTS (18.8%) and in 10 with ETS (52.6%) (P = 0.0298, Fisher's exact test). Arrest reaction, vocalization, speech, facial grimace, postictal cough, late oral automatisms and late motor involvement of the contralateral arm and hand occurred with similar frequency in both groups. These observations show that the early clinical features of HTS and ETS are different.
Li, Jianyou; Tanaka, Hiroya
2018-01-01
Traditional splinting processes are skill dependent and irreversible, and patient satisfaction levels during rehabilitation are invariably lowered by the heavy structure and poor ventilation of splints. To overcome this drawback, use of 3D-printing technology has been proposed in recent years, and public awareness of it has increased. However, application of 3D-printing technologies is limited by the low CAD proficiency of clinicians as well as unforeseen scan flaws within anatomic models. A programmable modeling tool has been employed to develop a semi-automatic design system for generating a printable splint model. The modeling process was divided into five stages, and the detailed steps involved in the construction of the proposed system, as well as the automatic thickness calculation, the lattice structure, and the assembly method, have been thoroughly described. The proposed approach allows clinicians to verify the state of the splint model at every stage, thereby facilitating adjustment of input content and/or other parameters to help solve possible modeling issues. A finite element analysis simulation was performed to evaluate the structural strength of generated models. A fit investigation was conducted with fabricated splints and volunteers to assess the wearing experience. Manual modeling steps involved in complex splint designs have been programmed into the proposed automatic system. Clinicians define the splinting region by drawing two curves, thereby obtaining the final model within minutes. The proposed system is capable of automatically patching up minor flaws within the limb model as well as calculating the thickness and lattice density of various splints. Large splints can be divided into three parts for simultaneous multiple printing. This study highlights the advantages, limitations, and possible strategies concerning the application of programmable modeling tools in clinical processes, thereby helping clinicians with lower CAD proficiency to become adept with the splint design process and improving the overall design efficiency of 3D-printed splints.
FAMA: Fast Automatic MOOG Analysis
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2014-02-01
FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe I) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.
Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.
Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen
2014-08-01
A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.
Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A
2011-01-01
Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of the Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed in four stacks of 3D images over time have shown promising results.
Quantification of regional fat volume in rat MRI
NASA Astrophysics Data System (ADS)
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
Keil, Andreas; Moratti, Stephan; Sabatinelli, Dean; Bradley, Margaret M; Lang, Peter J
2005-08-01
Affectively arousing visual stimuli have been suggested to automatically attract attentional resources in order to optimize sensory processing. The present study crosses the factors of spatial selective attention and affective content, and examines the relationship between instructed (spatial) and automatic attention to affective stimuli. In addition to response times and error rate, electroencephalographic data from 129 electrodes were recorded during a covert spatial attention task. This task required silent counting of random-dot targets embedded in a 10 Hz flicker of colored pictures presented to both hemifields. Steady-state visual evoked potentials (ssVEPs) were obtained to determine amplitude and phase of electrocortical responses to pictures. An increase of ssVEP amplitude was observed as an additive function of spatial attention and emotional content. Statistical parametric mapping of this effect indicated occipito-temporal and parietal cortex activation contralateral to the attended visual hemifield in ssVEP amplitude modulation. This difference was most pronounced during selection of the left visual hemifield, at right temporal electrodes. In line with this finding, phase information revealed accelerated processing of aversive arousing, compared to affectively neutral pictures. The data suggest that affective stimulus properties modulate the spatiotemporal process along the ventral stream, encompassing amplitude amplification and timing changes of posterior and temporal cortex.
The virtual reality simulator dV-Trainer(®) is a valid assessment tool for robotic surgical skills.
Perrenot, Cyril; Perez, Manuela; Tran, Nguyen; Jehl, Jean-Philippe; Felblinger, Jacques; Bresler, Laurent; Hubert, Jacques
2012-09-01
Exponential development of minimally invasive techniques, such as robotic-assisted devices, raises the question of how to assess robotic surgery skills. Early development of virtual simulators has provided efficient tools for laparoscopic skills certification based on objective scoring, high availability, and lower cost. However, similar evaluation is lacking for robotic training. The purpose of this study was to assess several criteria, such as reliability, face, content, construct, and concurrent validity of a new virtual robotic surgery simulator. This prospective study was conducted from December 2009 to April 2010 using three simulators dV-Trainers(®) (MIMIC Technologies(®)) and one Da Vinci S(®) (Intuitive Surgical(®)). Seventy-five subjects, divided into five groups according to their initial surgical training, were evaluated based on five representative exercises of robotic specific skills: 3D perception, clutching, visual force feedback, EndoWrist(®) manipulation, and camera control. Analysis was extracted from (1) questionnaires (realism and interest), (2) automatically generated data from simulators, and (3) subjective scoring by two experts of depersonalized videos of similar exercises with robot. Face and content validity were generally considered high (77 %). Five levels of ability were clearly identified by the simulator (ANOVA; p = 0.0024). There was a strong correlation between automatic data from dV-Trainer and subjective evaluation with robot (r = 0.822). Reliability of scoring was high (r = 0.851). The most relevant criteria were time and economy of motion. The most relevant exercises were Pick and Place and Ring and Rail. The dV-Trainer(®) simulator proves to be a valid tool to assess basic skills of robotic surgery.
The Semiautomated Test System: A Tool for Standardized Performance Testing.
ERIC Educational Resources Information Center
Ramsey, H. Rudy
For performance tests to be truly standardized, they must be administered in a way that will minimize variation due to operator intervention and errors. Through such technological developments as low-cost digital computers and digital logic modules, automatic test administration without restriction of test content has become possible. A…
Template Authoring Environment for the Automatic Generation of Narrative Content
ERIC Educational Resources Information Center
Caropreso, Maria Fernanda; Inkpen, Diana; Keshtkar, Fazel; Khan, Shahzad
2012-01-01
Natural Language Generation (NLG) systems can make data accessible in an easily digestible textual form; but using such systems requires sophisticated linguistic and sometimes even programming knowledge. We have designed and implemented an environment for creating and modifying NLG templates that requires no programming knowledge, and can operate…
Methodologies for Crawler Based Web Surveys.
ERIC Educational Resources Information Center
Thelwall, Mike
2002-01-01
Describes Web survey methodologies used to study the content of the Web, and discusses search engines and the concept of crawling the Web. Highlights include Web page selection methodologies; obstacles to reliable automatic indexing of Web sites; publicly indexable pages; crawling parameters; and tests for file duplication. (Contains 62…
Irrigation scheduling using soil moisture sensors
USDA-ARS?s Scientific Manuscript database
Soil moisture sensors were evaluated and used for irrigation scheduling in a humid region. The sensors were installed in the soil at depths of 15 cm, 30 cm, and 61 cm below ground. Soil volumetric water content was automatically measured by the sensors at hourly intervals during the crop g...
49 CFR 552.14 - Content of petition.
Code of Federal Regulations, 2012 CFR
2012-10-01
... for Expedited Rulemaking To Establish Dynamic Automatic Suppression System Test Procedures for Federal... petitioner shall provide the following information: (a) A set of proposed test procedures for S28.1, S28.2... unbelted occupant positions that are likely to occur during a frontal crash where pre-crash braking occurs...
49 CFR 552.14 - Content of petition.
Code of Federal Regulations, 2014 CFR
2014-10-01
... for Expedited Rulemaking To Establish Dynamic Automatic Suppression System Test Procedures for Federal... petitioner shall provide the following information: (a) A set of proposed test procedures for S28.1, S28.2... unbelted occupant positions that are likely to occur during a frontal crash where pre-crash braking occurs...
49 CFR 552.14 - Content of petition.
Code of Federal Regulations, 2013 CFR
2013-10-01
... for Expedited Rulemaking To Establish Dynamic Automatic Suppression System Test Procedures for Federal... petitioner shall provide the following information: (a) A set of proposed test procedures for S28.1, S28.2... unbelted occupant positions that are likely to occur during a frontal crash where pre-crash braking occurs...
Towards Automatic Threat Recognition
2006-12-01
Content outline: Preliminaries about Information Fusion; The System Ontology; Unification as Processing Principle; Back to the Example; Conclusion and Outlook. [Presentation slides by FGAN Forschungsinstitut für Kommunikation, Informationsverarbeitung und Ergonomie; repeated slide footers and a stray reference fragment omitted.]
How to Learn an Unwritten Language.
ERIC Educational Resources Information Center
Gudschinsky, Sarah C.
A practical guide for the anthropology student confronted with learning a language in the field, this book focuses on acquiring everyday conversation rather than difficult linguistic problems. The form and content are based on the following basic premises: (1) learning a language consists of discovering and controlling as automatic habits the…
Automatic page composition with nested sub-layouts
NASA Astrophysics Data System (ADS)
Hunter, Andrew
2013-03-01
This paper provides an overview of a system for the automatic composition of publications. The system first composes nested hierarchies of contents, then applies layout engines at branch points in the hierarchies to explore layout options, and finally selects the best overall options for the finished publications. Although the system has been developed as a general platform for automated publishing, this paper describes its application to the composition and layout of a magazine-like publication for social content from Facebook. The composition process works by assembling design fragments that have been populated with text and images from the Facebook social network. The fragments constitute a design language for a publication. Each design fragment is a nested mutable sub-layout that has no specific size or shape until after it has been laid out. The layout process balances the space requirements of the fragment's internal contents with its external context in the publication. The mutability of sub-layouts requires that their layout options be kept open until all the other contents that share the same space have been considered. Coping with large numbers of options is one of the greatest challenges in layout automation; most existing layout methods work by rapidly eliminating design options rather than by keeping options open. A further goal of this publishing system is to confirm that a custom publication can be generated quickly by the described methods. In general, the faster that publications can be created, the greater the opportunities for the technology.
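The branch-point exploration described above can be pictured as a toy recursive search over sub-layout options. The node representation and the badness cost below are illustrative assumptions, not the paper's engine.

```python
# Each node enumerates candidate sub-layouts; options stay open until
# complete alternatives can be scored, then the best one is chosen.
from itertools import product

def layout_options(node):
    """Return candidate (height, badness) pairs for a content node."""
    if isinstance(node, str):            # leaf: a text/image fragment
        return [(1, 0.0), (2, 0.5)]      # e.g. tall-narrow vs short-wide
    # Branch: combine one option from each child, add a packing penalty.
    combos = []
    for choice in product(*(layout_options(child) for child in node)):
        height = sum(h for h, _ in choice)
        badness = sum(b for _, b in choice) + abs(height - 4) * 0.1
        combos.append((height, badness))
    return combos

page = [["headline", "photo"], "body text"]
best = min(layout_options(page), key=lambda hb: hb[1])
print("chosen height, badness:", best)
```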
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
Vibrational energy distribution analysis (VEDA): scopes and limitations.
Jamróz, Michał H
2013-10-01
The principle of operation of the VEDA program, written by the author for Potential Energy Distribution (PED) analysis of theoretical vibrational spectra, is described. Nowadays, PED analysis is an indispensable tool in any serious analysis of vibrational spectra. To perform a PED analysis it is necessary to define 3N-6 linearly independent local mode coordinates; even for 20-atom molecules this is a difficult task. The VEDA program reads the input data automatically from Gaussian output files. Then, VEDA automatically proposes an introductory set of local mode coordinates. Next, more adequate coordinates are proposed by the program and optimized to obtain maximal elements of each column (internal coordinate) of the PED matrix (the EPM parameter). The possibility of automatic optimization of PED contributions is a unique feature of the VEDA program, absent in any other program performing PED analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
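For context, the PED quantity that VEDA maximizes is conventionally defined, in the common diagonal approximation of the Wilson GF formalism, roughly as follows; this formula is background knowledge added here, not taken from the abstract.

```latex
% Contribution (in percent) of internal coordinate j to normal mode i,
% with force-constant matrix F, normal-mode eigenvector matrix L, and
% eigenvalue \lambda_i of the GF secular problem:
\mathrm{PED}_{ij} \;\approx\; 100\,\frac{F_{jj}\,L_{ji}^{2}}{\lambda_i}
```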
Vibrational Energy Distribution Analysis (VEDA): Scopes and limitations
NASA Astrophysics Data System (ADS)
Jamróz, Michał H.
2013-10-01
The principle of operation of the VEDA program, written by the author for Potential Energy Distribution (PED) analysis of theoretical vibrational spectra, is described. Nowadays, PED analysis is an indispensable tool in any serious analysis of vibrational spectra. To perform a PED analysis it is necessary to define 3N-6 linearly independent local mode coordinates; even for 20-atom molecules this is a difficult task. The VEDA program reads the input data automatically from Gaussian output files. Then, VEDA automatically proposes an introductory set of local mode coordinates. Next, more adequate coordinates are proposed by the program and optimized to obtain maximal elements of each column (internal coordinate) of the PED matrix (the EPM parameter). The possibility of automatic optimization of PED contributions is a unique feature of the VEDA program, absent in any other program performing PED analysis.
Estimating soil water content from ground penetrating radar coarse root reflections
NASA Astrophysics Data System (ADS)
Liu, X.; Cui, X.; Chen, J.; Li, W.; Cao, X.
2016-12-01
Soil water content (SWC) is an indispensable variable for understanding the organization of natural ecosystems and biodiversity. Especially in semiarid and arid regions, soil moisture is the plants' primary source of water and largely determines their strategies for growth and survival, such as root depth, root distribution, and competition. Ground penetrating radar (GPR), a noninvasive geophysical technique, has been regarded over the past decades as an accurate tool for measuring soil water content at intermediate scales. For SWC estimation with surface GPR, the fixed-antenna-offset reflection method has been considered to have the potential to obtain average soil water content between the land surface and reflectors, with high resolution and short measurement times. In this study, a 900 MHz surface GPR antenna was used to estimate SWC with the fixed-offset reflection method; plant coarse roots (with diameters greater than 5 mm) were used as reflectors, and an advanced GPR data interpretation method, the hyperbola automatic detection algorithm (HADA), was introduced to automatically obtain the average velocity by recognizing coarse-root hyperbolic reflection signals in GPR radargrams. In addition, a formula was derived to determine the interval average SWC between two roots at different depths. We examined the performance of the proposed method on a dataset simulated under different scenarios. Results showed that HADA could provide a reasonable average velocity to estimate SWC without knowledge of root depth, and that the interval average SWC could also be determined. When the proposed method was applied to a real-field measurement dataset, a very small vertical SWC gradient of about 0.006 with depth was captured as well. Therefore, the proposed method can be used to estimate average soil water content from GPR coarse-root reflections and to obtain the interval average SWC between two roots at different depths. It is very promising for measuring root-zone soil moisture and mapping soil moisture distribution around a shrub or even at the field plot scale.
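The velocity-to-moisture conversion implied above can be sketched briefly. The abstract does not name the petrophysical relation, so the widely used Topp equation is assumed here purely for illustration.

```python
# GPR velocity -> bulk relative permittivity -> volumetric water content.
C = 0.2998  # speed of light in m/ns

def swc_from_velocity(v_m_per_ns):
    """Volumetric water content from GPR velocity via Topp's equation."""
    k = (C / v_m_per_ns) ** 2          # bulk relative permittivity
    return (-5.3e-2 + 2.92e-2 * k
            - 5.5e-4 * k ** 2 + 4.3e-6 * k ** 3)

# Example: 0.1 m/ns is a typical velocity for a moist soil.
print(f"SWC ~ {swc_from_velocity(0.10):.3f} m^3/m^3")
```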
ERIC Educational Resources Information Center
Ma, Dongmei; Yu, Xiaoru; Zhang, Haomin
2017-01-01
The present study aimed to investigate second language (L2) word-level and sentence-level automatic processing among English as a foreign language students through a comparative analysis of students with different proficiency levels. As a multidimensional and dynamic construct, automaticity is conceptualized as processing speed, stability, and…
Isotope Ratio Monitoring Gas Chromatography Mass Spectrometry (IRM-GCMS)
NASA Technical Reports Server (NTRS)
Freeman, K. H.; Ricci, S. A.; Studley, A.; Hayes, J. M.
1989-01-01
On Earth, the C-13 content of organic compounds is depleted by roughly 13 to 23 permil from atmospheric carbon dioxide. This difference is largely due to isotope effects associated with the fixation of inorganic carbon by photosynthetic organisms. If life once existed on Mars, then it is reasonable to expect to observe a similar fractionation. Although the strongly oxidizing conditions on the surface of Mars make preservation of ancient organic material unlikely, carbon-isotope evidence for the existence of life on Mars may still be preserved. Carbon depleted in C-13 could be preserved either in organic compounds within buried sediments, or in carbonate minerals produced by the oxidation of organic material. A technique is introduced for rapid and precise measurement of the C-13 contents of individual organic compounds. A gas chromatograph is coupled to an isotope-ratio mass spectrometer through a combustion interface, enabling on-line isotopic analysis of isolated compounds. The isotope ratios are determined by integration of ion currents over the course of each chromatographic peak. The software incorporates automatic peak determination, corrections for background, and deconvolution of overlapped peaks. Overall performance of the instrument was evaluated by the analysis of a mixture of high-purity n-alkanes of known isotopic composition. Isotopic values measured via IRM-GCMS averaged within 0.55 permil of their conventionally measured values.
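The ratio computation described above can be sketched as integrating background-corrected m/z 44 and 45 ion currents over a peak and converting to the usual delta notation. The numbers are synthetic, and a real instrument also applies a 17O correction to the mass-45 signal that is ignored here.

```python
# Toy delta-13C from integrated ion currents over a chromatographic peak.
import numpy as np

R_STANDARD = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13C(i44, i45, background44=0.0, background45=0.0):
    """Per-mil delta-13C from m/z 44 and 45 ion-current integrals
    (17O contribution to mass 45 deliberately ignored in this sketch)."""
    area44 = np.trapz(np.asarray(i44) - background44)
    area45 = np.trapz(np.asarray(i45) - background45)
    r_sample = area45 / area44
    return (r_sample / R_STANDARD - 1.0) * 1000.0

# Synthetic Gaussian peak with a sample ratio slightly below standard.
t = np.linspace(-3, 3, 61)
peak = np.exp(-t ** 2)
print(f"{delta13C(peak, 0.0110 * peak):+.1f} permil")   # about -21 permil
```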
Approaches to the automatic generation and control of finite element meshes
NASA Technical Reports Server (NTRS)
Shephard, Mark S.
1987-01-01
The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that, because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators, as well as their integration with geometric modeling and adaptive analysis procedures.
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis
Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.
2014-01-01
Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859
Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro
2015-01-01
Nowadays, insight into human-machine interaction is a critical topic in the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors and thereby inform the rational, practical use of automation technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set a close gap distance apart to reduce air resistance and save energy. By application of two wearable sensor systems, continuous measurement of palmar perspiration and masseter electromyography was realized, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. It was found that mental stress increased significantly when the gap distances decreased; an abrupt increase in drivers' mental stress was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768
ERIC Educational Resources Information Center
Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.
1998-01-01
Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…
Automatic Online Lecture Highlighting Based on Multimedia Analysis
ERIC Educational Resources Information Center
Che, Xiaoyin; Yang, Haojin; Meinel, Christoph
2018-01-01
Textbook highlighting is widely considered to be beneficial for students. In this paper, we propose a comprehensive solution to highlight the online lecture videos in both sentence- and segment-level, just as is done with paper books. The solution is based on automatic analysis of multimedia lecture materials, such as speeches, transcripts, and…
Automatic Text Analysis Based on Transition Phenomena of Word Occurrences
ERIC Educational Resources Information Center
Pao, Miranda Lee
1978-01-01
Describes a method of selecting index terms directly from a word frequency list, an idea originally suggested by Goffman. Results of the analysis of word frequencies of two articles seem to indicate that the automated selection of index terms from a frequency list holds some promise for automatic indexing. (Author/MBR)
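Goffman-style selection of index terms from a frequency list can be sketched as keeping a mid-frequency band, on the rationale that very frequent words are function words and very rare ones are noise. The band limits below are illustrative assumptions.

```python
# Candidate index terms = words whose frequency falls in a middle band.
from collections import Counter
import re

def candidate_index_terms(text, low=2, high=10):
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(words)
    return sorted(w for w, n in freq.items() if low <= n <= high)

sample = "retrieval " * 12 + "indexing " * 5 + "thesaurus " * 3 + "zeta "
print(candidate_index_terms(sample))   # ['indexing', 'thesaurus']
```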
Automatic Coding of Dialogue Acts in Collaboration Protocols
ERIC Educational Resources Information Center
Erkens, Gijsbert; Janssen, Jeroen
2008-01-01
Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic coding procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, yet follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. Tests with our system prototype show that the thread concept, combined with the agent paradigm, is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.
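The tile-level data parallelism the abstract targets can be illustrated with Python's multiprocessing pool standing in for the paper's agent-generated thread programs. The tile split and the per-tile operator are illustrative assumptions.

```python
# Split an image into row bands and process them in parallel workers.
import numpy as np
from multiprocessing import Pool

def process_tile(tile):
    # Stand-in image-analysis operator: local contrast stretch.
    lo, hi = tile.min(), tile.max()
    return (tile - lo) / (hi - lo + 1e-9)

def process_image_parallel(image, n_tiles=4, workers=4):
    tiles = np.array_split(image, n_tiles, axis=0)  # row bands
    with Pool(workers) as pool:
        return np.vstack(pool.map(process_tile, tiles))

if __name__ == "__main__":
    img = np.random.rand(512, 512)
    out = process_image_parallel(img)
    print(out.shape)
```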
Formal analysis and evaluation of the back-off procedure in IEEE802.11P VANET
NASA Astrophysics Data System (ADS)
Jin, Li; Zhang, Guoan; Zhu, Xiaojun
2017-07-01
The back-off procedure is one of the media access control technologies in the IEEE 802.11p communication protocol. It plays an important role in avoiding message collisions and allocating channel resources. Formal methods are effective approaches for studying the performance of communication systems. In this paper, we establish a discrete-time model for the back-off procedure. We use Markov Decision Processes (MDPs) to model the non-deterministic and probabilistic behaviors of the procedure, and use the probabilistic computation tree logic (PCTL) language to express different properties, which ensure that the discrete-time model performs its basic functionality. Based on the model and the PCTL specifications, we study the effect of the contention window length on the number of senders in the neighborhood of given receivers, and on a station's expected cost required by the back-off procedure to successfully send packets. Varying the window length may increase or decrease the maximum probability of correct transmissions within a time contention unit. We propose to use the PRISM model checker to describe our proposed back-off procedure for the IEEE 802.11p protocol in vehicular networks, and define different probability property formulas to automatically verify the model and derive numerical results. The obtained results are helpful for justifying the values of the time contention unit.
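Alongside the PCTL model-checking route, the same contention-window question can be approximated by simulation. The sketch below is a toy Monte-Carlo estimate under simplifying assumptions (uniform slot choice, a single contention round, single-slot collisions), not the paper's MDP model.

```python
# Probability that the earliest back-off slot is chosen by exactly one
# station, i.e. a collision-free first transmission in one round.
import random

def p_success(window, senders, trials=100_000):
    wins = 0
    for _ in range(trials):
        slots = [random.randrange(window) for _ in range(senders)]
        first = min(slots)
        if slots.count(first) == 1:    # earliest slot not collided
            wins += 1
    return wins / trials

for w in (8, 16, 32):
    print(w, round(p_success(w, senders=5), 3))
```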
Automatic Clustering Using FSDE-Forced Strategy Differential Evolution
NASA Astrophysics Data System (ADS)
Yasid, A.
2018-01-01
Cluster analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, which is known as automatic clustering. This study aims to determine the number of clusters automatically using forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm, and the results were compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than or competitive with other existing automatic clustering algorithms.
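A minimal sketch of the differential-evolution mutation/crossover core that AC-FSDE builds on follows; the paper's "forced strategy" parameter-control rules are not reproduced, and the F and CR values are assumptions.

```python
# One DE generation: rand/1 mutation, binomial crossover, greedy selection.
import numpy as np

def de_step(population, fitness_fn, F=0.5, CR=0.9, rng=None):
    rng = rng or np.random.default_rng()
    n, dim = population.shape
    new_pop = population.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        mutant = population[r1] + F * (population[r2] - population[r3])
        cross = rng.random(dim) < CR            # binomial crossover mask
        cross[rng.integers(dim)] = True         # keep at least one gene
        trial = np.where(cross, mutant, population[i])
        if fitness_fn(trial) < fitness_fn(population[i]):
            new_pop[i] = trial                  # greedy selection
    return new_pop

pop = np.random.default_rng(1).random((10, 2)) * 10
sphere = lambda x: float(np.sum(x ** 2))
for _ in range(50):
    pop = de_step(pop, sphere)
print(min(sphere(x) for x in pop))
```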
Effect of water content on stability of landslides triggered by earthquakes
NASA Astrophysics Data System (ADS)
Beyabanaki, S.; Bagtzoglou, A. C.; Anagnostou, E. N.
2013-12-01
Earthquake-triggered landslides are one of the most important natural hazards and often result in serious structural damage and loss of life. They have been widely studied, but less attention has been paid to soil water content: although its effect has been widely studied for rainfall-triggered landslides [1], much less attention has been given to it in the stability analysis of earthquake-triggered landslides. We developed a combined hydrology and stability model to investigate the effect of soil water content on earthquake-triggered landslides. Bishop's method is used for the slope stability analysis and Richards' equation is employed to model infiltration. Bishop's method is one of the most widely used methods for analyzing the stability of slopes [2]. An earthquake acceleration coefficient (EAC) is included in the model to analyze the effect of an earthquake on slope stability. The model is also able to automatically determine the geometry of the potential landslide. In this study, slopes with different initial water contents are simulated. First, the simulation is performed for the case of an earthquake only, with different EACs and water contents. As shown in Fig. 1, the initial water content has a significant effect on the factor of safety (FS): greater initial water contents lead to a lower FS, and this impact is more significant when the EAC is small. Also, when the initial water content is high, landslides can happen even with small earthquake accelerations. Moreover, the effect of water content on the geometry of landslides is investigated by simulating landslides triggered by earthquakes only and by both rainfall and earthquakes, for different initial water contents. The results show that water content has a more significant effect on the geometry of landslides triggered by rainfall than on those triggered by an earthquake. Finally, the effect of water content on landslides triggered by earthquakes during rainfall is investigated: after different durations of rainfall, an earthquake is applied to the model, and the elapsed time at which the FS drops below one is obtained by trial and error. The results for different initial water contents and earthquake acceleration coefficients show that landslides can happen after shorter rainfall durations when the water content is greater; if the water content is high enough, the landslide occurs even without rainfall. References: [1] Ray RL, Jacobs JM, de Alba P. Impact of unsaturated zone soil moisture and groundwater table on slope instability. J. Geotech. Geoenviron. Eng., 2010, 136(10):1448-1458. [2] Das B. Principles of Foundation Engineering. Stanford, Cengage Learning, 2011. Fig. 1. Effect of initial water content on FS for different EACs.
Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy
2012-11-01
Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%), and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible automatically to generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Later, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
The variance and performance of two sampling plans for aflatoxin quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using a sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. The total variance and the sampling, preparation, and analysis variances were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxin quantification distributions in the eight maize lots. The acceptance and rejection probabilities for a lot at a given aflatoxin concentration were determined using the variance and the selected distribution model to build operating characteristic (OC) curves. Sampling and total variance were lower for the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task, and automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs from contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs, and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs, and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and in segmenting whole ABVs, and that it may have utility in clinical use.
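The 3D region-growing component named above can be illustrated with a compact sketch: breadth-first growth from a seed voxel, accepting neighbors whose intensity lies inside a contrast-enhanced vessel window. The 6-connectivity and the intensity window are assumptions, and the hybrid 4D curvature step is not reproduced.

```python
# Breadth-first 3D region growing over a 6-connected voxel neighborhood.
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, lo=150, hi=400):
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(p, volume.shape)) \
                    and not grown[p] and lo <= volume[p] <= hi:
                grown[p] = True
                queue.append(p)
    return grown

vol = np.random.default_rng(2).integers(0, 500, size=(32, 32, 32))
mask = region_grow_3d(vol, seed=(16, 16, 16), lo=0, hi=500)
print(mask.sum())
```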
Application of synthetic fire-resistant oils in oil systems of turbine equipment for NPPs
NASA Astrophysics Data System (ADS)
Galimova, L. A.
2017-10-01
Results of an investigation of the state of the synthetic fire-resistant turbine oil Fyrquel-L in the oil systems of turbosets operating in the equipment and oil supply facilities of nuclear power plants (NPPs) are presented. On the basis of an analysis of operating experience, it is established that, for reliable and safe operation of turbine equipment whose oil systems use synthetic fire-resistant oils based on phosphoric acid esters, special attention should be paid to two main factors: maintaining the normalized oil water content during operation and storage, and the temperature regime of operation. Methods for maintaining and reducing the acid number are shown. Results of the analysis and investigation of the influence of temperature, and of variations in the qualitative state of the synthetic fire-resistant oil, on its water content are reported. It is shown that fire-resistant turbine oils are characterized by high hydrophilicity and, in distinction from mineral turbine oils, can contain a significant amount of dissolved water, which is not extracted by separation technologies. The more degradation products the oil contains and the higher its acid number, the more dissolved water it can retain. It is demonstrated that organizing chemical control of the total water content of fire-resistant oils using the coulometric method is an important element in supporting the reliable operation of oil systems. It is recommended to use automatic water content monitors for daily monitoring of the oil state in the oil system. Recommendations and measures are developed for improving oil operation at NPPs, water content control, the use of oil cleaning plants, and oil transfer to storage during repair works.
Penzkofer, Michael; Baron, Andrea; Naumann, Annette; Krähmer, Andrea; Schulz, Hartwig; Heuberger, Heidi
2018-01-01
The essential oil is an important compound of the root and rhizome of medicinally used valerian (Valeriana officinalis L. s.l.), with a stated minimum content in the European Pharmacopoeia. The essential oil is located in droplets, whose position and distribution in total root cross-sections of different valerian varieties, root thicknesses, and root horizons are determined in this study using an adapted fluorescence-microscopy and automatic image analysis method. The study was initiated by the following observations: (1) a probable negative correlation between essential oil content and root thickness in selected single plants (elites), observed during the breeding of coarsely rooted valerian with high oil content; and (2) higher essential oil content after careful hand-harvest and processing of the roots. In preliminary tests, the existence of oil-containing droplets in the outer and inner regions of the valerian roots was confirmed by histological techniques and light microscopy, as well as Fourier-transform infrared spectroscopy. Based on this, fluorescence microscopy followed by image analysis of entire root cross-sections showed that a large number of oil droplets (on average 43% of the total) are located close to the root surface. The remaining oil droplets are located in the inner regions (parenchyma) and showed varying density gradients from the inner to the outer regions depending on genotype, root thickness, and harvesting depth. Fluorescence microscopy is suitable for evaluating the prevalence and distribution of essential oil droplets in entire valerian root cross-sections. The oil droplet density gradient varies among genotypes; genotypes with a linear rather than an exponential increase of oil droplet density from the inner to the outer parenchyma can be chosen for better stability during post-harvest processing. The negative correlation between essential oil content and root thickness observed in our breeding material can be counteracted through selection towards generally high oil droplet density levels and large oil droplet sizes independent of root thickness.
NASA Astrophysics Data System (ADS)
Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua
2017-10-01
Wet gluten is a useful quality indicator for wheat, and short-wave near-infrared spectroscopy (NIRS) is a high-performance technique with the advantages of being economical, rapid, and nondestructive. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the optimal pretreatment method; 17 wet-gluten-sensitive variables were selected by GA, and the GA model performed better than the all-variable model, with R2V = 0.88 and RMSEV = 1.47. For qualitative analysis, automatic weighted least squares baseline correction was the optimal pretreatment method, and all-variable models performed better than GA models. The correct classification rates for the three classes of wet gluten content (<24%, 24-30%, >30%) were 95.45%, 84.52%, and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
Oil content Monitor/Control system and method
NASA Astrophysics Data System (ADS)
Schmitt, R. F.; Gavin, J. A.; Kempel, F. D.; Waltrick, C. N.
1985-07-01
This patent application discloses an oil content monitor/control system configured to automatically monitor and control processed effluent from an associated oil/water separator, so that if the processed effluent exceeds predetermined in-port or at-sea oil concentration limits, it is either recirculated to the oil/water separator via the ship's bilge for additional processing or diverted to a holding tank for storage. On the other hand, if the oil concentration of the processed effluent is below the predetermined in-port or at-sea limits, it is discharged overboard.
ERIC Educational Resources Information Center
Hamade, Rachel; Hewlett, Nigel; Scanlon, Emer
2006-01-01
This study aimed to evaluate a new automatic tracheostoma valve: the Provox FreeHands HME (manufactured by Atos Medical AB, Sweden). Data from four laryngectomee participants using automatic and also manual occlusion were subjected to acoustic and perceptual analysis. The main results were a significant decrease, from the manual to automatic…
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automatic examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method that uses the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed; the overlap between the automatic segmentation and the expert ground truth reaches an index value of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
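The localization step named above, sum of absolute differences (SAD), is essentially exhaustive template matching. Below is a minimal Python sketch of SAD localization, assuming a grayscale image and template as NumPy arrays; the synthetic data and the function name sad_localize are illustrative, not the authors' implementation.

```python
import numpy as np

def sad_localize(image, template):
    """Slide a template over an image and return the top-left corner of the
    window minimizing the sum of absolute differences (SAD)."""
    ih, iw = image.shape
    th, tw = template.shape
    best_pos, best_sad = (0, 0), np.inf
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (r, c)
    return best_pos

# Toy usage: locate a bright square in a synthetic "MR slice".
img = np.zeros((64, 64))
img[20:30, 35:45] = 1.0
tmpl = np.ones((10, 10))
print(sad_localize(img, tmpl))  # -> (20, 35)
```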
van Damme, Philip; Quesada-Martínez, Manuel; Cornet, Ronald; Fernández-Breis, Jesualdo Tomás
2018-06-13
Ontologies and terminologies have been identified as key resources for the achievement of semantic interoperability in biomedical domains. The development of ontologies is performed as joint work by domain experts and knowledge engineers. The maintenance and auditing of these resources is also the responsibility of such experts, and this is usually a time-consuming, mostly manual task. Manual auditing is impractical and ineffective for most biomedical ontologies, especially for larger ones. An example is SNOMED CT, a key resource in many countries for codifying medical information. SNOMED CT contains more than 300,000 concepts; consequently, its auditing requires the support of automatic methods. Many biomedical ontologies contain natural language content for humans and logical axioms for machines. The 'lexically suggest, logically define' principle means that there should be a relation between what is expressed in natural language and what is expressed as logical axioms, and that such a relation should be useful for auditing and quality assurance. Moreover, this principle implies that the natural language content for humans could be used to generate the logical axioms for the machines. In this work, we propose a method that combines lexical analysis and clustering techniques to (1) identify regularities in the natural language content of ontologies; (2) cluster, by similarity, labels exhibiting a regularity; (3) extract relevant information from those clusters; and (4) propose logical axioms for each cluster with the support of axiom templates. These logical axioms can then be evaluated against the existing axioms in the ontology to check their correctness and completeness, which are two fundamental objectives in auditing and quality assurance. In this paper, we describe the application of the method to two SNOMED CT modules: a 'congenital' module, obtained using concepts exhibiting the attribute Occurrence - Congenital, and a 'chronic' module, using concepts exhibiting the attribute Clinical course - Chronic. We obtained a precision and recall of 75% and 28%, respectively, for the 'congenital' module, and 64% and 40% for the 'chronic' one. We consider these results to be promising, so our method can contribute to supporting content editors by using automatic methods for assuring the quality of biomedical ontologies and terminologies. Copyright © 2018. Published by Elsevier Inc.
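Steps (1)-(2) of the method, finding lexical regularities and clustering similar labels, can be sketched generically. The following Python fragment clusters ontology-style labels by character n-gram similarity; the toy labels, TF-IDF features, and agglomerative clustering are assumptions standing in for the authors' lexical analysis pipeline.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical SNOMED-like labels sharing lexical regularities.
labels = ["congenital anomaly of heart", "congenital anomaly of kidney",
          "congenital malformation of ear", "chronic disease of liver",
          "chronic disease of kidney"]

# Character n-grams capture shared stems without a full lexical parser.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4)).fit_transform(labels)

clustering = AgglomerativeClustering(n_clusters=2)
print(clustering.fit_predict(X.toarray()))  # e.g. [0 0 0 1 1]: one cluster per regularity
```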
Production of dioxins and furans for various solid fuels burnt in 25 kW automatic boiler
NASA Astrophysics Data System (ADS)
Hopan, František; Horák, Jiří; Krpec, Kamil; Kubesa, Petr; Dej, Milan; Laciok, Vendula
2016-06-01
Brown coal, black coal, and maize straw in pellet form were burnt in an automatic boiler. The production of dibenzodioxins and dibenzofurans, recalculated into toxicity equivalents and expressed as the emission factor relative to the fuel unit, varied over a range of about three orders of magnitude (0.05 up to 78.9 ng/kg) depending on the fuel used. The measured values were compared with the emission factors used for emission inventories in the Czech Republic and Poland and with the emission limit applicable to waste incineration plants. The study proved the influence of the chlorine content of the fuel on the production of dioxins and furans.
[Progress in the development of insulin pumps and their advanced automatic functions].
Prázný, Martin
2015-04-01
Patients with type 1 diabetes are exposed to the permanent burden of careful glucose self-monitoring and precise insulin dosing based on measured glucose values, the carbohydrate content of food, and both planned and unplanned physical activity. Erroneous insulin dosing causes frequent episodes of both hypoglycemia and hyperglycemia. Hypoglycemia is, however, the most clinically significant complication limiting optimal diabetes control. Automatic features for insulin dosing integrated into insulin pumps are thus very important. Low Glucose Suspend (LGS) and Predictive Low Glucose Management (PLGM) use glucose sensor values to prevent hypoglycemia and shorten the time spent in the hypoglycemic range, and they represent a further step toward a fully closed-loop system of insulin treatment.
An automatic indexing method for medical documents.
Wagner, M. M.
1991-01-01
This paper describes MetaIndex, an automatic indexing program that creates symbolic representations of documents for the purpose of document retrieval. MetaIndex uses a simple transition network parser to recognize a language that is derived from the set of main concepts in the Unified Medical Language System Metathesaurus (Meta-1). MetaIndex uses a hierarchy of medical concepts, also derived from Meta-1, to represent the content of documents. The goal of this approach is to improve document retrieval performance by better representation of documents. An evaluation method is described, and the performance of MetaIndex on the task of indexing the Slice of Life medical image collection is reported. PMID:1807564
Study of a safety margin system for powered-lift STOL aircraft
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Jewell, W. F.
1978-01-01
A study was conducted to explore the feasibility of a safety margin system for powered-lift aircraft which require a backside piloting technique. The objective of the safety margin system was to present multiple safety margin criteria as a single variable which could be tracked manually or automatically and which could be monitored for the purpose of deriving safety margin status. The study involved a pilot-in-the-loop analysis of several safety margin system concepts and a simulation experiment to evaluate those concepts which showed promise of providing a good solution. A system was ultimately configured which offered reasonable compromises in controllability, status information content, and the ability to regulate the safety margin at some expense of the allowable low speed flight path envelope.
Knowledge Management in Sensor Enabled Online Services
NASA Astrophysics Data System (ADS)
Smyth, Dominick; Cappellari, Paolo; Roantree, Mark
The Future Internet has as its vision the development of improved features and usability for services, applications, and content. In many cases, services can be provided automatically through the use of monitors or sensors. This means that web-generated sensor data becomes available not only to the companies that own the sensors but also to the domain users who generate the data and to the information and knowledge workers who harvest the output. The goal is to improve the service through better use of the information it provides. Applications and services range from climate, traffic, and health to sports event monitoring. In this paper, we present the WSW system, which harvests web sensor data to provide additional and, in some cases, more accurate information using an analysis of both live and warehoused information.
Autoradiographic method for quantitation of deposition and distribution of radiocalcium in bone
Riggs, B. Lawrence; Bassingthwaighte, James B.; Jowsey, Jenifer; Pequegnat, E. Peter
2010-01-01
A method is described for quantitating autoradiographs of bone-seeking isotopes in microscopic sections of bone. Autoradiographs of bone sections containing 45Ca and internal calibration standards are automatically scanned with a microdensitometer. The digitized optical density output is stored on magnetic tape and is converted by computer to equivalent activity of 45Ca per gram of bone. The computer determines the total 45Ca uptake in the bone section and, on the basis of optical density and anatomic position, quantitatively divides the uptake into 4 components, each representing a separate physiologic process (bone formation, secondary mineralization, diffuse long-term exchange, and surface short-term exchange). The method is also applicable for quantitative analysis of microradiographs of bone sections for mineral content and density. PMID:5416906
Surface Location In Scene Content Analysis
NASA Astrophysics Data System (ADS)
Hall, E. L.; Tio, J. B. K.; McPherson, C. A.; Hwang, J. J.
1981-12-01
The purpose of this paper is to describe techniques and algorithms for locating planar and curved object surfaces in three dimensions using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location, and relational table matching. For curved surfaces, locating corresponding points is very difficult. However, an example using a grid projection technique for locating the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
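The reconstruction step described above, projection matrices first and 3-D grid points second, amounts to linear triangulation. A minimal sketch under the standard direct linear transform (DLT) formulation follows; the toy camera matrices are assumptions, and the paper's exact algorithm may differ.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its projections
    x1, x2 under two 3x4 perspective transformation matrices P1, P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy cameras: identity, and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # ~ [0.5, 0.2, 3.0]
```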
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
ERIC Educational Resources Information Center
Okurut, Jeje Moses
2018-01-01
The impact of automatic promotion practice on students dropping out of Uganda's primary education was assessed using propensity score in difference in differences analysis technique. The analysis strategy was instrumental in addressing the selection bias problem, as well as biases arising from common trends over time, and permanent latent…
Applying reliability analysis to design electric power systems for More-electric aircraft
NASA Astrophysics Data System (ADS)
Zhang, Baozhu
The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use the traditional method of reliability block diagrams to analyze the reliability level of different system topologies. We next propose a new methodology in which system topologies, constrained to meet a specified reliability level, are generated automatically. The path-set method is used for the analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
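For series/parallel structures, reliability block diagram analysis reduces to two combination rules. A minimal sketch with hypothetical component reliabilities, not the thesis's actual topologies, follows.

```python
def series(*blocks):
    """All blocks must work: R = product of block reliabilities."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(*blocks):
    """Redundancy: the system works if any branch works, R = 1 - prod(1 - Ri)."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

# Hypothetical numbers: two redundant generators feeding one distribution bus,
# versus a single generator on the same bus.
print(series(parallel(0.95, 0.95), 0.999))  # ~0.9965
print(series(0.95, 0.999))                  # ~0.9491
```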
ERIC Educational Resources Information Center
Technology & Learning, 2007
2007-01-01
A podcast is audio or visual content that is automatically delivered over a network via free subscription. The advantage podcasts have over traditional oral reports is that students can edit and revise until what they say and how they say it is perfected. iLife applications are ideal for creating podcasts and other digital projects because of…
Mathematical models of water application for a variable rate irrigating hill-seeder
USDA-ARS?s Scientific Manuscript database
A variable rate irrigating hill-seeder can adjust water application automatically according to the difference in soil moisture content in the field to alleviate drought and save water. Two key problems to realize variable rate water application are how to determine the right amount of water for the ...
ERIC Educational Resources Information Center
LaCelle-Peterson, Mark W.; Rivera, Charlene
1994-01-01
Educational reforms will not automatically have the same effects for native English speakers and English language learners (ELLs). Equitable assessment for ELLs must consider equity issues of assessment technologies, provide information on ELLs' developing language abilities and content-area achievement, and be comprehensive, flexible, progress…
BabeLO--An Extensible Converter of Programming Exercises Formats
ERIC Educational Resources Information Center
Queiros, R.; Leal, J. P.
2013-01-01
In the last two decades, there was a proliferation of programming exercise formats that hinders interoperability in automatic assessment. In the lack of a widely accepted standard, a pragmatic solution is to convert content among the existing formats. BabeLO is a programming exercise converter providing services to a network of heterogeneous…
Adaptable Learning Assistant for Item Bank Management
ERIC Educational Resources Information Center
Nuntiyagul, Atorn; Naruedomkul, Kanlaya; Cercone, Nick; Wongsawang, Damras
2008-01-01
We present PKIP, an adaptable learning assistant tool for managing question items in item banks. PKIP is not only able to automatically assist educational users to categorize the question items into predefined categories by their contents but also to correctly retrieve the items by specifying the category and/or the difficulty level. PKIP adapts…
Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy
2009-01-01
Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.
Fraley, Stephanie I.; Athamanolap, Pornpat; Masek, Billie J.; Hardick, Justin; Carroll, Karen C.; Hsieh, Yu-Hsiang; Rothman, Richard E.; Gaydos, Charlotte A.; Wang, Tza-Huei; Yang, Samuel
2016-01-01
High Resolution Melt (HRM) is a versatile and rapid post-PCR DNA analysis technique primarily used to differentiate sequence variants among only a few short amplicons. We recently developed a one-vs-one support vector machine algorithm (OVO SVM) that enables the use of HRM for identifying numerous short amplicon sequences automatically and reliably. Herein, we set out to maximize the discriminating power of HRM + SVM for a single genetic locus by testing longer amplicons harboring significantly more sequence information. Using universal primers that amplify the hypervariable bacterial 16S rRNA gene as a model system, we found that long amplicons yield more complex HRM curve shapes. We developed a novel nested OVO SVM approach to take advantage of this feature and achieved 100% accuracy in the identification of 37 clinically relevant bacteria in leave-one-out cross-validation. A subset of organisms was independently tested. Those from pure culture were identified with high accuracy, while those tested directly from clinical blood bottles displayed more technical variability and reduced accuracy. Our findings demonstrate that long sequences can be accurately and automatically profiled by HRM with a novel nested SVM approach and suggest that clinical sample testing is feasible with further optimization. PMID:26778280
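As a rough illustration of the classification machinery (though not of the paper's nested scheme), scikit-learn's SVC already trains one-vs-one binary classifiers internally, so melt curves represented as fixed-length vectors can be classified directly. The sigmoid "melt curves" below are synthetic stand-ins for real HRM data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
temps = np.linspace(0, 1, 50)

def melt_curve(shift):
    # Sigmoid fluorescence drop at a class-specific melt temperature.
    return 1.0 / (1.0 + np.exp((temps - shift) * 30))

# 20 noisy replicate curves for each of three "organisms".
X = np.vstack([melt_curve(s) + rng.normal(0, 0.02, 50)
               for s in (0.3, 0.5, 0.7) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

# SVC trains k*(k-1)/2 one-vs-one binary classifiers internally.
clf = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
print(clf.score(X, y))  # ~1.0 on this easy synthetic set
```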
Automatic activation of word phonology from print in deep dyslexia.
Katz, R B; Lanzoni, S M
1992-11-01
The performance of deep dyslexics in oral reading and other tasks suggests that they are poor at activating the phonology of words and non-words from printed stimuli. As the tasks ordinarily used to test deep dyslexics require controlled processing, it is possible that the phonology of printed words can be better activated on an automatic basis. This study investigated this possibility by testing a deep dyslexic patient on a lexical decision task with pairs of stimuli presented simultaneously. In Experiment 1, which used content words as stimuli, the deep dyslexic, like normal subjects, showed faster reaction times on trials with rhyming, similarly spelled stimuli (e.g. bribe-tribe) than on control trials (consisting of non-rhyming, dissimilarly spelled words), but slower reaction times on trials with non-rhyming, similarly spelled stimuli (e.g. couch-touch). When the experiment was repeated using function words as stimuli, the patient no longer showed a phonological effect. Therefore, the phonological activation of printed content words by deep dyslexics may be better than would be expected on the basis of their oral reading performance.
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
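A common discrete estimator of Gaussian curvature on a triangle mesh is the angle deficit at each vertex; the sketch below uses that standard estimator on a toy pyramid and is not the authors' Automated Building Block Algorithm itself.

```python
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """Discrete Gaussian curvature per vertex as the angle deficit
    2*pi - sum(incident triangle angles); valid for interior vertices."""
    K = np.full(len(vertices), 2 * np.pi)
    for f in faces:
        for i in range(3):
            p = vertices[f[i]]
            a = vertices[f[(i + 1) % 3]] - p
            b = vertices[f[(i + 2) % 3]] - p
            cos_ang = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
            K[f[i]] -= np.arccos(np.clip(cos_ang, -1.0, 1.0))
    return K

# Apex of a square pyramid: positive deficit (~1.36 rad) flags the sharp point.
V = np.array([[0.0, 0, 1], [1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]])
F = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
print(angle_deficit_curvature(V, F)[0])
```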
Automatic morphological classification of galaxy images
Shamir, Lior
2009-01-01
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple weighted nearest neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically as spiral, elliptical, and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
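The two ingredients named here, Fisher scores for feature selection and a weighted nearest neighbor rule, can be sketched compactly. The synthetic data and the particular Fisher score formula used below (weighted between-class variance of class means over pooled within-class variance) are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: weighted between-class variance of the
    class means divided by the pooled within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum(((X[y == c].mean(axis=0) - mu) ** 2) * (y == c).sum() for c in classes)
    den = sum(X[y == c].var(axis=0) * (y == c).sum() for c in classes)
    return num / (den + 1e-12)

def wnn_predict(X_train, y_train, x, w):
    """Weighted nearest neighbor: Fisher scores weight each feature
    in the squared-distance computation."""
    d = (((X_train - x) ** 2) * w).sum(axis=1)
    return y_train[np.argmin(d)]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
X[:, 0] += y  # make feature 0 the only informative one
w = fisher_scores(X, y)
print(w.round(2))  # feature 0 receives by far the largest weight
print(wnn_predict(X, y, X[3], w) == y[3])  # trivial sanity check on a training point
```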
Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong
2016-01-01
Few studies have looked at the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012, when 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh Postnatal Depression Scale, stresses of pregnancy by the Pregnancy Pressure Scale, negative automatic thoughts by the Automatic Thoughts Questionnaire, and negative life events by the Life Events Scale for Pregnant Women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic and health behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the eventual direct and general effects of negative automatic thoughts were 0.39 and 0.51, respectively, larger than the effects of negative life events. This study suggests a potentially significant mediating effect of negative automatic thoughts: pregnant women with lower scores for negative automatic thoughts were more likely to suffer less from the negative life events that might lead to antenatal depression.
Toward image phylogeny forests: automatically recovering semantically similar image relationships.
Dias, Zanoni; Goldenstein, Siome; Rocha, Anderson
2013-09-10
In the past few years, several near-duplicate detection methods have appeared in the literature to identify the cohabiting versions of a given document online. Following this trend, there have been some initial attempts to go beyond the detection task and look into the structure of evolution within a set of related images over time. In this paper, we aim at automatically identifying the structure of relationships underlying the images, correctly reconstructing their past history and ancestry information, and grouping them into distinct trees of processing history. We introduce a new algorithm that automatically handles sets comprising different related images and outputs the phylogeny trees (also known as a forest) associated with them. Image phylogeny algorithms have many applications, such as finding the first image within a set posted online (useful for tracking copyright infringement perpetrators), hinting at child pornography content creators, and narrowing down a list of suspects for online harassment using photographs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Amini, Reza; Sabourin, Catherine; De Koninck, Joseph
2011-12-01
Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word search and emotional affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using words' frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictability beyond that of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% (kappa=.485) on the negative emotional tone scale, and for the first time reached an accuracy of 77% (kappa=.520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.
Keyphrase based Evaluation of Automatic Text Summarization
NASA Astrophysics Data System (ADS)
Elghannam, Fatma; El-Shishtawy, Tarek
2015-05-01
The development of methods to deal with the informative content of text units in the matching process is a major challenge for automatic summary evaluation systems that use fixed n-gram matching. This limitation causes inaccurate matching between units in peer and reference summaries. The present study introduces a new keyphrase-based summary evaluator, KpEval, for evaluating automatic summaries. KpEval relies on keyphrases since they convey the most important concepts of a text. In the evaluation process, the keyphrases are used in their lemma form as the matching text unit. The system was applied to evaluate different summaries of the Arabic multi-document data set presented at TAC2011. The results showed that the new evaluation technique correlates well with the known evaluation systems Rouge1, Rouge2, RougeSU4, and AutoSummENG MeMoG. KpEval has the strongest correlation with AutoSummENG MeMoG; the Pearson and Spearman correlation coefficients are 0.8840 and 0.9667, respectively.
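The core matching idea, comparing keyphrases in lemma form, can be sketched in a few lines. In the sketch below, lowercasing stands in for real lemmatization (the paper evaluates Arabic summaries, where a morphological analyzer would be needed), and the keyphrase lists are hypothetical.

```python
def kp_overlap(peer_keyphrases, ref_keyphrases, lemmatize=str.lower):
    """Score a peer summary by keyphrase overlap with a reference summary,
    matching keyphrases in a normalized (here: lowercased) lemma form."""
    peer = {lemmatize(k) for k in peer_keyphrases}
    ref = {lemmatize(k) for k in ref_keyphrases}
    hits = peer & ref
    precision = len(hits) / len(peer) if peer else 0.0
    recall = len(hits) / len(ref) if ref else 0.0
    return precision, recall

# Hypothetical keyphrases extracted from a peer and a reference summary.
print(kp_overlap(["Climate Change", "sea level", "emissions"],
                 ["climate change", "sea level", "mitigation"]))
# -> (0.666..., 0.666...)
```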
Choël, Marie; Deboudt, Karine; Osán, János; Flament, Pascal; Van Grieken, René
2005-09-01
Atmospheric aerosols consist of a complex heterogeneous mixture of particles. Single-particle analysis techniques are known to provide unique information on the size-resolved chemical composition of aerosols. A scanning electron microscope (SEM) combined with a thin-window energy-dispersive X-ray (EDX) detector enables the morphological and elemental analysis of single particles down to 0.1 microm with a detection limit of 1-10 wt %, low-Z elements included. To obtain data statistically representative of the air masses sampled, a computer-controlled procedure can be implemented in order to run hundreds of single-particle analyses (typically 1000-2000) automatically in a relatively short period of time (generally 4-8 h, depending on the setup and on the particle loading). However, automated particle analysis by SEM-EDX raises two practical challenges: the accuracy of the particle recognition and the reliability of the quantitative analysis, especially for micrometer-sized particles with low atomic number contents. Since low-Z analysis is hampered by the use of traditional polycarbonate membranes, an alternate choice of substrate is a prerequisite. In this work, boron is being studied as a promising material for particle microanalysis. As EDX is generally said to probe a volume of approximately 1 microm3, geometry effects arise from the finite size of microparticles. These particle geometry effects must be corrected by means of a robust concentration calculation procedure. Conventional quantitative methods developed for bulk samples generate elemental concentrations considerably in error when applied to microparticles. A new methodology for particle microanalysis, combining the use of boron as the substrate material and a reverse Monte Carlo quantitative program, was tested on standard particles ranging from 0.25 to 10 microm. We demonstrate that the quantitative determination of low-Z elements in microparticles is achievable and that highly accurate results can be obtained using the automatic data processing described here compared to conventional methods.
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
A new fuzzy-set-based technique developed for decision making is discussed: a method to generate fuzzy decision rules automatically for image analysis. This paper proposes a method to automatically generate rule-based approaches from training data for solving problems such as autonomous navigation and image understanding. The proposed method is also capable of filtering irrelevant features and criteria out of the rules.
Drug residues in used syringes in Switzerland: A comparative study.
Lefrançois, Elodie; Augsburger, Marc; Esseiva, Pierre
2018-05-01
Harm reduction services, including needle-exchange programmes, have been implemented in Switzerland for over 20 years. Their main aim is to lessen the negative social and/or physical consequences associated with illicit drug consumption and, therefore, improve prevention messages. To this end, knowledge of illicit drug consumption practices is necessary. Periodic self-report surveys are the primary source of data for monitoring drug users' behaviour. Analysis of the residual content of used syringes can bring further, objective knowledge about the products consumed through analytically confirmed data. Used syringes were sampled in two syringe-exchange facilities in Lausanne: a bus where users bring back their syringes (ABS) and an automatic injecting kit dispenser (AIKD). Once the syringes were collected, a validated gas chromatography-mass spectrometry (GC-MS) method was used to detect drugs (licit or illicit) contained in their residual content. Cocaine was the drug most commonly detected alone (39% in ABS and 31% in AIKD), followed by the simultaneous detection of heroin and cocaine (12% and 17%) and of heroin and midazolam (12% and 17%). The differences between the distributions of illicit drugs in used syringes collected from the AIKD and the ABS were not statistically significant. Analysis of the residual content of used syringes as a monitoring tool is an original approach that has already led to a better understanding of the habits of drug-injection users. Over the long term, this approach is a powerful tool to track and detect new consumption practices in quasi-real time. Copyright © 2017 John Wiley & Sons, Ltd.
Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score
Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly
2013-01-01
Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
Analysis and Comparison of Some Automatic Vehicle Monitoring Systems
DOT National Transportation Integrated Search
1973-07-01
In 1970 UMTA solicited proposals and selected four companies to develop systems to demonstrate the feasibility of different automatic vehicle monitoring techniques. The demonstrations culminated in experiments in Philadelphia to assess the performanc...
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages from the Internet within a fraction of a second. Achieving this goal requires efficient categorization of web page content. Manual categorization of billions of web pages with high accuracy is a challenging task, and most of the existing techniques reported in the literature are semi-automatic, limiting the accuracy they can achieve. This paper therefore proposes automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages: features are first extracted and evaluated, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keywords compiled by considering various domain pages, and the keyword list is reduced on the basis of keyword ids. Stemming of keywords and tag text is also performed to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, with a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy across different categories of web pages.
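As a stand-in for the classification stage, the sketch below uses a TF-IDF plus linear SVM pipeline. Note that the paper's actual features come from the HTML DOM and domain-specific keyword lists, which this toy example replaces with plain bag-of-words text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in corpus: text already extracted from web pages.
pages = ["stock market shares trading invest",
         "football league goal match score",
         "market invest bonds portfolio",
         "match tournament player goal"]
domains = ["finance", "sport", "finance", "sport"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(pages, domains)
print(clf.predict(["portfolio shares bonds"]))  # -> ['finance']
```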
Shchudlo, Natalia A; Varsegova, Tatyana N; Shchudlo, Mikhail M; Stepanov, Mikhail A; Yemanov, Andrey A
2018-05-25
Peroneal neuropathy is one of the complications of orthopaedic leg lengthening. Methods of treatment include slowing of distraction and decompression, both of which may lead to additional complications. The purpose of this study was to analyse the changes in histologic peroneal nerve structure during experimental orthopaedic lengthening using various modes of manual or automatic distraction. The obtained data provide a basis for better understanding of peroneal neuropathy pathogenesis and for refinement of prophylaxis and preventive treatment protocols. Four experimental models of canine leg lengthening using the Ilizarov fixator were studied: 1 (n = 10), manual distraction of 1 mm/day divided into four increments; 2 (n = 12), automatic distraction of 1 mm/day in 60 increments; 3 (n = 9) and 4 (n = 9), an increased rate of high-frequency automatic distraction of 3 mm/day in 120 and 180 increments, respectively. In semi-thin cross-sections of the peroneal nerves, fascicular areas, the adipocyte content of the epineurium, endoneurial vascularisation, and morphometric parameters of nerve fibres were assessed by computerised analysis at the end of the distraction and consolidation periods and 30 days after fixator removal. In Groups 1-2, massive nerve fibre degeneration along with obliteration of epineural vessels was revealed in two cases out of 22, whereas in Groups 3-4 it occurred in 10 out of 18 (p < 0.01). Injuries of the perineurium and endoneurial vessels were noted in Group 3, and long-lasting thinning of nerve fascicles in Group 4. A decrease in epineurial fat tissue was revealed in all groups, most drastically in Group 3. Modifications and injuries of nerve sheaths and blood vessels, depending on distraction rate and frequency, contribute to peroneal neuropathy. Its mechanical, circulatory and metabolic causes are discussed.
Convertini, Livia S.; Quatraro, Sabrina; Ressa, Stefania; Velasco, Annalisa
2015-01-01
Background. Even though the interpretation of natural language messages is generally conceived as the result of conscious processing of the message content, the influence of unconscious factors is also well known. What is still insufficiently known is the way such factors work. We tackled interpretation assuming that it is a process whose basic features are the same for the whole of humankind, and employed a naturalistic approach (careful observation of phenomena in conditions as close as possible to "natural" ones, and precise description before and independently of statistical analysis of the data). Methodology. Our field research involved a random sample of 102 adults. We presented them with a complete, real-world-like case of written communication using unabridged message texts. We collected data (participants' written reports on their interpretations) under controlled conditions through a specially designed questionnaire (closed and open answers); we then treated the data with qualitative and quantitative methods. Principal Findings. We gathered some evidence that, in written message interpretation, an intermediate step could exist between reading and the attribution of conscious meaning (we named it "disassembling"), which looks like an automatic reaction to the text words/expressions. The process of interpretation would thus be a discontinuous sequence of three steps of different natures: the initial "decoding" step (i.e., reading, which requires technical abilities), disassembling (the automatic reaction, an unconscious passage), and the final conscious attribution of meaning. If this is true, words and expressions would first function like physical stimuli before being taken into account as symbols. Such a hypothesis, once confirmed, could help explain some links between the cultural (human communication) and the biological (stimulus-reaction mechanisms as the basis for meanings) dimensions of humankind. PMID:26528419
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for the automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and a voting scheme optimized with the Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in a video-based system for automatic vehicle classification and (ii) tree growth ring analysis on a tree cross-cut image.
Toward automatic finite element analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Perucchio, Renato; Voelcker, Herbert
1987-01-01
Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed-signal microelectronic circuits at the transistor level. As analog cells can behave in an unpredictable way when critical areas interact with a particle hit, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effects influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions, and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit successive approximation analog-to-digital converter (ADC) was analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
Adaptive platform for fluorescence microscopy-based high-content screening
NASA Astrophysics Data System (ADS)
Geisbauer, Matthias; Röder, Thorsten; Chen, Yang; Knoll, Alois; Uhl, Rainer
2010-04-01
Fluorescence microscopy has become a widely used tool for the study of medically relevant intra- and intercellular processes. Extracting meaningful information out of a bulk of acquired images is usually performed as a separate post-processing task. Capturing raw data thus yields an unnecessarily huge number of images, of which usually only a few really show the particular information that is sought. Here we propose a novel automated high-content microscope system, which enables experiments to be carried out with a minimum of human interaction and provides a substantial speed increase for cell biology research and its applications compared to widely used workflows. Our fluorescence microscopy system can automatically execute application-dependent data processing algorithms during the actual experiment. These are used for image contrast enhancement, cell segmentation, and/or cell property evaluation. Information retrieved on the fly is used to reduce data and concomitantly control the experiment process in real time. The resulting closed loop of perception and action greatly decreases the amount of stored data while increasing the relative content of valuable data. We demonstrate our approach by addressing the problem of automatically finding cells with a particular combination of labeled receptors and then selectively stimulating them with antagonists or agonists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adel, G.T.; Luttrell, G.H.
Automatic control of fine coal cleaning circuits has traditionally been limited by the lack of sensors for on-line ash analysis. Although several nuclear-based analyzers are available, none have seen widespread acceptance, largely because nuclear sensors are expensive and tend to be influenced by changes in seam type and pyrite content. Recently, researchers at VPI&SU have developed an optical sensor for phosphate analysis. The sensor uses image processing technology to analyze video images of phosphate ore. It is currently being used by PCS Phosphate for off-line analysis of dry flotation concentrate. The primary advantages of optical sensors over nuclear sensors are that they are significantly cheaper, are not subject to measurement variations due to changes in high-atomic-number materials, are inherently safer, and require no special radiation permitting. The purpose of this work is to apply the knowledge gained in the development of an optical phosphate analyzer to the development of an on-line ash analyzer for fine coal slurries. During the past quarter, the current prototype of the on-line optical ash analyzer was subjected to extensive testing at the Middlefork coal preparation plant. Initial work focused on obtaining correlations between ash content and mean gray level, while developmental work on the more comprehensive neural network calibration approach continued. Test work to date shows a promising trend in the correlation between ash content and mean gray level. Unfortunately, data scatter remains significant. Recent tests seem to eliminate variations in percent solids, particle size distribution, measurement angle, and light setting as causes of the data scatter; however, equipment warm-up time and the number of images taken per measurement appear to have a significant impact on the gray-level values obtained. 8 figs., 8 tabs.
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for the final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected, and the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed, but existing methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method and show that, compared to a state-of-the-art automatic selection method, it can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling the fast turnaround required by many machine learning-based clinical data analysis tasks.
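A rough sketch of the progressive-sampling idea, without the Bayesian optimization component the paper couples it with, is shown below: all candidates are screened cheaply on a small sample, and only the best-scoring survivors are re-evaluated on progressively larger samples. The candidate set, sample schedule, and halving rule are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, random_state=0)
candidates = [LogisticRegression(max_iter=1000), SVC(), DecisionTreeClassifier()]

# Screen every candidate on a small sample, then re-evaluate only the
# best-scoring survivors on progressively larger samples.
for n in (250, 1000, 4000):
    scores = [cross_val_score(c, X[:n], y[:n], cv=3).mean() for c in candidates]
    keep = max(1, (len(candidates) + 1) // 2)
    candidates = [candidates[i] for i in np.argsort(scores)[::-1][:keep]]

print(type(candidates[0]).__name__)  # the surviving algorithm
```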
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.
1981-01-01
The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.
Automatic Conflict Detection on Contracts
NASA Astrophysics Data System (ADS)
Fenech, Stephen; Pace, Gordon J.; Schneider, Gerardo
Many software applications are based on collaborating, yet competing, agents or virtual organisations exchanging services. Contracts, expressing obligations, permissions and prohibitions of the different actors, can be used to protect the interests of the organisations engaged in such service exchange. However, the potentially dynamic composition of services with different contracts, and the combination of service contracts with local contracts, can give rise to unexpected conflicts, exposing the need for automatic techniques for contract analysis. In this paper we look at automatic analysis techniques for contracts written in the contract language CL. We present a trace semantics of CL suitable for conflict analysis, and a decision procedure for detecting conflicts (together with its proof of soundness, completeness and termination). We also discuss its implementation and look into applications of the contract analysis approach we present. These techniques are applied to a small case study of an airline check-in desk.
Optical Automatic Car Identification (OACI) : Volume 1. Advanced System Specification.
DOT National Transportation Integrated Search
1978-12-01
A performance specification is provided in this report for an Optical Automatic Car Identification (OACI) scanner system which features 6% improved readability over existing industry scanner systems. It also includes the analysis and rationale which ...
Boschetti, Lucio; Ottavian, Matteo; Facco, Pierantonio; Barolo, Massimiliano; Serva, Lorenzo; Balzan, Stefania; Novelli, Enrico
2013-11-01
The use of near-infrared spectroscopy (NIRS) is proposed in this study for the characterization of the quality parameters of a smoked and dry-cured meat product known as Bauernspeck (originally from Northern Italy), as well as of some technological traits of the pork carcass used for its manufacture. In particular, NIRS is shown to successfully estimate several key quality parameters (including water activity, moisture, dry matter, ash, and protein content), suggesting its suitability for real-time application in place of expensive and time-consuming chemical analysis. Furthermore, a correlative approach based on canonical correlation analysis was used to investigate the spectral regions most correlated with the characteristics of interest. The identification of these regions, which can be linked to the absorbance of the main functional chemical groups, is intended to provide a better understanding of the chemical structure of the substrate under investigation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Harder, Nathalie; Mora-Bermúdez, Felipe; Godinez, William J; Wünsche, Annelie; Eils, Roland; Ellenberg, Jan; Rohr, Karl
2009-11-01
Live-cell imaging allows detailed dynamic cellular phenotyping for cell biology and, in combination with small molecule or drug libraries, for high-content screening. Fully automated analysis of live cell movies has been hampered by the lack of computational approaches that allow tracking and recognition of individual cell fates over time in a precise manner. Here, we present a fully automated approach to analyze time-lapse movies of dividing cells. Our method dynamically categorizes cells into seven phases of the cell cycle and five aberrant morphological phenotypes over time. It reliably tracks cells and their progeny and can thus measure the length of mitotic phases and detect cause and effect if mitosis goes awry. We applied our computational scheme to annotate mitotic phenotypes induced by RNAi gene knockdown of CKAP5 (also known as ch-TOG) or by treatment with the drug nocodazole. Our approach can be readily applied to comparable assays aiming at uncovering the dynamic cause of cell division phenotypes.
Intelligent transient transitions detection of LRE test bed
NASA Astrophysics Data System (ADS)
Zhu, Fengyu; Shen, Zhengguang; Wang, Qi
2013-01-01
Health monitoring systems implement monitoring strategies for complex systems, thereby avoiding catastrophic failure, extending life and leading to improved asset management. A health monitoring system generally encompasses intelligence at many levels and sub-systems, including sensors, actuators, devices, etc. In this paper, a smart sensor is studied which is used to detect transient transitions on a liquid-propellant rocket engine test bed. In view of the dramatic changes under variable operating conditions, wavelet decomposition is used to work in real time on local regions of the signal. In contrast to the traditional Fourier transform method, the major advantage of adding wavelet analysis is the ability to detect transient transitions, as well as to obtain the frequency content, using a much smaller data set. Historically, transient transitions were only detected by offline analysis of the data. The methods proposed in this paper provide an opportunity to detect transient transitions automatically, along with many additional data anomalies, and provide improved data-correction and sensor health diagnostic abilities. The developed algorithms have been tested on actual rocket test data.
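A minimal sketch of the wavelet-based detector described above: decompose the signal with a discrete wavelet transform and flag samples whose finest-scale detail coefficients exceed a robust noise threshold. The db4 wavelet and the MAD-based threshold are illustrative assumptions, not the authors' configuration.

# Sketch: flag transients from the finest-scale wavelet detail coefficients.
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t)
signal[1200:1210] += 2.0                      # injected transient step

coeffs = pywt.wavedec(signal, "db4", level=4)
d1 = coeffs[-1]                               # finest detail coefficients
thresh = 5 * np.median(np.abs(d1)) / 0.6745   # robust noise estimate (MAD)
hits = np.nonzero(np.abs(d1) > thresh)[0]
# Level-1 coefficients are decimated by 2, so map back to sample positions.
print("transient near samples:", hits * 2)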
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
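The adaptive-window idea can be sketched by scoring candidate window sizes on training data and keeping the best one. A plain moving-average forecaster stands in for the fuzzy-rule machinery here, so this is only the window-selection skeleton, not the ATVF model itself; the enrollment figures are from the classic fuzzy-time-series benchmark.

# Sketch: pick the analysis window size that minimizes training MAPE.
import numpy as np

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual))

def best_window(series, candidates):
    scores = {}
    for w in candidates:
        forecasts = np.array([series[i - w:i].mean()
                              for i in range(w, len(series))])
        scores[w] = mape(series[w:], forecasts)
    return min(scores, key=scores.get)

enrollments = np.array([13055, 13563, 13867, 14696, 15460, 15311,
                        15603, 15861, 16807, 16919, 16388, 15433])
w = best_window(enrollments, candidates=range(2, 7))
print(f"selected window: {w}, next forecast: {enrollments[-w:].mean():.0f}")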
The potential of latent semantic analysis for machine grading of clinical case summaries.
Kintsch, Walter
2002-02-01
This paper introduces latent semantic analysis (LSA), a machine learning method for representing the meaning of words, sentences, and texts. LSA induces a high-dimensional semantic space from reading a very large amount of texts. The meaning of words and texts can be represented as vectors in this space and hence can be compared automatically and objectively. A generative theory of the mental lexicon based on LSA is described. The word vectors LSA constructs are context free, and each word, irrespective of how many meanings or senses it has, is represented by a single vector. However, when a word is used in different contexts, context appropriate word senses emerge. Several applications of LSA to educational software are described, involving the ability of LSA to quickly compare the content of texts, such as an essay written by a student and a target essay. An LSA-based software tool is sketched for machine grading of clinical case summaries written by medical students.
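A compact sketch of the grading idea: induce a reduced semantic space from a corpus, project the student text and the target text into it, and score by cosine similarity. The four-sentence corpus is a placeholder for the very large text collections LSA actually requires, and TruncatedSVD over TF-IDF stands in for the original LSA pipeline.

# Sketch: LSA-style similarity scoring between a student and a target summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the patient presented with chest pain and shortness of breath",
    "ecg showed st elevation consistent with myocardial infarction",
    "aspirin and thrombolysis were given in the emergency department",
    "fever resolved after antibiotics were given for pneumonia",
]
target = "chest pain with st elevation treated by thrombolysis"
student = "the patient had myocardial infarction and received thrombolysis"

vec = TfidfVectorizer().fit(corpus + [target, student])
svd = TruncatedSVD(n_components=3, random_state=0).fit(vec.transform(corpus))
t, s = svd.transform(vec.transform([target, student]))
print("LSA similarity score:", float(cosine_similarity([t], [s])[0, 0]))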
Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław
2014-01-01
“SmartMonitor” is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surroundings protection against unauthorized intrusion, crime detection or supervision of ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adapted to fit specific needs, creating a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the “SmartMonitor” system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. The paper focuses on one of the aforementioned functionalities of the system, namely supervision of ill persons. PMID:24905854
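The first two stages of the video-processing model (foreground region detection and candidate object extraction) can be sketched with a stock background subtractor on synthetic frames; the actual SmartMonitor algorithms are proprietary modifications, so treat this as illustrative only.

# Sketch: background subtraction, then contour-based candidate extraction.
import numpy as np
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16)
background = np.full((240, 320), 30, dtype=np.uint8)

for i in range(60):
    frame = background.copy()
    if i >= 50:                      # an "intruder" appears in late frames
        cv2.rectangle(frame, (100, 80), (140, 160), 200, thickness=-1)
    mask = subtractor.apply(frame)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 100:     # drop small noise blobs
        x, y, w, h = cv2.boundingRect(c)
        print("candidate object at", (x, y, w, h))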
ERIC Educational Resources Information Center
Hsu, Ching-Kun; Hwang, Gwo-Jen; Chang, Chih-Kai
2014-01-01
Fostering the listening comprehension of English as Foreign Language (EFL) learners has been recognized as an important and challenging issue. Videos have been used as one of the English listening learning resources; however, without effective learning supports, EFL students are likely to encounter difficulties in comprehending the video content,…
ERIC Educational Resources Information Center
Kwon, Oh-Woog; Lee, Kiyoung; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper introduces a Dialog-Based Computer-Assisted second-Language Learning (DB-CALL) system using semantic and grammar correctness evaluations and the results of its experiment. While the system dialogues with English learners about a given topic, it automatically evaluates the grammar and content properness of their English utterances, then…
Talk the Talk: Learner-Generated Podcasts as Catalysts for Knowledge Creation
ERIC Educational Resources Information Center
Lee, Mark J. W.; McLoughlin, Catherine; Chan, Anthony
2008-01-01
Podcasting allows audio content from one or more user-selected feeds or channels to be automatically downloaded to one's computer as it becomes available, then later transferred to a portable player for consumption at a convenient time and place. It is enjoying phenomenal growth in mainstream society, alongside other Web 2.0 technologies that…
Use of Short Podcasts to Reinforce Learning Outcomes in Biology
ERIC Educational Resources Information Center
Aguiar, Cristina; Carvalho, Ana Amelia; Carvalho, Carla Joana
2009-01-01
Podcasts are audio or video files which can be automatically downloaded to one's computer when the episodes become available, then later transferred to a portable player for listening. The technology thereby enables the user to listen to and/or watch the content anywhere at any time. Formerly popular as radio shows, podcasting was rapidly explored…
Examining the Effect of Academic Procrastination on Achievement Using LMS Data in E-Learning
ERIC Educational Resources Information Center
You, Ji Won
2015-01-01
This study aimed to investigate the effect of academic procrastination on e-learning course achievement. Because all of the interactions among students, instructors, and contents in an e-learning environment were automatically recorded in a learning management system (LMS), procrastination such as the delays in weekly scheduled learning and late…
Automatic Sound Generation for Spherical Objects Hitting Straight Beams Based on Physical Models.
ERIC Educational Resources Information Center
Rauterberg, M.; And Others
Sounds are the result of one or several interactions between one or several objects at a certain place and in a certain environment; the attributes of every interaction influence the generated sound. The following factors influence users in human/computer interaction: the organization of the learning environment, the content of the learning tasks,…
Automatic Guidance of Visual Attention from Verbal Working Memory
ERIC Educational Resources Information Center
Soto, David; Humphreys, Glyn W.
2007-01-01
Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…
Video-Based Big Data Analytics in Cyberlearning
ERIC Educational Resources Information Center
Wang, Shuangbao; Kelly, William
2017-01-01
In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…
[The effects of interpretation bias for social events and automatic thoughts on social anxiety].
Aizawa, Naoki
2015-08-01
Many studies have demonstrated that individuals with social anxiety interpret ambiguous social situations negatively. It is, however, not clear whether this interpretation bias contributes to social anxiety distinctly from depressive automatic thoughts. The present study investigated the effects of negative interpretation bias and automatic thoughts on social anxiety. The Social Intent Interpretation-Questionnaire, which measures the tendency to interpret ambiguous social events as implying others' rejective intents, the short Japanese version of the Automatic Thoughts Questionnaire-Revised, and the Anthropophobic Tendency Scale were administered to 317 university students. Covariance structure analysis indicated that both rejective intent interpretation bias and negative automatic thoughts contributed to mental distress in social situations, mediated by a sense of powerlessness and excessive concern about self and others in social situations. Positive automatic thoughts reduced mental distress. These results indicate the importance of interpretation bias and negative automatic thoughts in the development and maintenance of social anxiety. Implications for understanding the cognitive features of social anxiety are discussed.
Sulfur Content Precision Control Technology for CO2-Shielded Welding Wire Steel
NASA Astrophysics Data System (ADS)
Chaofa, Zhang; Huaqiang, Hao; Youbing, Xiang; Shanxi, Liu
As an impurity present in steel in the form of FeS and MnS, sulfur adversely affects hot-working performance, weldability and corrosion resistance. High sulfur content can cause hot brittleness in the steel. For welding wire steel, when the sulfur content is too high, the drawing performance of the wire rod worsens and the yield of wire rod decreases; when the sulfur content is too low, the automatic wire feeding performance for gas-shielded welding worsens and the weld seam is not smooth. According to the results of welding expert research, 0.010% ≤ S ≤ 0.020% is a reasonable range for CO2-shielded welding wire steel.
Quality Control of True Height Profiles Obtained Automatically from Digital Ionograms.
1982-05-01
Keywords: Ionosphere; Digisonde; Electron Density Profile; Ionogram; Autoscaling; ARTIST. Abstract (fragment): ...analysis technique currently used with the ionogram traces scaled automatically by the ARTIST software [Reinisch and Huang, 1983; Reinisch et al., 1984], and the generalized polynomial analysis technique POLAN [Titheridge, 1985], using the same ARTIST-identified ionogram traces. 2. To determine how
Attitudes as Object-Evaluation Associations of Varying Strength
Fazio, Russell H.
2009-01-01
Historical developments regarding the attitude concept are reviewed, and set the stage for consideration of a theoretical perspective that views attitude, not as a hypothetical construct, but as evaluative knowledge. A model of attitudes as object-evaluation associations of varying strength is summarized, along with research supporting the model’s contention that at least some attitudes are represented in memory and activated automatically upon the individual’s encountering the attitude object. The implications of the theoretical perspective for a number of recent discussions related to the attitude concept are elaborated. Among these issues are the notion of attitudes as “constructions,” the presumed malleability of automatically-activated attitudes, correspondence between implicit and explicit measures of attitude, and postulated dual or multiple attitudes. PMID:19424447
A low-cost inertial smoothing system for landing approach guidance
NASA Technical Reports Server (NTRS)
Niessen, F. R.
1973-01-01
Accurate position and velocity information with low noise content for instrument approaches and landings is required for both control and display applications. In a current VTOL automatic instrument approach and landing research program, radar-derived landing guidance position reference signals, which are noisy, have been mixed with acceleration information derived from low-cost onboard sensors to provide high-quality position and velocity information. An in-flight comparison of signal quality and accuracy has shown good agreement between the low-cost inertial smoothing system and an aided inertial navigation system. Furthermore, the low-cost inertial smoothing system has been proven to be satisfactory in control and display system applications for both automatic and pilot-in-the-loop instrument approaches and landings.
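The mixing scheme can be sketched as a complementary filter: the noisy radar fix corrects a position estimate that is otherwise propagated by integrating onboard acceleration. The gains, noise levels and perfectly known acceleration below are illustrative assumptions, not the flight system's actual parameters.

# Sketch: second-order complementary filter blending radar position with
# acceleration aiding; high-frequency radar noise is strongly attenuated.
import numpy as np

dt, n = 0.02, 500
t = np.arange(n) * dt
true_pos = 50 * np.sin(0.5 * t)
true_acc = -50 * 0.25 * np.sin(0.5 * t)            # second derivative of position
radar = true_pos + np.random.default_rng(1).normal(0, 2.0, n)  # noisy fixes

k1, k2 = 2.0, 1.0                                   # critically damped gains
pos, vel = radar[0], 0.0
smoothed = np.empty(n)
for i in range(n):
    err = radar[i] - pos                            # radar correction term
    vel += (true_acc[i] + k2 * err) * dt            # acceleration aiding
    pos += (vel + k1 * err) * dt
    smoothed[i] = pos

half = n // 2                                       # skip the initial transient
print("raw radar RMS error:  %.2f m" % np.std(radar[half:] - true_pos[half:]))
print("smoothed RMS error:   %.2f m" % np.std(smoothed[half:] - true_pos[half:]))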
Minimization In Digital Design As A Meta-Planning Problem
NASA Astrophysics Data System (ADS)
Ho, William P. C.; Wu, Jung-Gen
1987-05-01
In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two sub-processes. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.
NASA Astrophysics Data System (ADS)
Zhang, Zaixuan; Wang, Kequan; Kim, Insoo S.; Wang, Jianfeng; Feng, Haiqi; Guo, Ning; Yu, Xiangdong; Zhou, Bangquan; Wu, Xiaobiao; Kim, Yohee
2000-05-01
A DOFTS system that has been applied to the automatic temperature alarm systems of coal mines and tunnels has been researched. It is a real-time, on-line, multi-point measurement system. The LD wavelength is 1550 nm; along a 6 km optical fiber, temperature signals are sampled at 3000 points with known spatial positions. Temperature measurement range: -50 °C to 100 °C; measurement uncertainty: ±3 °C; temperature resolution: 0.1 °C; spatial resolution: <5 cm (optical fiber sensor probe), <8 m (spread optical fiber); measurement time: <70 s. The paper discusses the operating principles, underground tests, test content and practical test results.
Development for equipment of the milk macromolecules content detection
NASA Astrophysics Data System (ADS)
Ding, Guochao; Li, Weimin; Shang, Tingyi; Xi, Yang; Gao, Yunli; Zhou, Zhen
An experimental device was developed for the rapid and accurate detection of milk macromolecule content. The device is based on the laser scattering principle, using the ratio of scattered light to transmitted light to characterize the macromolecule content of the ingredients. A peristaltic pump achieves automatic input and output of the milk samples, and a weak-signal detection amplifier circuit based on the ICL7650 is designed to detect the ratio. The real-time operating system μC/OS-II is the core of the software design of the whole system. The experimental data prove that the device can achieve fast, real-time measurement of milk macromolecules.
Taking Science On-air with Google+
NASA Astrophysics Data System (ADS)
Gay, P.
2014-01-01
Cost has long been a deterrent when trying to stream live events to large audiences. While streaming providers like UStream have free options, they include advertising and typically limit broadcasts to originating from a single location. In the autumn of 2011, Google premiered a new, free, video streaming tool -- Hangouts on Air -- as part of their Google+ social network. This platform allows up to ten different computers to stream live content to an unlimited audience, and automatically archives that content to YouTube. In this article we discuss best practices for using this technology to stream events over the internet.
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
A content-based news video retrieval system: NVRS
NASA Astrophysics Data System (ADS)
Liu, Huayong; He, Tingting
2009-10-01
This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by categories such as politics, finance, amusement, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is also efficient.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
Automatic cataloguing and characterization of Earth science data using SE-trees
NASA Technical Reports Server (NTRS)
Rymon, Ron; Short, Nicholas M., Jr.
1994-01-01
In the future, NASA's Earth Observing System (EOS) platforms will produce enormous amounts of remote sensing image data that will be stored in the EOS Data Information System. For the past several years, the Intelligent Data Management group at Goddard's Information Science and Technology Office has been researching techniques for automatically cataloguing and characterizing image data (ADCC) from EOS into a distributed database. At the core of the approach, scientists will be able to retrieve data based upon the contents of the imagery. The ability to automatically classify imagery is key to the success of contents-based search. We report results from experiments applying a novel machine learning framework, based on Set-Enumeration (SE) trees, to the ADCC domain. We experiment with two images: one taken from the Blackhills region in South Dakota, and the other from the Washington DC area. In a classical machine learning experimentation approach, an image's pixels are randomly partitioned into training (i.e. including ground truth or survey data) and testing sets. The prediction model is built using the pixels in the training set, and its performance is estimated using the testing set. With the Blackhills image, we perform various experiments, achieving an accuracy level of 83.2 percent, compared to 72.7 percent using a Back Propagation Neural Network (BPNN) and 65.3 percent using a Gaussian Maximum Likelihood Classifier (GMLC). However, with the Washington DC image, we were only able to achieve 71.4 percent, compared with 67.7 percent reported for the BPNN model and 62.3 percent for the GMLC.
NASA Astrophysics Data System (ADS)
Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.
2017-12-01
Floods are one of the most costly hazards and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maximums, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for the selection of the threshold were suggested to guide that important choice, which otherwise rely on graphical tools and expert judgment. Furthermore, having an automatic procedure that is objective allows for quickly repeating the analysis on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions as well as investigating the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
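One common family of automatic threshold procedures picks the lowest threshold at which the exceedances pass a generalized Pareto goodness-of-fit screen. The sketch below uses a Kolmogorov-Smirnov test as that screen on synthetic flows; the procedures evaluated in the study may use different statistics, so this is only indicative.

# Sketch: automatic POT threshold via a GPD goodness-of-fit screen.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
flows = rng.gumbel(loc=100, scale=30, size=2000)   # synthetic daily peaks

candidates = np.quantile(flows, np.arange(0.80, 0.99, 0.02))
for u in candidates:
    exceed = flows[flows > u] - u
    c, loc, scale = stats.genpareto.fit(exceed, floc=0)
    # Note: KS p-values with fitted parameters are approximate; this is a
    # screen, not a rigorous test.
    pval = stats.kstest(exceed, "genpareto", args=(c, loc, scale)).pvalue
    if pval > 0.05:                                # first acceptable threshold
        print(f"selected threshold u={u:.1f} "
              f"(KS p={pval:.2f}, {exceed.size} peaks)")
        break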
An automatic method to detect and track the glottal gap from high speed videoendoscopic images.
Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés
2015-10-29
The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis relies on a prior accurate identification of the glottal gap, which is the most challenging step for further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest that is adapted over time and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms is also proposed. Thanks to the ROI implementation, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when closure of the vocal folds is incomplete. The novelties of the proposed algorithm lie in the use of temporal information to identify an adaptive ROI and the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure for synthesizing multiline VKGs by identification of the glottal main axis is developed.
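A rough sketch of the segmentation chain: restrict to a region of interest, locate the dark glottal area by thresholding, and refine the boundary with a watershed grown from inner and outer markers. The paper's adaptive ROI, active-contour step and watershed merging are more elaborate; the fixed ROI and synthetic frame here are assumptions.

# Sketch: coarse glottal candidate by Otsu threshold, refined by watershed.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

frame = np.full((100, 100), 180.0)                 # bright folds (synthetic)
frame[40:60, 45:55] = 40.0                         # dark glottal gap
frame += np.random.default_rng(0).normal(0, 5, frame.shape)

roi = frame[30:70, 35:65]                          # fixed ROI for the sketch
dark = roi < threshold_otsu(roi)                   # coarse glottal candidate
markers = np.zeros(roi.shape, dtype=int)
markers[ndimage.binary_erosion(dark, iterations=3)] = 2    # sure glottis
markers[~ndimage.binary_dilation(dark, iterations=3)] = 1  # sure background
gradient = ndimage.gaussian_gradient_magnitude(roi, sigma=1)
labels = watershed(gradient, markers)              # flood markers on gradient
print("glottal area (pixels):", int(np.sum(labels == 2)))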
High-Speed Automatic Microscopy for Real Time Tracks Reconstruction in Nuclear Emulsion
NASA Astrophysics Data System (ADS)
D'Ambrosio, N.
2006-06-01
The Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment will use a massive nuclear emulsion detector to search for ν_μ → ν_τ oscillation by identifying τ leptons through the direct detection of their decay topology. The feasibility of experiments using a large mass emulsion detector is linked to the impressive progress under way in the development of automatic emulsion analysis. A new generation of scanning systems requires the development of fast automatic microscopes for emulsion scanning and image analysis to reconstruct tracks of elementary particles. The paper presents the European Scanning System (ESS) developed in the framework of the OPERA collaboration.
Application of Magnetic Nanoparticles in Pretreatment Device for POPs Analysis in Water
NASA Astrophysics Data System (ADS)
Chu, Dongzhi; Kong, Xiangfeng; Wu, Bingwei; Fan, Pingping; Cao, Xuan; Zhang, Ting
2018-01-01
In order to reduce the processing time and labour of POPs pretreatment, and to solve the problem of extraction columns being easily clogged, this paper proposes a new extraction and enrichment technology based on magnetic nanoparticles. The automatic pretreatment system comprises an automatic sampling unit, an extraction enrichment unit and an elution enrichment unit. The paper briefly introduces the preparation of the magnetic nanoparticles, and describes in detail the structure and control system of the automatic pretreatment system. Mass recovery experiments showed that the system is capable of POPs analysis preprocessing, with magnetic nanoparticle recovery rates above 70%. In conclusion, the authors propose three optimization recommendations.
Application of Semantic Tagging to Generate Superimposed Information on a Digital Encyclopedia
NASA Astrophysics Data System (ADS)
Garrido, Piedad; Tramullas, Jesus; Martinez, Francisco J.
Several works in the literature address the automatic or semi-automatic processing of textual documents containing historical information using free software technologies. However, more research is needed to integrate analysis of the context and to cover the peculiarities of the Spanish language from a semantic point of view. This research proposes a novel knowledge-based strategy combining subject-centric computing, a topic-oriented approach, and superimposed information. Its subsequent combination with artificial intelligence techniques led to an automatic analysis, after implementing a made-to-measure interpreted algorithm which, in turn, produced a good number of associations and events with 90% reliability.
SPE-HPTLC of procyanidins from the barks of different species and clones of Salix.
Pobłocka-Olech, Loretta; Krauze-Baranowska, Mirosława
2008-11-04
A SPE-HPTLC method was developed for the qualitative and quantitative analysis of procyanidin B(1) in willow barks. The chromatography was performed on an HPTLC silica gel layer with the mobile phase chloroform-ethanol-formic acid (50:40:6 v/v/v), in the Automatic Developing Chamber ADC 2. The methanol extracts from willow barks were purified by the SPE method on RP-18 silica gel columns with methanol-water (7:93 v/v) as the eluent. The presence of procyanidin B(1) was revealed in the majority of the investigated willow barks. The content of procyanidin B(1) varied from 0.26 mg/g in the extract of Salix purpurea clone 1067 to 2.24 mg/g in the extract of Salix alba clone 1100. The method was validated for linearity, precision, LOD, LOQ and repeatability.
Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition
NASA Astrophysics Data System (ADS)
Kim, Jonghwa; André, Elisabeth
This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.
Physical and biochemical properties of airborne flour particles involved in occupational asthma.
Laurière, Michel; Gorner, Peter; Bouchez-Mahiout, Isabelle; Wrobel, Richard; Breton, Christine; Fabriès, Jean-François; Choudat, Dominique
2008-11-01
Aerosol particles which deeply penetrate the human airways and which trigger baker's asthma manifestations are known to represent only a part of flour and of airborne particles found in bakeries. They were a major focus of this study. To this end, aerosols were produced from different wheat and rye flours, using an automatic generator designed for bronchial challenge. Particles were characterized for their size distribution, their ability to be deposited in the airways, their protein content, their histological composition and their reactivity with immunoglobulin E (IgE) present in sera from asthmatic bakers. Like dust particles collected in the bakery, the aerosols produced showed increased protein content but decreased IgE reactive protein content when compared to the corresponding bulk flours. The sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis of these particles showed a predominance of endosperm gluten proteins. Under scanning electron microscopy, flour particles displayed various tissue fragments with entrapped large A-starch and small B- or C-starch granules, whereas aerosol particles appeared primarily as a mixture of the endosperm intracellular interstitial protein matrix and small B- or C-starch granules free or still associated. These observations showed that aerosols supposed to penetrate deeply the airways, mainly correspond to intracellular fragments of endosperm cells enriched in gluten proteins but with lower amount of allergens belonging to albumins or globulins.
Argo: an integrative, interactive, text mining-based workbench supporting curation
Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia
2012-01-01
Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Real-time image annotation by manifold-based biased Fisher discriminant analysis
NASA Astrophysics Data System (ADS)
Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming
2008-01-01
Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online usage due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, Co-Training based Manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel Eigen semantic feature (which corresponds to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
MeSH indexing based on automatically generated summaries.
Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto
2013-06-26
MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.
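The pipeline hinges on producing extractive summaries of the full text. The stand-in below scores sentences by average term frequency and keeps the top ones in document order; the two summarizers actually evaluated are not reproduced in this abstract, so this is only a schematic.

# Sketch: frequency-based extractive summarization as a summarizer stand-in.
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)
    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)  # keep original order

article = ("Aspirin reduced infarct size in the treated cohort. "
           "Patients received daily low-dose aspirin for six weeks. "
           "The weather during the trial period was unremarkable. "
           "Infarct size was measured by cardiac MRI.")
print(summarize(article))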
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
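The generator's core move, symbolically differentiating a strain-energy function and emitting compilable stress and tangent code, can be sketched with SymPy standing in for MATHEMATICA. A one-dimensional incompressible neo-Hookean energy stands in for the Fung-orthotropic form used in the paper; the variable names are illustrative.

# Sketch: derive stress and tangent from a strain-energy function and emit
# Fortran (the language of ABAQUS UMAT subroutines).
import sympy as sp

lam, mu = sp.symbols("lam mu", positive=True)     # stretch, shear modulus
W = mu * (lam**2 + 2 / lam - 3) / 2               # toy incompressible energy
stress = sp.simplify(lam * sp.diff(W, lam))       # uniaxial Cauchy stress
tangent = sp.simplify(lam * sp.diff(stress, lam)) # material tangent

print("! generated Fortran for the stress update:")
print(sp.fcode(stress, assign_to="sigma", source_format="free"))
print(sp.fcode(tangent, assign_to="ddsdde", source_format="free"))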
Automatic high throughput empty ISO container verification
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-04-01
Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms are described that process real-time data acquired during the actual inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.
Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.
Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin
2015-01-01
Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies of both experiments, namely, motor imagery and emotion recognition. PMID:26380294
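The identification step can be sketched as follows: unmix the channels with ICA, flag the component that correlates best with an a priori artifact template, zero it, and reconstruct. The wavelet stage and the authors' exact matching criterion are omitted; the blink template and mixing below are synthetic.

# Sketch: template-guided ICA component removal.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 2, 1000)
neural = np.sin(2 * np.pi * 10 * t)                     # 10 Hz "brain" rhythm
blink = (np.abs(t - 1.0) < 0.05).astype(float) * 5      # eye-blink artifact
mixing = rng.normal(size=(4, 2))
eeg = np.column_stack([neural, blink]) @ mixing.T       # 4 mixed channels

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(eeg)                        # (samples, components)

template = blink                                        # a priori artifact info
corr = [abs(np.corrcoef(sources[:, k], template)[0, 1]) for k in range(2)]
bad = int(np.argmax(corr))
sources[:, bad] = 0                                     # drop artifact component
clean = ica.inverse_transform(sources)
print(f"removed component {bad} (|r| = {max(corr):.2f})")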
Automatic contact in DYNA3D for vehicle crashworthiness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whirley, R.G.; Engelmann, B.E.
1993-07-15
This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit nonlinear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. This paper discusses in detail a new four-step automatic contact algorithm. Key aspects of the proposed method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a smoothly varying surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad-hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code.
Proteopedia: Exciting Advances in the 3D Encyclopedia of Biomolecular Structure
NASA Astrophysics Data System (ADS)
Prilusky, Jaime; Hodis, Eran; Sussman, Joel L.
Proteopedia is a collaborative, 3D web-encyclopedia of protein, nucleic acid and other structures. Proteopedia ( http://www.proteopedia.org ) presents 3D biomolecule structures in a broadly accessible manner to a diverse scientific audience through easy-to-use molecular visualization tools integrated into a wiki environment that anyone with a user account can edit. We describe recent advances in the web resource in the areas of content and software. In terms of content, we describe a large growth in user-added content as well as improvements in automatically-generated content for all PDB entry pages in the resource. In terms of software, we describe new features ranging from the capability to create pages hidden from public view to the capability to export pages for offline viewing. New software features also include an improved file-handling system and availability of biological assemblies of protein structures alongside their asymmetric units.
On a program manifold's stability of one contour automatic control systems
NASA Astrophysics Data System (ADS)
Zumatov, S. S.
2017-12-01
A methodology for the stability analysis of single-contour automatic control systems with feedback in the presence of nonlinearities is expounded. The methodology is based on the use of the simplest mathematical models of nonlinear controllable systems. The stability of program manifolds of single-contour automatic control systems is investigated, and sufficient conditions for the absolute stability of a program manifold of such systems are obtained. The Hurwitz angle of absolute stability is determined. Sufficient conditions for the absolute stability of a program manifold of systems controlling the course of an aircraft in autopilot mode are obtained by means of Lyapunov's second method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.I. Rudyka; Y.E. Zingerman; K.G. Lavrov
Up-to-date mathematical methods, such as correlation analysis and expert systems, are employed in creating a model of the coking process. Automatic coking-control systems developed by Giprokoks rule out human error. At an existing coke battery, after introducing automatic control, the heating-gas consumption is reduced by ≥5%.
ERIC Educational Resources Information Center
Kurtz, Peter; And Others
This report is concerned with the implementation of two interrelated computer systems: an automatic document analysis and classification package, and an on-line interactive information retrieval system which utilizes the information gathered during the automatic classification phase. Well-known techniques developed by Salton and Dennis have been…
1995-06-01
Energy efficient, 30 and 40 watt ballasts are Rapid Start, thermally protected, automatic resetting, Class P, high or low power factor as required, CBM, sound rated A, unless otherwise specified.
Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.
Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H
During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
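The projection-masking variant amounts to weighting the 2D similarity metric so that mismatched content contributes nothing. A masked normalized cross-correlation is sketched below; the similarity metric actually used in the study is not specified in this abstract, so the metric choice is an assumption.

# Sketch: weighted (masked) normalized cross-correlation between a
# radiograph and a simulated projection, down-weighting a tool region.
import numpy as np

def masked_ncc(fixed, moving, weight):
    """Normalized cross-correlation over pixels with nonzero weight."""
    w = weight / weight.sum()
    fm = fixed - np.sum(w * fixed)
    mm = moving - np.sum(w * moving)
    num = np.sum(w * fm * mm)
    den = np.sqrt(np.sum(w * fm**2) * np.sum(w * mm**2))
    return num / den

rng = np.random.default_rng(7)
radiograph = rng.normal(size=(64, 64))
drr = radiograph + 0.1 * rng.normal(size=(64, 64))   # simulated projection
drr[20:30, 20:30] = 5.0                              # surgical tool mismatch

mask = np.ones((64, 64))
mask[20:30, 20:30] = 0.0                             # mask out the tool
print("unmasked NCC:", round(masked_ncc(radiograph, drr, np.ones((64, 64))), 3))
print("masked NCC:  ", round(masked_ncc(radiograph, drr, mask), 3))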
Automatically exposing OpenLifeData via SADI semantic Web Services.
González, Alejandro Rodríguez; Callahan, Alison; Cruz-Toledo, José; Garcia, Adrian; Egaña Aranguren, Mikel; Dumontier, Michel; Wilkinson, Mark D
2014-01-01
Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically-rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. We use a well-curated Linked Dataset - OpenLifeData - and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval, but simultaneously provides protection against resource-intensive queries. We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData sources can be recovered without any need for prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
Automatic Camera Control System for a Distant Lecture with Videoing a Normal Classroom.
ERIC Educational Resources Information Center
Suganuma, Akira; Nishigori, Shuichiro
The growth of communication network technology enables students to take part in a distant lecture. Although many lectures in universities are conducted using Web contents, normal lectures using a blackboard are still held. The latter style of lecture is good for a teacher's dynamic explanation. A way to modify it for a distant lecture is to…
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis
2013-01-01
Changes to the design and development of our educational assessments are resulting in the unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
ERIC Educational Resources Information Center
Lai, Jason Kwong-Hung; Leung, Howard; Hu, Zhi-Hui; Tang, Jeff K. T.; Xu, Yun
2010-01-01
One of the difficulties in learning Chinese characters is distinguishing similar characters. This can cause misunderstanding and miscommunication in daily life. Thus, it is important for students learning the Chinese language to be able to distinguish similar characters and understand their proper usage. In this paper, the authors propose a game…
Automatic thermographic image defect detection of composites
NASA Astrophysics Data System (ADS)
Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP
2011-05-01
Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near-IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user-friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence from operator decisions through the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot arm, and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria, and automatically detect impact damage defects. A full-width-half-maximum (FWHM) algorithm is used to quantify flaw sizes. The identified defects are imported into the sentencing engine, which then sentences the inspection (automatically compares analysis results against acceptance criteria) by comparing the most significant defect or group of defects against the inspection standards.
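The flaw-sizing step can be illustrated with a generic full-width-half-maximum computation on a one-dimensional intensity profile; this is a textbook sketch on synthetic data, not the Sentence software's implementation:

```python
# Generic FWHM flaw-sizing sketch on a synthetic 1-D intensity profile.
import numpy as np

def fwhm(profile, pixel_pitch_mm=1.0):
    """Width of the region where contrast exceeds half its peak value."""
    baseline = np.median(profile)
    contrast = profile - baseline
    half_max = contrast.max() / 2.0
    above = np.where(contrast >= half_max)[0]
    return (above[-1] - above[0]) * pixel_pitch_mm

x = np.linspace(-10, 10, 201)
profile = 1.0 + 4.0 * np.exp(-x**2 / 4.0)   # Gaussian-like defect signature
print(f"estimated flaw size: {fwhm(profile, 0.1):.2f} mm")
```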
Content-based cell pathology image retrieval by combining different features
NASA Astrophysics Data System (ADS)
Zhou, Guangquan; Jiang, Lu; Luo, Limin; Bao, Xudong; Shu, Huazhong
2004-04-01
Content-based color cell pathology image retrieval is one of the newest computer image processing applications in medicine. Recently, some algorithms have been developed to achieve this goal. Because of the particular nature of cell pathology images, retrieval based on a single characteristic is not satisfactory. A new method for pathology image retrieval that combines color, texture, and morphologic features to search cell images is proposed. First, nucleus regions of leukocytes are automatically segmented by a K-means clustering method. A single leukocyte region is then detected using threshold segmentation and mathematical morphology. Color, texture, and morphologic features are extracted from the single leukocyte to represent the main attributes in the search query. The features are then normalized, because the numerical ranges and physical meanings of the extracted features differ. Finally, a relevance feedback system is introduced so that the system can automatically adjust the weights of the different features and improve retrieval results according to the feedback information. Retrieval results using the proposed method fit closely with human perception and are better than those obtained with methods based on a single feature.
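A compressed sketch of the first segmentation stage and the normalization step, assuming scikit-learn and synthetic data; the cluster count, the darkest-cluster heuristic, and the feature values are illustrative assumptions:

```python
# Sketch: k-means over pixel colors to isolate dark-stained nucleus pixels,
# then min-max normalization of heterogeneous features onto one scale.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.uniform(0, 255, size=(64, 64, 3))   # stand-in for a stained cell image
pixels = image.reshape(-1, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(64, 64)
nucleus_cluster = int(np.argmin(kmeans.cluster_centers_.sum(axis=1)))  # darkest cluster
nucleus_mask = labels == nucleus_cluster        # candidate nucleus pixels

feats = np.array([[0.21, 112.3, 41.0],          # e.g. [area ratio, mean hue, texture]
                  [0.35,  98.7, 55.2],          # one row per image (invented values)
                  [0.18, 120.1, 38.4]])
normalized = (feats - feats.min(axis=0)) / (np.ptp(feats, axis=0) + 1e-12)
```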
Game-powered machine learning.
Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert
2012-04-24
Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data.
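The train-then-direct loop described above can be sketched as uncertainty sampling; the features, labels, and batch size below are synthetic stand-ins, not the Herd It system's actual pipeline:

```python
# Active-learning sketch: train on game-collected labels, then route the
# clips the classifier is least certain about back to the annotation game.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_labeled = rng.normal(size=(200, 16))        # audio features of labeled clips
y_labeled = rng.integers(0, 2, size=200)      # has-tag / lacks-tag votes from the game
X_pool = rng.normal(size=(5000, 16))          # unlabeled music corpus

clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_pool)[:, 1]
uncertainty = -np.abs(proba - 0.5)            # probability near 0.5 = least certain
to_annotate = np.argsort(uncertainty)[-100:]  # batch to send to the game next
```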
Estimated content percentages of volatile liquids and fat extractables in ready-to-eat foods.
Daft, J L; Cline, J K; Palmer, R E; Sisk, R L; Griffitt, K R
1996-01-01
Content percentages of volatile liquids and fat extractables in 340 samples of ready-to-eat foods were determined gravimetrically. Volatile liquids were determined by drying samples in a microwave oven with a self-contained balance; results were printed out automatically. Fat extractables were extracted from the samples with mixed ethers; extracts were dried and weighed manually. The samples, 191 nonfat and 149 fatty (containing ca 2% or more fat) foods, represent about 5000 different food items and include infant and toddler, ethnic, fast, and imported items. Samples were initially prepared for screening of essential and toxic elements and chemical contamination by chopping and mixing into homogeneous composites. Content determinations were then made on separate portions from each composite. Content results were put into a database for evaluation. Overall, mean results from both determinations agree with published data for moisture and fat contents of similar food items. Coefficients of variation, however, were lower for the determination of volatile liquids than for fat extractables.
Efficient Method of Achieving Agreements between Individuals and Organizations about RFID Privacy
NASA Astrophysics Data System (ADS)
Cha, Shi-Cho
This work presents novel technical and legal approaches that address privacy concerns for personal data in RFID systems. In recent years, to minimize the conflict between convenience and the privacy risk of RFID systems, organizations have been requested to disclose their policies regarding RFID activities, obtain customer consent, and adopt appropriate mechanisms to enforce these policies. However, current research on RFID typically focuses on enforcement mechanisms to protect personal data stored in RFID tags and to prevent organizations from tracking user activity through information emitted by specific RFID tags. A missing piece is how organizations can obtain customers' consent efficiently and flexibly. This study recommends that organizations obtain licenses automatically or semi-automatically before collecting personal data via RFID technologies, rather than dealing with written consents. Such digitalized, standard licenses can be checked automatically to ensure that the collection and use of personal data is based on user consent. Because individuals can easily control who holds licenses and what those licenses contain, the proposed framework provides an efficient and flexible way to overcome the deficiencies of current privacy protection technologies for RFID systems.
Task-irrelevant spider associations affect categorization performance.
Woud, Marcella L; Ellwart, Thomas; Langner, Oliver; Rinck, Mike; Becker, Eni S
2011-09-01
In two studies, the Single Target Implicit Association Test (STIAT) was used to investigate automatic associations toward spiders. In both experiments, we measured the strength of associations between pictures of spiders and either threat-related words or pleasant words. Unlike previous studies, we administered a STIAT version in which stimulus content was task-irrelevant: the target spider pictures were categorized according to the label picture, irrespective of what they showed. Study 1 tested spider-fearful individuals versus non-fearful controls; Study 2 compared spider enthusiasts to non-fearful controls. Results revealed that the novel STIAT version was sensitive to group differences in automatic associations toward spiders. In Study 1, it successfully distinguished between spider-fearful individuals and non-fearful controls. Moreover, STIAT scores predicted automatic fear responses best, whereas controlled avoidance behavior was best predicted by the FAS (German translation of the Fear of Spiders Questionnaire). The results of Study 2 demonstrated that the novel STIAT version was also able to differentiate between spider enthusiasts and non-fearful controls. Copyright © 2010 Elsevier Ltd. All rights reserved.
Cross-Modal Multivariate Pattern Analysis
Meyer, Kaspar; Kaplan, Jonas T.
2011-01-01
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices. PMID:22105246
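In sketch form, cross-modal MVPA amounts to training a pattern classifier on trials from one stimulation condition and testing it on trials from another; the random arrays below stand in for voxel patterns and bear no relation to the studies' real data:

```python
# Cross-modal MVPA sketch: train on auditory-cortex patterns evoked by
# actual sounds, test on patterns evoked by silent, sound-implying videos.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X_sound = rng.normal(size=(80, 300))   # trials recorded while hearing sounds
y_sound = rng.integers(0, 2, size=80)  # e.g. shattering vase vs. howling dog
X_video = rng.normal(size=(40, 300))   # trials while watching muted clips
y_video = rng.integers(0, 2, size=40)

clf = LinearSVC().fit(X_sound, y_sound)
acc = (clf.predict(X_video) == y_video).mean()
print(f"cross-modal decoding accuracy: {acc:.2f}")  # ~0.5 here: data are random
```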
Takahashi, M; Hoshino, H; Kushida, K; Inoue, T
1995-12-10
Pyridinoline (Pyr) and deoxypyridinoline (Dpyr) are mature crosslinks which maintain the structure of the collagen fibril. Pentosidine (Pen) is a senescent crosslink and one of the advanced glycation end products. We developed a direct, one-injection method to measure Pyr, Dpyr, and Pen in the hydrolysate of tissues using reversed-phase high-performance liquid chromatography. Using a linear gradient of acetonitrile and a cleaning step, the target crosslinks were well separated and continuously and automatically assayed. Recovery rates of Pyr, Dpyr, and Pen were 95-116, 94-110, and 92-120%, respectively (n = 5). The intraassay coefficients of variation for Pyr, Dpyr, and Pen were 5.3, 5.8, and 4.3%, respectively (n = 5), and the interassay coefficients of variation were 3.5, 4.6, and 5.7%, respectively (n = 5). Linear regression analysis showed the linearity (r = 0.999) of the calibration line for each of Pyr, Dpyr, and Pen. We measured the content of these crosslinks in tissues from young and old subjects. There was no difference in the content of Pyr and Dpyr between the young and the old group. On the other hand, the content of Pen in the old group was markedly higher than that in the young group. We demonstrated a direct method for measuring two kinds of major crosslinks with different characters and believe that this method will be useful in determining the content of these crosslinks in tissues under various conditions.
Digital signal processing algorithms for automatic voice recognition
NASA Technical Reports Server (NTRS)
Botros, Nazeih M.
1987-01-01
Current digital signal analysis algorithms implemented in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on the digital signal analysis, rather than the linguistic analysis, of the speech signal. Several digital signal processing algorithms are available for voice recognition, including Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these, LPC is the most widely used. This algorithm has a short execution time and does not require large memory storage; however, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms that make fewer assumptions, but they are not widely implemented or investigated. With recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated in order to implement them in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
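The LPC analysis singled out above can be sketched with the standard autocorrelation method and Levinson-Durbin recursion; this is a generic textbook implementation, not the report's own code:

```python
# Autocorrelation-method LPC via Levinson-Durbin recursion (textbook sketch).
import numpy as np

def lpc(x, order):
    """Return all-pole coefficients a[0..order] (a[0] = 1) and prediction error."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])  # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k = -acc / err                 # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]    # symmetric coefficient update
        a[i] = k
        err *= 1.0 - k * k             # updated prediction error
    return a, err

fs = 8000
t = np.arange(0, 0.03, 1 / fs)
frame = np.sin(2 * np.pi * 440 * t) * np.hamming(len(t))  # one windowed frame
coeffs, residual = lpc(frame, order=10)
print(coeffs)
```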
Yoon, Jun-Hee; Kim, Thomas W; Mendez, Pedro; Jablons, David M; Kim, Il-Jin
2017-01-01
The development of next-generation sequencing (NGS) technology makes it possible to sequence whole exomes or genomes. However, data analysis is still the biggest bottleneck for its wide implementation. Most laboratories still depend on manual procedures for data handling and analysis, which translates into delays and decreased efficiency in the delivery of NGS results to doctors and patients. There is thus high demand for an automatic, easy-to-use NGS data analysis system. We developed a comprehensive, automatic genetic analysis controller named Mobile Genome Express (MGE) that works on smartphones and other mobile devices. MGE can handle all the steps of genetic analysis, such as sample information submission, sequencing run quality checks from the sequencer, secured data transfer, and results review. We sequenced an Actrometrix control DNA containing multiple proven human mutations using a targeted sequencing panel, and the whole analysis was managed by MGE and its data review program called ELECTRO. All steps were processed automatically except for the final sequencing review procedure with ELECTRO to confirm mutations. The data analysis process was completed within several hours. We confirmed that the mutations we identified were consistent with our previous results obtained using multi-step, manual pipelines.
Automated Assessment of Child Vocalization Development Using LENA
ERIC Educational Resources Information Center
Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-01-01
Purpose: To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method: Assessment was based on full-day audio…
A device for automatic photoelectric control of the analytical gap for emission spectrographs
Dietrich, John A.; Cooley, Elmo F.; Curry, Kenneth J.
1977-01-01
A photoelectric device has been built that automatically controls the analytical gap between electrodes during the excitation period. The control device allows for precise control of the analytical gap during the arcing of samples, resulting in better precision of analysis.
Automatic Processing of Current Affairs Queries
ERIC Educational Resources Information Center
Salton, G.
1973-01-01
The SMART system is used for the analysis, search, and retrieval of news stories appearing in "Time" magazine. A comparison is made between the automatic text processing methods incorporated into the SMART system and a manual search using the classified index to "Time." (14 references) (Author)
Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf
2010-07-01
Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. Numerous approaches exist for automatic 3D liver segmentation on computed tomography data sets and have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
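In outline, the probability-map step might look like the following, assuming scikit-learn's LinearDiscriminantAnalysis and synthetic voxel features; the class labels and seeding threshold are illustrative assumptions:

```python
# Sketch: multiclass LDA over multi-channel voxel intensities yields
# per-class posteriors that can seed a region-growing stage.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_voxels, n_channels = 10000, 4          # e.g. four MR weightings per voxel
X = rng.normal(size=(n_voxels, n_channels))
y = rng.integers(0, 3, size=n_voxels)    # assumed classes: liver / fatty liver / background

lda = LinearDiscriminantAnalysis().fit(X, y)
prob_map = lda.predict_proba(X)[:, 0]    # posterior of the "liver" class per voxel
seeds = np.where(prob_map > 0.9)[0]      # high-confidence voxels seed region growing
```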
Eccles, B A; Klevecz, R R
1986-06-01
Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, at a frequency of one per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrences of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited for random cell proliferation rate measurements or cell cycle studies. The automatic identification of mitotic cells as described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.
Retrieving definitional content for ontology development.
Smith, L; Wilbur, W J
2004-12-01
Ontology construction requires an understanding of the meaning and usage of its encoded concepts. While definitions found in dictionaries or glossaries may be adequate for many concepts, the actual usage in expert writing could be a better source of information for many others. The goal of this paper is to describe an automated procedure for finding definitional content in expert writing. The approach uses machine learning on phrasal features to learn when sentences in a book contain definitional content, as determined by their similarity to glossary definitions provided in the same book. The end result is not a concise definition of a given concept, but for each sentence, a predicted probability that it contains information relevant to a definition. The approach is evaluated automatically for terms with explicit definitions, and manually for terms with no available definition.
Collective Intelligence Generation from User Contributed Content
NASA Astrophysics Data System (ADS)
Solachidis, Vassilios; Mylonas, Phivos; Geyer-Schulz, Andreas; Hoser, Bettina; Chapman, Sam; Ciravegna, Fabio; Lanfranchi, Vita; Scherp, Ansgar; Staab, Steffen; Contopoulos, Costis; Gkika, Ioanna; Bakaimis, Byron; Smrz, Pavel; Kompatsiaris, Yiannis; Avrithis, Yannis
In this paper we provide a foundation for a new generation of services and tools. We define new ways of capturing, sharing and reusing information and intelligence provided by single users and communities, as well as organizations by enabling the extraction, generation, interpretation and management of Collective Intelligence from user generated digital multimedia content. Different layers of intelligence are generated, which together constitute the notion of Collective Intelligence. The automatic generation of Collective Intelligence constitutes a departure from traditional methods for information sharing, since information from both the multimedia content and social aspects will be merged, while at the same time the social dynamics will be taken into account. In the context of this work, we present two case studies: an Emergency Response and a Consumers Social Group case study.
On the Wrong Track: Process and Content in Moral Psychology
Kahane, Guy
2012-01-01
According to Joshua Greene's influential dual process model of moral judgment, different modes of processing are associated with distinct moral outputs: automatic processing with deontological judgment, and controlled processing with utilitarian judgment. This article aims to clarify and assess Greene's model. I argue that the proposed tie between process and content is based on a misinterpretation of the evidence, and that the supposed evidence for controlled processing in utilitarian judgment is actually likely to reflect, not ‘utilitarian reasoning’, but a form of moral deliberation which, ironically, is actually in serious tension with a utilitarian outlook. This alternative account is further supported by the results of a neuroimaging study showing that intuitive and counterintuitive judgments have similar neural correlates whether or not their content is utilitarian or deontological. PMID:23335831
Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki
2014-09-01
Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we had developed the CADLIVE dynamic simulator, which automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors, and analyzed its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.
Automatic Real Time Ionogram Scaler with True Height Analysis - Artist
1983-07-01
scaled. The corresponding autoscaled values were compared with the manually scaled h'F, h'F2, fminF, foE, foEs, h'E and h'Es. The ARTIST program…
Onyśk, Agnieszka; Boczkowska, Maja
2017-01-01
Simple Sequence Repeat (SSR) markers are among the most frequently used molecular markers in studies of crop diversity and population structure. This is due to their uniform distribution in the genome, high polymorphism, reproducibility, and codominant character. Additional advantages are the possibility of automatic analysis and simple interpretation of the results. The M13-tagged PCR reaction significantly reduces the cost of analysis on automatic genetic analyzers. Here, we also provide a short protocol for SSR data analysis.
[Advances in automatic detection technology for images of thin blood film of malaria parasite].
Juan-Sheng, Zhang; Di-Qiang, Zhang; Wei, Wang; Xiao-Guang, Wei; Zeng-Guo, Wang
2017-05-05
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification. The principles and implementation methods of each step are then given in detail. In addition, the promotion and application of automatic detection technology to thick blood film smears are put forward as questions worthy of study, and a perspective on future work toward automated microscopy diagnosis of malaria is provided.
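The middle stages of such a pipeline can be sketched with scikit-image on a synthetic frame; the filter, threshold choice, and size limits are illustrative, not drawn from any reviewed system:

```python
# Pipeline sketch: pre-processing, segmentation, morphological clean-up,
# and object counting on a stand-in for a thin-smear image.
import numpy as np
from skimage import filters, morphology, measure

rng = np.random.default_rng(4)
frame = rng.normal(0.5, 0.1, size=(256, 256))           # stand-in for a smear image

smoothed = filters.gaussian(frame, sigma=1.0)            # pre-processing
mask = smoothed > filters.threshold_otsu(smoothed)       # segmentation
mask = morphology.remove_small_objects(mask, min_size=20)  # morphological clean-up
labels = measure.label(mask)
candidates = [r for r in measure.regionprops(labels) if r.area < 500]
print(f"{len(candidates)} parasite-candidate objects")   # counting stage
```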
Morphological feature extraction for the classification of digital images of cancerous tissues.
Thiran, J P; Macq, B
1996-10-01
This paper presents a new method for automatic recognition of cancerous tissues from an image of a microscopic section. Based on shape and size analysis of the observed cells, this method provides the physician with nonsubjective numerical values for four criteria of malignancy. The automatic approach is based on mathematical morphology, and more specifically on the use of geodesy. This technique is used first to remove the background noise from the image, then to segment the nuclei of the cells and analyze their shape, size, and texture. From the values of the extracted criteria, an automatic classification of the image (cancerous or not) is finally performed.
Automatic analysis and classification of surface electromyography.
Abou-Chadi, F E; Nashar, A; Saad, M
2001-01-01
In this paper, parametric modeling of surface electromyography (SEMG) signals, which facilitates automatic feature extraction, is combined with artificial neural networks (ANN) to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm, and a probabilistic neural network model. The performance of the three classifiers was compared with that of a Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance, with the percentage of correct classification reaching 90%; poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a fully automatic counting tool in laboratories for wild animal blood analysis or as the first counting stage in a semi-automatic counting tool.
An automatic editing algorithm for GPS data
NASA Technical Reports Server (NTRS)
Blewitt, Geoffrey
1990-01-01
An algorithm has been developed to automatically edit Global Positioning System data such that outlier deletion, cycle slip identification, and correction are independent of clock instability, selective availability, receiver-satellite kinematics, and tropospheric conditions. This algorithm, called TurboEdit, operates on undifferenced, dual-frequency carrier phase data, and requires the use of P-code pseudorange data and a smoothly varying ionospheric electron content. TurboEdit was tested on the large data set from the CASA Uno experiment, which contained over 2500 cycle slips. Analyst intervention was required on 1 percent of the station-satellite passes, almost all of these problems being due to difficulties in extrapolating variations in the ionospheric delay. The algorithm is presently being adapted for real-time data editing in the Rogue receiver for continuous monitoring applications.
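One idea behind this style of editing can be illustrated on the geometry-free (L1 − L2) combination, which varies smoothly with the ionosphere, so a between-epoch jump beyond a threshold flags a candidate cycle slip; the series, threshold, and injected slip below are invented, and the full algorithm additionally uses wide-lane phase with P-code pseudoranges:

```python
# Toy geometry-free screening sketch: a smooth ionospheric series should not
# jump between epochs; a jump beyond a threshold flags a candidate slip.
import numpy as np

rng = np.random.default_rng(5)
epochs = 600
lg = np.cumsum(rng.normal(0, 0.002, size=epochs))  # smooth ionospheric drift (m)
lg[400:] += 0.19                                   # injected fake cycle slip

jumps = np.abs(np.diff(lg))
slips = np.where(jumps > 0.05)[0] + 1              # threshold in meters (assumed)
print("candidate slip epochs:", slips)
```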
Automatic identification of artifacts in electrodermal activity data.
Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind
2015-01-01
Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.
Terminal Sliding Mode Tracking Controller Design for Automatic Guided Vehicle
NASA Astrophysics Data System (ADS)
Chen, Hongbin
2018-03-01
Based on sliding mode variable structure control theory, the path tracking problem of an automatic guided vehicle is studied, and a controller design method based on terminal sliding mode is proposed. First, by analyzing the movement characteristics of the automatic guided vehicle, the kinematic model is presented. Then, improving on the traditional formulation of terminal sliding mode, a nonlinear sliding surface with faster convergence is designed; theoretical analysis verifies that the designed sliding mode is stable and converges in finite time. Finally, the Lyapunov method is combined to design the tracking control law of the automatic guided vehicle; the controller makes the vehicle track the desired trajectory in the global sense as well as in finite time. The simulation results verify the correctness and effectiveness of the control law.
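A generic terminal sliding surface of the kind alluded to, with its finite reaching time, is shown below; the paper's specific nonlinear surface may differ:

```latex
% Generic terminal sliding surface (illustrative, not the paper's exact design):
% e is the tracking error; s = 0 drives e to zero in the finite time t_r.
s = \dot{e} + \beta\, e^{q/p}, \qquad \beta > 0,\ \ 0 < \tfrac{q}{p} < 1,
\qquad
t_r = \frac{p}{\beta\,(p - q)}\,\lvert e(0)\rvert^{(p-q)/p}
```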
Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas
2016-03-01
Information technology is widely applied in healthcare. With regard to scientific research, the SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers, offering clinical data standardization. Until now, SINPE(c) lacked statistical tests obtained by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their theses in stricto sensu master's and doctoral degrees in one postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those produced manually by a statistics professional experienced in this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher, and Student's t tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. The incorporation of automatic statistical analysis into SINPE(c) was shown to be reliable and equal to the manual analysis, validating its use as a tool for medical research.
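The four tests named can be reproduced with SciPy on small made-up samples, mirroring what an automatic module might call; this is not SINPE(c)'s actual code:

```python
# The four tests named in the abstract, run with SciPy on invented samples.
import numpy as np
from scipy import stats

group_a = np.array([5.1, 4.8, 6.0, 5.5, 5.9])   # e.g. outcome under treatment A
group_b = np.array([4.2, 4.9, 4.4, 4.0, 4.6])   # e.g. outcome under treatment B
table = np.array([[12, 8], [5, 15]])            # 2x2 contingency table

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b)
odds, p_fisher = stats.fisher_exact(table)
t_stat, p_t = stats.ttest_ind(group_a, group_b)
print(p_chi2, p_mw, p_fisher, p_t)
```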
Seismpol: a Visual-Basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual-Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used either in interactive earthquake analysis or in automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition method (CMD), so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation derived from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by analyzing a selected signal window in a series of narrow frequency bands. Significant results, supported by well-defined polarizations and source azimuth estimates for P and S phases, are also obtained for short-period seismic events (local microearthquakes).
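The CMD step can be condensed to an eigen-decomposition of the three-component covariance in a sliding window; the data below are random, the window length arbitrary, and the rectilinearity formula one common variant:

```python
# Covariance-matrix polarization sketch on synthetic three-component data.
import numpy as np

rng = np.random.default_rng(6)
z, n, e = rng.normal(size=(3, 2000))     # vertical, north, east components

win = slice(1000, 1200)                  # one sliding analysis window
cov = np.cov(np.vstack([z[win], n[win], e[win]]))
evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

rectilinearity = 1.0 - (evals[0] + evals[1]) / (2.0 * evals[2])
principal = evecs[:, 2]                  # eigenvector of the largest eigenvalue
azimuth = np.degrees(np.arctan2(principal[2], principal[1]))  # east over north
print(f"rectilinearity={rectilinearity:.2f}, azimuth={azimuth:.1f} deg")
```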
NASA Astrophysics Data System (ADS)
Moghimi, Saba; Kushki, Azadeh; Power, Sarah; Guerguerian, Anne Marie; Chau, Tom
2012-04-01
Emotional responses can be induced by external sensory stimuli. For severely disabled nonverbal individuals who have no means of communication, the decoding of emotion may offer insight into an individual's state of mind and his/her response to events taking place in the surrounding environment. Near-infrared spectroscopy (NIRS) provides an opportunity for bedside monitoring of emotions via measurement of hemodynamic activity in the prefrontal cortex, a brain region known to be involved in emotion processing. In this paper, prefrontal cortex activity of ten able-bodied participants was monitored using NIRS as they listened to 78 music excerpts with different emotional content and a control acoustic stimulus consisting of Brown noise. The participants rated their emotional state after listening to each excerpt along the dimensions of valence (positive versus negative) and arousal (intense versus neutral). These ratings were used to label the NIRS trial data. Using a linear discriminant analysis-based classifier and a two-dimensional time-domain feature set, trials with positive and negative emotions were discriminated with an average accuracy of 71.94% ± 8.19%. Trials with audible Brown noise, representing a neutral response, were differentiated from high-arousal trials with an average accuracy of 71.93% ± 9.09% using a two-dimensional feature set. In nine out of the ten participants, the response to the neutral Brown noise was differentiated from high-arousal trials with accuracies exceeding chance level, and positive versus negative emotional differentiation accuracies exceeded chance level in seven out of the ten participants. These results illustrate that NIRS recordings of the prefrontal cortex during presentation of music with emotional content can be automatically decoded in terms of both valence and arousal, encouraging future investigation of NIRS-based emotion detection in individuals with severe disabilities.