Techniques for information extraction from compressed GPS traces : final report.
DOT National Transportation Integrated Search
2015-12-31
Developing techniques for extracting information requires a good understanding of methods used to compress the traces. Many techniques for compressing trace data : consisting of position (i.e., latitude/longitude) and time values have been developed....
Place in Perspective: Extracting Online Information about Points of Interest
NASA Astrophysics Data System (ADS)
Alves, Ana O.; Pereira, Francisco C.; Rodrigues, Filipe; Oliveirinha, João
During the last few years, the amount of online descriptive information about places has reached reasonable dimensions for many cities in the world. Being such information mostly in Natural Language text, Information Extraction techniques are needed for obtaining the meaning of places that underlies these massive amounts of commonsense and user made sources. In this article, we show how we automatically label places using Information Extraction techniques applied to online resources such as Wikipedia, Yellow Pages and Yahoo!.
Knowledge Discovery and Data Mining: An Overview
NASA Technical Reports Server (NTRS)
Fayyad, U.
1995-01-01
The process of knowledge discovery and data mining is the process of information extraction from very large databases. Its importance is described along with several techniques and considerations for selecting the most appropriate technique for extracting information from a particular data set.
Developing a hybrid dictionary-based bio-entity recognition technique.
Song, Min; Yu, Hwanjo; Han, Wook-Shin
2015-01-01
Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of the contextual cues to further improve the performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. The results imply that the performance of dictionary-based extraction techniques is largely influenced by information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three difference dictionaries in recall.
Developing a hybrid dictionary-based bio-entity recognition technique
2015-01-01
Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of the contextual cues to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three difference dictionaries in recall. PMID:26043907
NASA Astrophysics Data System (ADS)
Loredana Soran, Maria; Codruta Cobzac, Simona; Varodi, Codruta; Lung, Ildiko; Surducan, Emanoil; Surducan, Vasile
2009-08-01
Three different techniques (maceration, sonication and extraction in microwave field) were used for extraction of essential oils from Ocimum basilicum L. The extracts were analyzed by TLC/HPTLC technique and the fingerprint informations were obtained. The GC-FID was used to characterized the extraction efficiency and for identify the terpenic bioactive compounds. The most efficient extraction technique was maceration followed by microwave and ultrasound. The best extraction solvent system was ethyl ether + ethanol (1:1, v/v). The main compounds identified in Ocimum basilicum L. extracts were: α and β-pinene (mixture), limonene, citronellol, and geraniol.
Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro
2015-06-15
Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.
Information Hiding In Digital Video Using DCT, DWT and CvT
NASA Astrophysics Data System (ADS)
Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb
2018-05-01
The type of video that used in this proposed hiding a secret information technique is .AVI; the proposed technique of a data hiding to embed a secret information into video frames by using Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and Curvelet Transform (CvT). An individual pixel consists of three color components (RGB), the secret information is embedded in Red (R) color channel. On the receiver side, the secret information is extracted from received video. After extracting secret information, robustness of proposed hiding a secret information technique is measured and obtained by computing the degradation of the extracted secret information by comparing it with the original secret information via calculating the Normalized cross Correlation (NC). The experiments shows the error ratio of the proposed technique is (8%) while accuracy ratio is (92%) when the Curvelet Transform (CvT) is used, but compared with Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT), the error rates are 11% and 14% respectively, while the accuracy ratios are (89%) and (86%) respectively. So, the experiments shows the Poisson noise gives better results than other types of noises, while the speckle noise gives worst results compared with other types of noises. The proposed technique has been established by using MATLAB R2016a programming language.
Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes
ERIC Educational Resources Information Center
Finch, Dezon Kile
2012-01-01
Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…
Information extraction and transmission techniques for spaceborne synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.
1984-01-01
Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.
Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.
Dave, Jaydev K; Gingold, Eric L
2013-01-01
The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
Information Extraction Using Controlled English to Support Knowledge-Sharing and Decision-Making
2012-06-01
or language variants. CE-based information extraction will greatly facilitate the processes in the cognitive and social domains that enable forces...terminology or language variants. CE-based information extraction will greatly facilitate the processes in the cognitive and social domains that...processor is run to turn the atomic CE into a more “ stylistically felicitous” CE, using techniques such as: aggregating all information about an entity
The Science and Art of Eyebrow Transplantation by Follicular Unit Extraction
Gupta, Jyoti; Kumar, Amrendra; Chouhan, Kavish; Ariganesh, C; Nandal, Vinay
2017-01-01
Eyebrows constitute a very important and prominent feature of the face. With growing information, eyebrow transplant has become a popular procedure. However, though it is a small area it requires a lot of precision and knowledge regarding anatomy, designing of brows, extraction and implantation technique. This article gives a comprehensive view regarding eyebrow transplant with special emphasis on follicular unit extraction technique, which has become the most popular technique. PMID:28852290
ERIC Educational Resources Information Center
Ramsey-Klee, Diane M.; Richman, Vivian
The purpose of this research is to develop content analytic techniques capable of extracting the differentiating information in narrative performance evaluations for enlisted personnel in order to aid in the process of selecting personnel for advancement, duty assignment, training, or quality retention. Four tasks were performed. The first task…
Review of Extracting Information From the Social Web for Health Personalization
Karlsen, Randi; Bonander, Jason
2011-01-01
In recent years the Web has come into its own as a social platform where health consumers are actively creating and consuming Web content. Moreover, as the Web matures, consumers are gaining access to personalized applications adapted to their health needs and interests. The creation of personalized Web applications relies on extracted information about the users and the content to personalize. The Social Web itself provides many sources of information that can be used to extract information for personalization apart from traditional Web forms and questionnaires. This paper provides a review of different approaches for extracting information from the Social Web for health personalization. We reviewed research literature across different fields addressing the disclosure of health information in the Social Web, techniques to extract that information, and examples of personalized health applications. In addition, the paper includes a discussion of technical and socioethical challenges related to the extraction of information for health personalization. PMID:21278049
Rice, M K; Henry, T J
2018-01-01
Diseased cheek teeth in horses often require invasive extraction techniques that carry a high rate of complications. Techniques and instrumentation were developed to perform partial crown removal to aid standing intraoral extraction of diseased cheek teeth in horses. To analyse success rates and post-surgical complications in horses undergoing cheek teeth extraction assisted by partial crown removal. Retrospective cohort study. This study included 165 horses with 194 diseased cheek teeth that were extracted orally assisted by partial crown removal between 2010 and 2016. Medical records were analysed, including case details, obtained radiographs, surgical reports and follow-up information. Follow-up information (≥2 months) was obtained for 151 horses (91.5%). There were 95 horses examined post-operatively by the authors and, 16 horses by the referring veterinarian; in 40 horses, post-operative follow up was obtained by informal telephone interviews with the owner. Successful standing intraoral extraction of cheek teeth was obtained in 164/165 horses (99.4%). Twenty-five of these horses (15.2%) required additional intraoral extraction methods to complete the extraction, including minimally invasive transbuccal approach (n = 21) and tooth sectioning (n = 4). There was one (0.6%) horse with intraoral extraction failure that required standing repulsion to complete the extraction. The intraoperative complication of fractured root tips occurred in 11/165 horses (6.7%). Post-operative complications occurred in 6/165 horses (3.6%), including alveolar sequestra (n = 4), mild delay of alveolar healing at 2 months (n = 1), and development of a persistent draining tract secondary to a retained root tip (n = 1). Specialised instrumentation and additional training in the technique are recommended to perform partial crown removal in horses. Horses with cheek teeth extraction by partial crown removal have an excellent prognosis for a positive outcome. The term partial coronectomy is proposed for this technique. © 2017 EVJ Ltd.
Evaluation of Ultrasonic Fiber Structure Extraction Technique Using Autopsy Specimens of Liver
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Hirai, Kazuki; Yamada, Hiroyuki; Ebara, Masaaki; Hachiya, Hiroyuki
2005-06-01
It is very important to diagnose liver cirrhosis noninvasively and correctly. In our previous studies, we proposed a processing technique to detect changes in liver tissue in vivo. In this paper, we propose the evaluation of the relationship between liver disease and echo information using autopsy specimens of a human liver in vitro. It is possible to verify the function of a processing parameter clearly and to compare the processing result and the actual human liver tissue structure by in vitro experiment. In the results of our processing technique, information that did not obey a Rayleigh distribution from the echo signal of the autopsy liver specimens was extracted depending on changes in a particular processing parameter. The fiber tissue structure of the same specimen was extracted from a number of histological images of stained tissue. We constructed 3D structures using the information extracted from the echo signal and the fiber structure of the stained tissue and compared the two. By comparing the 3D structures, it is possible to evaluate the relationship between the information that does not obey a Rayleigh distribution of the echo signal and the fibrosis structure.
Nikfarjam, Azadeh; Sarker, Abeed; O'Connor, Karen; Ginn, Rachel; Gonzalez, Graciela
2015-05-01
Social media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media. We introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique. ADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance. It is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis
NASA Astrophysics Data System (ADS)
Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui
2015-07-01
Auscultation of heart sound (HS) signals serves as an important primary approach to diagnose cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction technique has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper innovatively proposes a novel murmur-based HS feature extraction method since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with extracted features, the proposed method provides an attractive candidate in automatic HS auscultation.
Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S
2018-03-01
Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset and apply our technique to the data and compare the derived trajectories and the original. Finally, we present spatiotemporal trend analysis for statistical datasets including twitter data, maritime search and rescue events, and syndromic surveillance.
aCGH-MAS: Analysis of aCGH by means of Multiagent System
Benito, Rocío; Bajo, Javier; Rodríguez, Ana Eugenia; Abáigar, María
2015-01-01
There are currently different techniques, such as CGH arrays, to study genetic variations in patients. CGH arrays analyze gains and losses in different regions in the chromosome. Regions with gains or losses in pathologies are important for selecting relevant genes or CNVs (copy-number variations) associated with the variations detected within chromosomes. Information corresponding to mutations, genes, proteins, variations, CNVs, and diseases can be found in different databases and it would be of interest to incorporate information of different sources to extract relevant information. This work proposes a multiagent system to manage the information of aCGH arrays, with the aim of providing an intuitive and extensible system to analyze and interpret the results. The agent roles integrate statistical techniques to select relevant variations and visualization techniques for the interpretation of the final results and to extract relevant information from different sources of information by applying a CBR system. PMID:25874203
ERIC Educational Resources Information Center
Chen, Hsinchun
2003-01-01
Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)
Extraction of Data from a Hospital Information System to Perform Process Mining.
Neira, Ricardo Alfredo Quintano; de Vries, Gert-Jan; Caffarel, Jennifer; Stretton, Erin
2017-01-01
The aim of this work is to share our experience in relevant data extraction from a hospital information system in preparation for a research study using process mining techniques. The steps performed were: research definition, mapping the normative processes, identification of tables and fields names of the database, and extraction of data. We then offer lessons learned during data extraction phase. Any errors made in the extraction phase will propagate and have implications on subsequent analyses. Thus, it is essential to take the time needed and devote sufficient attention to detail to perform all activities with the goal of ensuring high quality of the extracted data. We hope this work will be informative for other researchers to plan and execute extraction of data for process mining research studies.
FIR: An Effective Scheme for Extracting Useful Metadata from Social Media.
Chen, Long-Sheng; Lin, Zue-Cheng; Chang, Jing-Rong
2015-11-01
Recently, the use of social media for health information exchange is expanding among patients, physicians, and other health care professionals. In medical areas, social media allows non-experts to access, interpret, and generate medical information for their own care and the care of others. Researchers paid much attention on social media in medical educations, patient-pharmacist communications, adverse drug reactions detection, impacts of social media on medicine and healthcare, and so on. However, relatively few papers discuss how to extract useful knowledge from a huge amount of textual comments in social media effectively. Therefore, this study aims to propose a Fuzzy adaptive resonance theory network based Information Retrieval (FIR) scheme by combining Fuzzy adaptive resonance theory (ART) network, Latent Semantic Indexing (LSI), and association rules (AR) discovery to extract knowledge from social media. In our FIR scheme, Fuzzy ART network firstly has been employed to segment comments. Next, for each customer segment, we use LSI technique to retrieve important keywords. Then, in order to make the extracted keywords understandable, association rules mining is presented to organize these extracted keywords to build metadata. These extracted useful voices of customers will be transformed into design needs by using Quality Function Deployment (QFD) for further decision making. Unlike conventional information retrieval techniques which acquire too many keywords to get key points, our FIR scheme can extract understandable metadata from social media.
NASA Astrophysics Data System (ADS)
Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-10-01
Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values which are critical to building up strategies for sustainable development. The conventional techniques used for extracting tree features include; ground surveying and interpretation of the aerial photography. However, these techniques are associated with some constraint, such as labour intensive field work, a lot of financial requirement, influences by weather condition and topographical covers which can be overcome by means of integrated airborne based LiDAR and very high resolution digital image datasets. This study presented a semi-automated approach for extracting urban trees from integrated airborne based LIDAR and multispectral digital image datasets over Istanbul city of Turkey. The above scheme includes detection and extraction of shadow free vegetation features based on spectral properties of digital images using shadow index and NDVI techniques and automated extraction of 3D information about vegetation features from the integrated processing of shadow free vegetation image and LiDAR point cloud datasets. The ability of the developed algorithms shows a promising result as an automated and cost effective approach to estimating and delineated 3D information of urban trees. The research also proved that integrated datasets is a suitable technology and a viable source of information for city managers to be used in urban trees management.
Effects of band selection on endmember extraction for forestry applications
NASA Astrophysics Data System (ADS)
Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis
2014-10-01
In spectral unmixing theory, data reduction techniques play an important role as hyperspectral imagery contains an immense amount of data, posing many challenging problems such as data storage, computational efficiency, and the so called "curse of dimensionality". Feature extraction and feature selection are the two main approaches for dimensionality reduction. Feature extraction techniques are used for reducing the dimensionality of the hyperspectral data by applying transforms on hyperspectral data. Feature selection techniques retain the physical meaning of the data by selecting a set of bands from the input hyperspectral dataset, which mainly contain the information needed for spectral unmixing. Although feature selection techniques are well-known for their dimensionality reduction potentials they are rarely used in the unmixing process. The majority of the existing state-of-the-art dimensionality reduction methods set criteria to the spectral information, which is derived by the whole wavelength, in order to define the optimum spectral subspace. These criteria are not associated with any particular application but with the data statistics, such as correlation and entropy values. However, each application is associated with specific land c over materials, whose spectral characteristics present variations in specific wavelengths. In forestry for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc., can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc., are considered as the endmembers of interest. This paper focuses on investigating the effects of band selection on the endmember extraction by exploiting the information of the vegetation absorbance spectral zones. More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. Experiments comprise application of well-known signal subspace estimation and endmember extraction methods on a hyperspectral imagery that presents a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.
Integrating Information Extraction Agents into a Tourism Recommender System
NASA Astrophysics Data System (ADS)
Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente
Recommender systems face some problems. On the one hand information needs to be maintained updated, which can result in a costly task if it is not performed automatically. On the other hand, it may be interesting to include third party services in the recommendation since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques in order to automatically extract and classify information from the Web. Its goal is to maintain the system updated and obtain information about third party services that are not offered by service providers inside the system.
Tuinman, Albert A; Lewis, Linda A; Lewis, Samuel A
2003-06-01
The application of electrospray ionization mass spectrometry (ESI-MS) to trace-fiber color analysis is explored using acidic dyes commonly employed to color nylon-based fibers, as well as extracts from dyed nylon fibers. Qualitative information about constituent dyes and quantitative information about the relative amounts of those dyes present on a single fiber become readily available using this technique. Sample requirements for establishing the color identity of different samples (i.e., comparative trace-fiber analysis) are shown to be submillimeter. Absolute verification of dye mixture identity (beyond the comparison of molecular weights derived from ESI-MS) can be obtained by expanding the technique to include tandem mass spectrometry (ESI-MS/MS). For dyes of unknown origin, the ESI-MS/MS analyses may offer insights into the chemical structure of the compound-information not available from chromatographic techniques alone. This research demonstrates that ESI-MS is viable as a sensitive technique for distinguishing dye constituents extracted from a minute amount of trace-fiber evidence. A protocol is suggested to establish/refute the proposition that two fibers--one of which is available in minute quantity only--are of the same origin.
ERIC Educational Resources Information Center
Mangina, Eleni; Kilbride, John
2008-01-01
The research presented in this paper is an examination of the applicability of IUI techniques in an online e-learning environment. In particular we make use of user modeling techniques, information retrieval and extraction mechanisms and collaborative filtering methods. The domains of e-learning, web-based training and instruction and intelligent…
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
PKDE4J: Entity and relation extraction for public knowledge discovery.
Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young
2015-10-01
Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. Copyright © 2015 Elsevier Inc. All rights reserved.
Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko
2017-12-28
Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP generally depends on the selection of the frequency bands to a great extent. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize the information from all the available channels for effectively selecting the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band has been introduced that cover the wide frequency band (7-30 Hz) and two different types of features are extracted using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I and BCI Competition IV dataset IIb, and it outperformed all other competing methods achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. Introducing a wide sub-band and using mutual information for selecting the most discriminative sub-bands, the proposed method shows improvement in motor imagery EEG signal classification.
Direct Estimation of Structure and Motion from Multiple Frames
1990-03-01
sequential frames in an image sequence. As a consequence, the information that can be extracted from a single optical flow field is limited to a snapshot of...researchers have developed techniques that extract motion and structure inform.4tion without computation of the optical flow. Best known are the "direct...operated iteratively on a sequence of images to recover structure. It required feature extraction and matching. Broida and Chellappa [9] suggested the use of
NASA Technical Reports Server (NTRS)
Brackett, Robert A.; Arvidson, Raymond E.
1993-01-01
A technique is presented that allows extraction of compositional and textural information from visible, near and thermal infrared remotely sensed data. Using a library of both emissivity and reflectance spectra, endmember abundances and endmember thermal inertias are extracted from AVIRIS (Airborne Visible and Infrared Imaging Spectrometer) and TIMS (Thermal Infrared Mapping Spectrometer) data over Lunar Crater Volcanic Field, Nevada, using a dual inversion. The inversion technique is motivated by upcoming Mars Observer data and the need for separation of composition and texture parameters from sub pixel mixtures of bedrock and dust. The model employed offers the opportunity to extract compositional and textural information for a variety of endmembers within a given pixel. Geologic inferences concerning grain size, abundance, and source of endmembers can be made directly from the inverted data. These parameters are of direct relevance to Mars exploration, both for Mars Observer and for follow-on missions.
Mašković, Pavle Z; Veličković, Vesna; Đurović, Saša; Zeković, Zoran; Radojković, Marija; Cvetanović, Aleksandra; Švarc-Gajić, Jaroslava; Mitić, Milan; Vujić, Jelena
2018-01-01
Lavatera thuringiaca L. is herbaceous perennial plant from Malvaceae family, which is known for its biological activity and richness in polyphenolic compounds. Despite this, the information regarding the biological activity and chemical profile is still insufficient. Aim of this study was to investigate biological potential and chemical profile of Lavatera thuringiaca L., as well as influence of applied extraction technique on them. Two conventional and four non-conventional extraction techniques were applied in order to obtain extracts rich in bioactive compound. Extracts were further tested for total phenolics, flavonoids, condensed tannins, gallotannins and anthocyanins contents using spectrophotometric assays. Polyphenolic profile was established using HPLC-DAD analysis. Biological activity was investigated regarding antioxidant, cytotoxic and antibacterial activities. Four antioxidant assays were applied as well as three different cell lines for cytotoxic and fifteen bacterial strain for antibacterial activity. Results showed that subcritical water extraction (SCW) dominated over the other extraction techniques, where SCW extract exhibited the highest biological activity. Study indicates that plant Lavatera thuringiaca L. may be used as a potential source of biologically compounds. Copyright © 2017 Elsevier GmbH. All rights reserved.
Can we replace curation with information extraction software?
Karp, Peter D
2016-01-01
Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs.Database URL. © The Author(s) 2016. Published by Oxford University Press.
PLAN2L: a web tool for integrated text mining and literature-based bioentity relation extraction.
Krallinger, Martin; Rodriguez-Penagos, Carlos; Tendulkar, Ashish; Valencia, Alfonso
2009-07-01
There is an increasing interest in using literature mining techniques to complement information extracted from annotation databases or generated by bioinformatics applications. Here we present PLAN2L, a web-based online search system that integrates text mining and information extraction techniques to access systematically information useful for analyzing genetic, cellular and molecular aspects of the plant model organism Arabidopsis thaliana. Our system facilitates a more efficient retrieval of information relevant to heterogeneous biological topics, from implications in biological relationships at the level of protein interactions and gene regulation, to sub-cellular locations of gene products and associations to cellular and developmental processes, i.e. cell cycle, flowering, root, leaf and seed development. Beyond single entities, also predefined pairs of entities can be provided as queries for which literature-derived relations together with textual evidences are returned. PLAN2L does not require registration and is freely accessible at http://zope.bioinfo.cnio.es/plan2l.
Using text mining techniques to extract phenotypic information from the PhenoCHF corpus
2015-01-01
Background Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. Methods To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Results Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. Conclusions PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single disease, the promising results achieved can stimulate further work into the extraction of phenotypic information for other diseases. The PhenoCHF annotation guidelines and annotations are publicly available at https://code.google.com/p/phenochf-corpus. PMID:26099853
Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.
Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia
2015-01-01
Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single disease, the promising results achieved can stimulate further work into the extraction of phenotypic information for other diseases. The PhenoCHF annotation guidelines and annotations are publicly available at https://code.google.com/p/phenochf-corpus.
PREDOSE: A Semantic Web Platform for Drug Abuse Epidemiology using Social Media
Cameron, Delroy; Smith, Gary A.; Daniulaityte, Raminta; Sheth, Amit P.; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z.; Falck, Russel
2013-01-01
Objectives The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel Semantic Web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO) (pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC). A combination of lexical, pattern-based and semantics-based techniques is used together with the domain knowledge to extract fine-grained semantic information from UGC. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Methods Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, routes of administration, etc. The DAO is also used to help recognize three types of data, namely: 1) entities, 2) relationships and 3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information from UGC, and querying, search, trend analysis and overall content analysis of social media related to prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. Results A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. Extracted semantic information is currently in use in an online discovery support system, by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. Conclusion A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in future. PMID:23892295
Extraction of actionable information from crowdsourced disaster data.
Kiatpanont, Rungsun; Tanlamai, Uthai; Chongstitvatana, Prabhas
Natural disasters cause enormous damage to countries all over the world. To deal with these common problems, different activities are required for disaster management at each phase of the crisis. There are three groups of activities as follows: (1) make sense of the situation and determine how best to deal with it, (2) deploy the necessary resources, and (3) harmonize as many parties as possible, using the most effective communication channels. Current technological improvements and developments now enable people to act as real-time information sources. As a result, inundation with crowdsourced data poses a real challenge for a disaster manager. The problem is how to extract the valuable information from a gigantic data pool in the shortest possible time so that the information is still useful and actionable. This research proposed an actionable-data-extraction process to deal with the challenge. Twitter was selected as a test case because messages posted on Twitter are publicly available. Hashtag, an easy and very efficient technique, was also used to differentiate information. A quantitative approach to extract useful information from the tweets was supported and verified by interviews with disaster managers from many leading organizations in Thailand to understand their missions. The information classifications extracted from the collected tweets were first performed manually, and then the tweets were used to train a machine learning algorithm to classify future tweets. One particularly useful, significant, and primary section was the request for help category. The support vector machine algorithm was used to validate the results from the extraction process of 13,696 sample tweets, with over 74 percent accuracy. The results confirmed that the machine learning technique could significantly and practically assist with disaster management by dealing with crowdsourced data.
Analysis of Technique to Extract Data from the Web for Improved Performance
NASA Astrophysics Data System (ADS)
Gupta, Neena; Singh, Manish
2010-11-01
The World Wide Web rapidly guides the world into a newly amazing electronic world, where everyone can publish anything in electronic form and extract almost all the information. Extraction of information from semi structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts the records from the HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze method for data extraction OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.
NASA Astrophysics Data System (ADS)
Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang
2005-04-01
Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient"s medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.
NASA Astrophysics Data System (ADS)
Sierra, Heidy; Brooks, Dana; Dimarzio, Charles
2010-07-01
The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
An Expertise Recommender using Web Mining
NASA Technical Reports Server (NTRS)
Joshi, Anupam; Chandrasekaran, Purnima; ShuYang, Michelle; Ramakrishnan, Ramya
2001-01-01
This report explored techniques to mine web pages of scientists to extract information regarding their expertise, build expertise chains and referral webs, and semi automatically combine this information with directory information services to create a recommender system that permits query by expertise. The approach included experimenting with existing techniques that have been reported in research literature in recent past , and adapted them as needed. In addition, software tools were developed to capture and use this information.
Usability-driven pruning of large ontologies: the case of SNOMED CT.
López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan
2012-06-01
To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
Extraction and fusion of spectral parameters for face recognition
NASA Astrophysics Data System (ADS)
Boisier, B.; Billiot, B.; Abdessalem, Z.; Gouton, P.; Hardeberg, J. Y.
2011-03-01
Many methods have been developed in image processing for face recognition, especially in recent years with the increase of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range of the electromagnetic spectrum. The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques used take advantage of the different spectral ranges, the visible, optical infrared and thermal infrared, by either combining them or analyzing them separately in order to extract the most appropriate information for face recognition. We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible Spectrum.
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
Spatial-spectral preprocessing for endmember extraction on GPU's
NASA Astrophysics Data System (ADS)
Jimenez, Luis I.; Plaza, Javier; Plaza, Antonio; Li, Jun
2016-10-01
Spectral unmixing is focused in the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. Mainly focused on the spectral information contained in the hyperspectral images, endmember extraction techniques have recently included spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including the spectral-spatial endmember extraction (SSEE) where, within a preprocessing step in the technique, both sources of information are extracted from the hyperspectral image and equally used for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) local eigenvectors calculation in each sub-region in which the original hyperspectral image is divided; 2) computation of the maxima and minima projection of all eigenvectors over the entire hyperspectral image in order to obtain a candidates pixels set; 3) expansion and averaging of the signatures of the candidate set; 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA) or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest- generation hardware accelerators such as commodity graphics processing units (GPUs) offer a good chance for improving the computational performance in this context. In this paper, we develop two different implementations of the SSEE algorithm using GPUs. Both are based on the eigenvectors computation within each sub-region of the first step, one using the singular value decomposition (SVD) and another one using principal component analysis (PCA). Based on our experiments with hyperspectral data sets, high computational performance is observed in both cases.
Oertel, Peter; Bergmann, Andreas; Fischer, Sina; Trefz, Phillip; Küntzel, Anne; Reinhold, Petra; Köhler, Heike; Schubert, Jochen K; Miekisch, Wolfram
2018-05-14
Volatile organic compounds (VOCs) emitted from in vitro cultures may reveal information on species and metabolism. Owing to low nmol L -1 concentration ranges, pre-concentration techniques are required for gas chromatography-mass spectrometry (GC-MS) based analyses. This study was intended to compare the efficiency of established micro-extraction techniques - solid-phase micro-extraction (SPME) and needle-trap micro-extraction (NTME) - for the analysis of complex VOC patterns. For SPME, a 75 μm Carboxen®/polydimethylsiloxane fiber was used. The NTME needle was packed with divinylbenzene, Carbopack X and Carboxen 1000. The headspace was sampled bi-directionally. Seventy-two VOCs were calibrated by reference standard mixtures in the range of 0.041-62.24 nmol L -1 by means of GC-MS. Both pre-concentration methods were applied to profile VOCs from cultures of Mycobacterium avium ssp. paratuberculosis. Limits of detection ranged from 0.004 to 3.93 nmol L -1 (median = 0.030 nmol L -1 ) for NTME and from 0.001 to 5.684 nmol L -1 (median = 0.043 nmol L -1 ) for SPME. NTME showed advantages in assessing polar compounds such as alcohols. SPME showed advantages in reproducibility but disadvantages in sensitivity for N-containing compounds. Micro-extraction techniques such as SPME and NTME are well suited for trace VOC profiling over cultures if the limitations of each technique is taken into account. Copyright © 2018 John Wiley & Sons, Ltd.
Digital image processing for information extraction.
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1973-01-01
The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
X-ray phase contrast tomography by tracking near field speckle
Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal
2015-01-01
X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
ERIC Educational Resources Information Center
Chowdhury, Gobinda G.
2003-01-01
Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…
PREDOSE: a semantic web platform for drug abuse epidemiology using social media.
Cameron, Delroy; Smith, Gary A; Daniulaityte, Raminta; Sheth, Amit P; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z; Falck, Russel
2013-12-01
The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel semantic web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO--pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC), through combination of lexical, pattern-based and semantics-based techniques. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, and routes of administration. The DAO is also used to help recognize three types of data, namely: (1) entities, (2) relationships and (3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information, which facilitate search, trend analysis and overall content analysis using social media on prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. Extracted semantic information is currently in use in an online discovery support system, by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in future. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Martin, Gabriel; Gonzalez-Ruiz, Vicente; Plaza, Antonio; Ortiz, Juan P.; Garcia, Inmaculada
2010-07-01
Lossy hyperspectral image compression has received considerable interest in recent years due to the extremely high dimensionality of the data. However, the impact of lossy compression on spectral unmixing techniques has not been widely studied. These techniques characterize mixed pixels (resulting from insufficient spatial resolution) in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. This paper focuses on the impact of JPEG2000-based lossy compression of hyperspectral images on the quality of the endmembers extracted by different algorithms. The three considered algorithms are the orthogonal subspace projection (OSP), which uses only spatial information, and the automatic morphological endmember extraction (AMEE) and spatial spectral endmember extraction (SSEE), which integrate both spatial and spectral information in the search for endmembers. The impact of compression on the resulting abundance estimation based on the endmembers derived by different methods is also substantiated. Experimental results are conducted using a hyperspectral data set collected by NASA Jet Propulsion Laboratory over the Cuprite mining district in Nevada. The experimental results are quantitatively analyzed using reference information available from U.S. Geological Survey, resulting in recommendations to specialists interested in applying endmember extraction and unmixing algorithms to compressed hyperspectral data.
Construction of Green Tide Monitoring System and Research on its Key Techniques
NASA Astrophysics Data System (ADS)
Xing, B.; Li, J.; Zhu, H.; Wei, P.; Zhao, Y.
2018-04-01
As a kind of marine natural disaster, Green Tide has been appearing every year along the Qingdao Coast, bringing great loss to this region, since the large-scale bloom in 2008. Therefore, it is of great value to obtain the real time dynamic information about green tide distribution. In this study, methods of optical remote sensing and microwave remote sensing are employed in Green Tide Monitoring Research. A specific remote sensing data processing flow and a green tide information extraction algorithm are designed, according to the optical and microwave data of different characteristics. In the aspect of green tide spatial distribution information extraction, an automatic extraction algorithm of green tide distribution boundaries is designed based on the principle of mathematical morphology dilation/erosion. And key issues in information extraction, including the division of green tide regions, the obtaining of basic distributions, the limitation of distribution boundary, and the elimination of islands, have been solved. The automatic generation of green tide distribution boundaries from the results of remote sensing information extraction is realized. Finally, a green tide monitoring system is built based on IDL/GIS secondary development in the integrated environment of RS and GIS, achieving the integration of RS monitoring and information extraction.
Application of wavelet techniques for cancer diagnosis using ultrasound images: A Review.
Sudarshan, Vidya K; Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chandran, Vinod; Molinari, Filippo; Fujita, Hamido; Ng, Kwan Hoong
2016-02-01
Ultrasound is an important and low cost imaging modality used to study the internal organs of human body and blood flow through blood vessels. It uses high frequency sound waves to acquire images of internal organs. It is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different echoes for ultrasound. Hence, it provides useful information about the potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected with speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from isolated tumor region for diagnosis. This information can be revealed using a scale space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low pass and high pass filters. These filters help to identify the detail or sudden changes in intensity in the image. These changes are reflected in the wavelet coefficients. Various texture, statistical and image based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features to discriminate normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer using ultrasound images. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ford, Lauren; Henderson, Robert L; Rayner, Christopher M; Blackburn, Richard S
2017-03-03
Madder (Rubia tinctorum L.) has been widely used as a red dye throughout history. Acid-sensitive colorants present in madder, such as glycosides (lucidin primeveroside, ruberythric acid, galiosin) and sensitive aglycons (lucidin), are degraded in the textile back extraction process; in previous literature these sensitive molecules are either absent or present in only low concentrations due to the use of acid in typical textile back extraction processes. Anthraquinone aglycons alizarin and purpurin are usually identified in analysis following harsh back extraction methods, such those using solvent mixtures with concentrated hydrochloric acid at high temperatures. Use of softer extraction techniques potentially allows for dye components present in madder to be extracted without degradation, which can potentially provide more information about the original dye profile, which varies significantly between madder varieties, species and dyeing technique. Herein, a softer extraction method involving aqueous glucose solution was developed and compared to other back extraction techniques on wool dyed with root extract from different varieties of Rubia tinctorum. Efficiencies of the extraction methods were analysed by HPLC coupled with diode array detection. Acidic literature methods were evaluated and they generally caused hydrolysis and degradation of the dye components, with alizarin, lucidin, and purpurin being the main compounds extracted. In contrast, extraction in aqueous glucose solution provides a highly effective method for extraction of madder dyed wool and is shown to efficiently extract lucidin primeveroside and ruberythric acid without causing hydrolysis and also extract aglycons that are present due to hydrolysis during processing of the plant material. Glucose solution is a favourable extraction medium due to its ability to form extensive hydrogen bonding with glycosides present in madder, and displace them from the fibre. This new glucose method offers an efficient process that preserves these sensitive molecules and is a step-change in analysis of madder dyed textiles as it can provide further information about historical dye preparation and dyeing processes that current methods cannot. The method also efficiently extracts glycosides in artificially aged samples, making it applicable for museum textile artefacts. Copyright © 2017 Elsevier B.V. All rights reserved.
Usability-driven pruning of large ontologies: the case of SNOMED CT
Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan
2012-01-01
Objectives To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217
NASA Astrophysics Data System (ADS)
Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.
2016-05-01
Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.
An Electronic Engineering Curriculum Design Based on Concept-Mapping Techniques
ERIC Educational Resources Information Center
Toral, S. L.; Martinez-Torres, M. R.; Barrero, F.; Gallardo, S.; Duran, M. J.
2007-01-01
Curriculum design is a concern in European Universities as they face the forthcoming European Higher Education Area (EHEA). This process can be eased by the use of scientific tools such as Concept-Mapping Techniques (CMT) that extract and organize the most relevant information from experts' experience using statistics techniques, and helps a…
Conception of Self-Construction Production Scheduling System
NASA Astrophysics Data System (ADS)
Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru
With the high speed innovation of information technology, many production scheduling systems have been developed. However, a lot of customization according to individual production environment is required, and then a large investment for development and maintenance is indispensable. Therefore now the direction to construct scheduling systems should be changed. The final objective of this research aims at developing a system which is built by it extracting the scheduling technique automatically through the daily production scheduling work, so that an investment will be reduced. This extraction mechanism should be applied for various production processes for the interoperability. Using the master information extracted by the system, production scheduling operators can be supported to accelerate the production scheduling work easily and accurately without any restriction of scheduling operations. By installing this extraction mechanism, it is easy to introduce scheduling system without a lot of expense for customization. In this paper, at first a model for expressing a scheduling problem is proposed. Then the guideline to extract the scheduling information and use the extracted information is shown and some applied functions are also proposed based on it.
E&V (Evaluation and Validation) Reference Manual, Version 1.0.
1988-07-01
references featured in the Reference Manual. G-05097a GENERAL REFERENCE INFORMATION EXTRACTED , FROM * INDEXES AND CROSS REFERENCES CHAPTER 4...at E&V techniques through many different paths, and provides a means to extract useful information along the way. /^c^^s; /r^ ^yr*•**•»» * L...electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski . AFWAUAAAF, Wright Patterson AFB, OH 45433-6543. ES-2
Kong, Jessica; Giridharagopal, Rajiv; Harrison, Jeffrey S; Ginger, David S
2018-05-31
Correlating nanoscale chemical specificity with operational physics is a long-standing goal of functional scanning probe microscopy (SPM). We employ a data analytic approach combining multiple microscopy modes, using compositional information in infrared vibrational excitation maps acquired via photoinduced force microscopy (PiFM) with electrical information from conductive atomic force microscopy. We study a model polymer blend comprising insulating poly(methyl methacrylate) (PMMA) and semiconducting poly(3-hexylthiophene) (P3HT). We show that PiFM spectra are different from FTIR spectra, but can still be used to identify local composition. We use principal component analysis to extract statistically significant principal components and principal component regression to predict local current and identify local polymer composition. In doing so, we observe evidence of semiconducting P3HT within PMMA aggregates. These methods are generalizable to correlated SPM data and provide a meaningful technique for extracting complex compositional information that are impossible to measure from any one technique.
Information Extraction from Unstructured Text for the Biodefense Knowledge Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samatova, N F; Park, B; Krishnamurthy, R
2005-04-29
The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow an early detection of emerging biological threats to homeland security. It requires highly structured information extracted from variety of data sources. However, the quantity of new and vital information available from every day sources cannot be assimilated by hand, and therefore reliable high-throughput information extraction techniques are much anticipated. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces ofmore » our effort integrated in the system: key phrase extraction and semantic tagging. Whereas two key phrase extraction technologies developed during the course of project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. Also we are enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.« less
Pattern-Based Extraction of Argumentation from the Scientific Literature
ERIC Educational Resources Information Center
White, Elizabeth K.
2010-01-01
As the number of publications in the biomedical field continues its exponential increase, techniques for automatically summarizing information from this body of literature have become more diverse. In addition, the targets of summarization have become more subtle; initial work focused on extracting the factual assertions from full-text papers,…
Shen, Weijian; Xu, Jinzhong; Yang, Wenquan; Shen, Chongyu; Zhao, Zengyun; Ding, Tao; Wu, Bin
2007-09-01
An analytical method of solid phase extraction-gas chromatography-mass spectrometry with two different ionization techniques was established for simultaneous determination of 12 acetanilide herbicide residues in tea-leaves. Herbicides were extracted from tea-leaf samples with ethyl acetate. The extract was cleaned-up on an active carbon SPE column connected to a Florisil SPE column. Analytical screening was determined by the technique of gas chromatography (GC)-mass spectrometry (MS) in the selected ion monitoring (SIM) mode with either electron impact ionization (EI) or negative chemical ionization (NCI). It is reliable and stable that the recoveries of all herbicides were in the range from 50% to 110% at three spiked levels, 10 microg/kg, 20 microg/kg and 40 microg/kg, and the relative standard deviations (RSDs) were no more than 10.9%. The two different ionization techniques are complementary as more ion fragmentation information can be obtained from the EI mode while more molecular ion information from the NCI mode. By comparison of the two techniques, the selectivity of NCI-SIM was much better than that of EI-SIM method. The sensitivities of the both techniques were high, the limit of quantitative (LOQ) for each herbicide was no more than 2.0 microg/kg, and the limit of detection (LOD) with NCI-SIM technique was much lower than that of EI-SIM when analyzing herbicides with several halogen atoms in the molecule.
Enriching a document collection by integrating information extraction and PDF annotation
NASA Astrophysics Data System (ADS)
Powley, Brett; Dale, Robert; Anisimoff, Ilya
2009-01-01
Modern digital libraries offer all the hyperlinking possibilities of the World Wide Web: when a reader finds a citation of interest, in many cases she can now click on a link to be taken to the cited work. This paper presents work aimed at providing the same ease of navigation for legacy PDF document collections that were created before the possibility of integrating hyperlinks into documents was ever considered. To achieve our goal, we need to carry out two tasks: first, we need to identify and link citations and references in the text with high reliability; and second, we need the ability to determine physical PDF page locations for these elements. We demonstrate the use of a high-accuracy citation extraction algorithm which significantly improves on earlier reported techniques, and a technique for integrating PDF processing with a conventional text-stream based information extraction pipeline. We demonstrate these techniques in the context of a particular document collection, this being the ACL Anthology; but the same approach can be applied to other document sets.
NASA Astrophysics Data System (ADS)
Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.
2017-01-01
Multispectral data and hyperspectral data acquired from satellite sensor have the ability in detecting various objects on the earth ranging from low scale to high scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification process. Applying the most suited model for this data mining is still challenging because there are issues regarding accuracy and computational cost. This research aim is to develop a better understanding regarding object feature extraction and classification applied for satellite image by systematically reviewing related recent research projects. A method used in this research is based on PRISMA statement. After deriving important points from trusted sources, pixel based and texture-based feature extraction techniques are promising technique to be analyzed more in recent development of feature extraction and classification.
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before selecting a feature based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
NASA Technical Reports Server (NTRS)
Thorley, G. A.; Draeger, W. C.; Lauer, D. T.; Lent, J.; Roberts, E.
1971-01-01
The four problem are as being investigated are: (1) determination of the feasibility of providing the resource manager with operationally useful information through the use of remote sensing techniques; (2) definition of the spectral characteristics of earth resources and the optimum procedures for calibrating tone and color characteristics of multispectral imagery (3) determination of the extent to which humans can extract useful earth resource information through remote sensing imagery; (4) determination of the extent to which automatic classification and data processing can extract useful information from remote sensing data.
Natural Language Processing in Radiology: A Systematic Review.
Pons, Ewoud; Braun, Loes M M; Hunink, M G Myriam; Kors, Jan A
2016-05-01
Radiological reporting has generated large quantities of digital content within the electronic health record, which is potentially a valuable source of information for improving clinical care and supporting research. Although radiology reports are stored for communication and documentation of diagnostic imaging, harnessing their potential requires efficient and automated information extraction: they exist mainly as free-text clinical narrative, from which it is a major challenge to obtain structured data. Natural language processing (NLP) provides techniques that aid the conversion of text into a structured representation, and thus enables computers to derive meaning from human (ie, natural language) input. Used on radiology reports, NLP techniques enable automatic identification and extraction of information. By exploring the various purposes for their use, this review examines how radiology benefits from NLP. A systematic literature search identified 67 relevant publications describing NLP methods that support practical applications in radiology. This review takes a close look at the individual studies in terms of tasks (ie, the extracted information), the NLP methodology and tools used, and their application purpose and performance results. Additionally, limitations, future challenges, and requirements for advancing NLP in radiology will be discussed. (©) RSNA, 2016 Online supplemental material is available for this article.
Text mining and its potential applications in systems biology.
Ananiadou, Sophia; Kell, Douglas B; Tsujii, Jun-ichi
2006-12-01
With biomedical literature increasing at a rate of several thousand papers per week, it is impossible to keep abreast of all developments; therefore, automated means to manage the information overload are required. Text mining techniques, which involve the processes of information retrieval, information extraction and data mining, provide a means of solving this. By adding meaning to text, these techniques produce a more structured analysis of textual knowledge than simple word searches, and can provide powerful tools for the production and analysis of systems biology models.
Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications
NASA Technical Reports Server (NTRS)
Edwards, David E.; Haimes, Robert
1999-01-01
An interactive visualization system pV3 is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.
Pattern recognition of satellite cloud imagery for improved weather prediction
NASA Technical Reports Server (NTRS)
Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.
1986-01-01
The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xueyun; Wojcik, Roza; Zhang, Xing
Ion mobility spectrometry (IMS) is a widely used analytical technique for rapid molecular separations in the gas phase. IMS alone is useful, but its coupling with mass spectrometry (MS) and front-end separations has been extremely beneficial for increasing measurement sensitivity, peak capacity of complex mixtures, and the scope of molecular information in biological and environmental sample analyses. Multiple studies in disease screening and environmental evaluations have even shown these IMS-based multidimensional separations extract information not possible with each technique individually. This review highlights 3-dimensional separations using IMS-MS in conjunction with a range of front-end techniques, such as gas chromatography (GC),more » supercritical fluid chromatography (SFC), liquid chromatography (LC), solid phase extractions (SPE), capillary electrophoresis (CE), field asymmetric ion mobility spectrometry (FAIMS), and microfluidic devices. The origination, current state, various applications, and future capabilities for these multidimensional approaches are described to provide insight into the utility and potential of each technique.« less
Research on Crowdsourcing Emergency Information Extraction of Based on Events' Frame
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi
2018-01-01
At present, the common information extraction method cannot extract the structured emergency event information accurately; the general information retrieval tool cannot completely identify the emergency geographic information; these ways also do not have an accurate assessment of these results of distilling. So, this paper proposes an emergency information collection technology based on event framework. This technique is to solve the problem of emergency information picking. It mainly includes emergency information extraction model (EIEM), complete address recognition method (CARM) and the accuracy evaluation model of emergency information (AEMEI). EIEM can be structured to extract emergency information and complements the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and the shortest path algorithm and allows the toponomy pieces to be joined as a full address. AEMEI analyzes the results of the emergency event and summarizes the advantages and disadvantages of the event framework. Experiments show that event frame technology can solve the problem of emergency information drawing and provides reference cases for other applications. When the emergency disaster is about to occur, the relevant departments query emergency's data that has occurred in the past. They can make arrangements ahead of schedule which defense and reducing disaster. The technology decreases the number of casualties and property damage in the country and world. This is of great significance to the state and society.
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an onlinemore » clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.« less
Hoogendoorn, Mark; Szolovits, Peter; Moons, Leon M G; Numans, Mattijs E
2016-05-01
Machine learning techniques can be used to extract predictive models for diseases from electronic medical records (EMRs). However, the nature of EMRs makes it difficult to apply off-the-shelf machine learning techniques while still exploiting the rich content of the EMRs. In this paper, we explore the usage of a range of natural language processing (NLP) techniques to extract valuable predictors from uncoded consultation notes and study whether they can help to improve predictive performance. We study a number of existing techniques for the extraction of predictors from the consultation notes, namely a bag of words based approach and topic modeling. In addition, we develop a dedicated technique to match the uncoded consultation notes with a medical ontology. We apply these techniques as an extension to an existing pipeline to extract predictors from EMRs. We evaluate them in the context of predictive modeling for colorectal cancer (CRC), a disease known to be difficult to diagnose before performing an endoscopy. Our results show that we are able to extract useful information from the consultation notes. The predictive performance of the ontology-based extraction method moves significantly beyond the benchmark of age and gender alone (area under the receiver operating characteristic curve (AUC) of 0.870 versus 0.831). We also observe more accurate predictive models by adding features derived from processing the consultation notes compared to solely using coded data (AUC of 0.896 versus 0.882) although the difference is not significant. The extracted features from the notes are shown be equally predictive (i.e. there is no significant difference in performance) compared to the coded data of the consultations. It is possible to extract useful predictors from uncoded consultation notes that improve predictive performance. Techniques linking text to concepts in medical ontologies to derive these predictors are shown to perform best for predicting CRC in our EMR dataset. Copyright © 2016 Elsevier B.V. All rights reserved.
Rapid System to Quantitatively Characterize the Airborne Microbial Community
NASA Technical Reports Server (NTRS)
Macnaughton, Sarah J.
1998-01-01
Bioaerosols have been linked to a wide range of different allergies and respiratory illnesses. Currently, microorganism culture is the most commonly used method for exposure assessment. Such culture techniques, however, generally fail to detect between 90-99% of the actual viable biomass. Consequently, an unbiased technique for detecting airborne microorganisms is essential. In this Phase II proposal, a portable air sampling device his been developed for the collection of airborne microbial biomass from indoor (and outdoor) environments. Methods were evaluated for extracting and identifying lipids that provide information on indoor air microbial biomass, and automation of these procedures was investigated. Also, techniques to automate the extraction of DNA were explored.
Valero, E; Sanz, J; Martínez-Castro, I
2001-06-01
Direct thermal desorption (DTD) has been used as a technique for extracting volatile components of cheese as a preliminary step to their gas chromatographic (GC) analysis. In this study, it is applied to different cheese varieties: Camembert, blue, Chaumes, and La Serena. Volatiles are also extracted using other techniques such as simultaneous distillation-extraction and dynamic headspace. Separation and identification of the cheese components are carried out by GC-mass spectrometry. Approximately 100 compounds are detected in the examined cheeses. The described results show that DTD is fast, simple, and easy to automate; requires only a small amount of sample (approximately 50 mg); and affords quantitative information about the main groups of compounds present in cheeses.
Considering context: reliable entity networks through contextual relationship extraction
NASA Astrophysics Data System (ADS)
David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.
2016-05-01
Existing information extraction techniques can only partially address the problem of exploiting unreadable-large amounts text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions of events, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But the simple subset of available information that existing tools can extract from text is only useful to a small set of users and problems. Automated systems need to find and separate information based on what is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated and nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text. This is made possible through 1) More precise event and relationship interpretation, 2) More detailed information about extracted events and relationships, and 3) More reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.
Multispectral system analysis through modeling and simulation
NASA Technical Reports Server (NTRS)
Malila, W. A.; Gleason, J. M.; Cicone, R. C.
1977-01-01
The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in Landsat data, examining system design and operational configuration, and development of information extraction techniques.
Multispectral system analysis through modeling and simulation
NASA Technical Reports Server (NTRS)
Malila, W. A.; Gleason, J. M.; Cicone, R. C.
1977-01-01
The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in LANDSAT data, examining system design and operational configuration, and development of information extraction techniques.
Using decision-tree classifier systems to extract knowledge from databases
NASA Technical Reports Server (NTRS)
St.clair, D. C.; Sabharwal, C. L.; Hacke, Keith; Bond, W. E.
1990-01-01
One difficulty in applying artificial intelligence techniques to the solution of real world problems is that the development and maintenance of many AI systems, such as those used in diagnostics, require large amounts of human resources. At the same time, databases frequently exist which contain information about the process(es) of interest. Recently, efforts to reduce development and maintenance costs of AI systems have focused on using machine learning techniques to extract knowledge from existing databases. Research is described in the area of knowledge extraction using a class of machine learning techniques called decision-tree classifier systems. Results of this research suggest ways of performing knowledge extraction which may be applied in numerous situations. In addition, a measurement called the concept strength metric (CSM) is described which can be used to determine how well the resulting decision tree can differentiate between the concepts it has learned. The CSM can be used to determine whether or not additional knowledge needs to be extracted from the database. An experiment involving real world data is presented to illustrate the concepts described.
Information sciences experiment system
NASA Technical Reports Server (NTRS)
Katzberg, Stephen J.; Murray, Nicholas D.; Benz, Harry F.; Bowker, David E.; Hendricks, Herbert D.
1990-01-01
The rapid expansion of remote sensing capability over the last two decades will take another major leap forward with the advent of the Earth Observing System (Eos). An approach is presented that will permit experiments and demonstrations in onboard information extraction. The approach is a non-intrusive, eavesdropping mode in which a small amount of spacecraft real estate is allocated to an onboard computation resource. How such an approach allows the evaluation of advanced technology in the space environment, advanced techniques in information extraction for both Earth science and information science studies, direct to user data products, and real-time response to events, all without affecting other on-board instrumentation is discussed.
NASA Astrophysics Data System (ADS)
Proux, Denys; Segond, Frédérique; Gerbier, Solweig; Metzger, Marie Hélène
Hospital Acquired Infections (HAI) is a real burden for doctors and risk surveillance experts. The impact on patients' health and related healthcare cost is very significant and a major concern even for rich countries. Furthermore required data to evaluate the threat is generally not available to experts and that prevents from fast reaction. However, recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Patterns Detection in documents and Decision Support Systems allow now to address this problem.
NASA Technical Reports Server (NTRS)
Sowers, J.; Mehrotra, R.; Sethi, I. K.
1989-01-01
A method for extracting road boundaries using the monochrome image of a visual road scene is presented. The statistical information regarding the intensity levels present in the image along with some geometrical constraints concerning the road are the basics of this approach. Results and advantages of this technique compared to others are discussed. The major advantages of this technique, when compared to others, are its ability to process the image in only one pass, to limit the area searched in the image using only knowledge concerning the road geometry and previous boundary information, and dynamically adjust for inconsistencies in the located boundary information, all of which helps to increase the efficacy of this technique.
Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek
2013-12-20
Solid phase microextraction find increasing applications in the sample preparation step before chromatographic determination of analytes in samples with a complex composition. These techniques allow for integrating several operations, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument and the isolation of analytes from sample matrix. In this work the information about novel methodological and instrumental solutions in relation to different variants of solid phase extraction techniques, solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE) and magnetic solid phase extraction (MSPE) is presented, including practical applications of these techniques and a critical discussion about their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention was paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia, polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during the sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
Bending of an Aspirated Pin During Rigid Bronchoscopy: Safeguards and Pitfalls.
Elsayed, Abdelrahman A A; Mansour, Albaraa A; Amin, Ahmed A A; Ahmed, Mohsen S M
2018-04-13
Pin aspiration is a common problem in Muslim countries, where many women wear veils (hijab). This condition is usually treated using either a rigid or a flexible bronchoscope, and yet occasionally requires surgical approach. Pin bending may be necessary to extract impacted pins during the therapeutic rigid bronchoscopy. Medical records of patients who had pins extracted with a bending technique during the period from January 2012 to December 2016 in 1 institution were analyzed. Information on intraoperative and postoperative complications was collected. Between 2012 and 2016, 315 rigid bronchoscopies were performed for pin extraction; in 38 cases, bending of the pin was required for the extraction because they were in a position that did not allow simple extraction. The procedure was successful in cases and there were no major complications. The extraction of visible, distally located or impacted pins can be safely performed by experienced bronchoscopists using the bending technique. Some safeguards and pitfalls must be noted to ensure maximum safety.
Profiling of poorly stratified smoky atmospheres with scanning lidar
Vladimir Kovalev; Cyle Wold; Alexander Petkov; Wei Min Hao
2012-01-01
The multiangle data processing technique is considered based on using the signal measured in zenith (or close to zenith) as a core source for extracting the information about the vertical atmospheric aerosol loading. The multiangle signals are used as the auxiliary data to extract the vertical transmittance profile from the zenith signal. Simulated and experimental...
An automatic method for retrieving and indexing catalogues of biomedical courses.
Maojo, Victor; de la Calle, Guillermo; García-Remesal, Miguel; Bankauskaite, Vaida; Crespo, Jose
2008-11-06
Although there is wide information about Biomedical Informatics education and courses in different Websites, information is usually not exhaustive and difficult to update. We propose a new methodology based on information retrieval techniques for extracting, indexing and retrieving automatically information about educational offers. A web application has been developed to make available such information in an inventory of courses and educational offers.
NASA Astrophysics Data System (ADS)
Sacchi, Elisa; Michelot, Jean-Luc; Pitsch, Helmut; Lalieux, Philippe; Aranyossy, Jean-François
2001-01-01
This paper summarises the results of a comprehensive critical review, initiated by the OECD/NEA "Clay Club," of the extraction techniques available to obtain water and solutes from argillaceous rocks. The paper focuses on the mechanisms involved in the extraction processes, the consequences on the isotopic and chemical composition of the extracted pore water and the attempts made to reconstruct its original composition. Finally, it provides some examples of reliable techniques and information, as a function of the purpose of the geochemical study. Résumé. Cet article résume les résultats d'une synthèse critique d'ensemble, lancée par le OECD/NEA "Clay Club", sur les techniques d'extraction disponibles pour obtenir l'eau et les solutés de roches argileuses. L'article est consacré aux mécanismes impliqués dans les processus d'extraction, aux conséquences sur la composition isotopique et chimique de l'eau porale extraite et aux tentatives faites pour reconstituer sa composition originelle. Finalement, il donne quelques exemples de techniques fiables et d'informations, en fonction du but de l'étude géochimique. Resúmen. Este artículo resume los resultados de una revisión crítica exhaustiva (iniciada por el "Clay Club" OECD/NEA) de las técnicas de extracción disponibles para obtener agua y solutos en rocas arcillosas. El artículo se centra en los mecanismos involucrados en los procesos extractivos, las consecuencias en la composición isotópica y química del agua intersticial extraída, y en los intentos realizados para reconstruir su composición original. Finalmente, se presentan algunos ejemplos de técnicas fiables e información, en función del propósito del estudio geoquímico.
Radiomics: a new application from established techniques
Parekh, Vishwa; Jacobs, Michael A.
2016-01-01
The increasing use of biomarkers in cancer have led to the concept of personalized medicine for patients. Personalized medicine provides better diagnosis and treatment options available to clinicians. Radiological imaging techniques provide an opportunity to deliver unique data on different types of tissue. However, obtaining useful information from all radiological data is challenging in the era of “big data”. Recent advances in computational power and the use of genomics have generated a new area of research termed Radiomics. Radiomics is defined as the high throughput extraction of quantitative imaging features or texture (radiomics) from imaging to decode tissue pathology and creating a high dimensional data set for feature extraction. Radiomic features provide information about the gray-scale patterns, inter-pixel relationships. In addition, shape and spectral properties can be extracted within the same regions of interest on radiological images. Moreover, these features can be further used to develop computational models using advanced machine learning algorithms that may serve as a tool for personalized diagnosis and treatment guidance. PMID:28042608
Information extraction from multi-institutional radiology reports.
Hassanpour, Saeed; Langlotz, Curtis P
2016-01-01
The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method. We also evaluated the generalizability of our approach across different organizations by training and testing our system on data from different organizations. Our results show the efficacy of our machine learning approach in extracting the information model's elements (10-fold cross-validation average performance: precision: 87%, recall: 84%, F1 score: 85%) and its superiority and generalizability compared to the common non-machine learning approach (p-value<0.05). Our machine learning information extraction approach provides an effective automatic method to annotate and extract clinically significant information from a large collection of free text radiology reports. This information extraction system can help clinicians better understand the radiology reports and prioritize their review process. In addition, the extracted information can be used by researchers to link radiology reports to information from other data sources such as electronic health records and the patient's genome. Extracted information also can facilitate disease surveillance, real-time clinical decision support for the radiologist, and content-based image retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets
Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc
2016-01-01
Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916
Automated measurement of birefringence - Development and experimental evaluation of the techniques
NASA Technical Reports Server (NTRS)
Voloshin, A. S.; Redner, A. S.
1989-01-01
Traditional photoelasticity has started to lose its appeal since it requires a well-trained specialist to acquire and interpret results. A spectral-contents-analysis approach may help to revive this old, but still useful technique. Light intensity of the beam passed through the stressed specimen contains all the information necessary to automatically extract the value of retardation. This is done by using a photodiode array to investigate the spectral contents of the light beam. Three different techniques to extract the value of retardation from the spectral contents of the light are discussed and evaluated. An experimental system was built which demonstrates the ability to evaluate retardation values in real time.
The use of analytical sedimentation velocity to extract thermodynamic linkage.
Cole, James L; Correia, John J; Stafford, Walter F
2011-11-01
For 25 years, the Gibbs Conference on Biothermodynamics has focused on the use of thermodynamics to extract information about the mechanism and regulation of biological processes. This includes the determination of equilibrium constants for macromolecular interactions by high precision physical measurements. These approaches further reveal thermodynamic linkages to ligand binding events. Analytical ultracentrifugation has been a fundamental technique in the determination of macromolecular reaction stoichiometry and energetics for 85 years. This approach is highly amenable to the extraction of thermodynamic couplings to small molecule binding in the overall reaction pathway. In the 1980s this approach was extended to the use of sedimentation velocity techniques, primarily by the analysis of tubulin-drug interactions by Na and Timasheff. This transport method necessarily incorporates the complexity of both hydrodynamic and thermodynamic nonideality. The advent of modern computational methods in the last 20 years has subsequently made the analysis of sedimentation velocity data for interacting systems more robust and rigorous. Here we review three examples where sedimentation velocity has been useful at extracting thermodynamic information about reaction stoichiometry and energetics. Approaches to extract linkage to small molecule binding and the influence of hydrodynamic nonideality are emphasized. These methods are shown to also apply to the collection of fluorescence data with the new Aviv FDS. Copyright © 2011 Elsevier B.V. All rights reserved.
The use of analytical sedimentation velocity to extract thermodynamic linkage
Cole, James L.; Correia, John J.; Stafford, Walter F.
2011-01-01
For 25 years, the Gibbs Conference on Biothermodynamics has focused on the use of thermodynamics to extract information about the mechanism and regulation of biological processes. This includes the determination of equilibrium constants for macromolecular interactions by high precision physical measurements. These approaches further reveal thermodynamic linkages to ligand binding events. Analytical ultracentrifugation has been a fundamental technique in the determination of macromolecular reaction stoichiometry and energetics for 85 years. This approach is highly amenable to the extraction of thermodynamic couplings to small molecule binding in the overall reaction pathway. In the 1980’s this approach was extended to the use of sedimentation velocity techniques, primarily by the analysis of tubulin-drug interactions by Na and Timasheff. This transport method necessarily incorporates the complexity of both hydrodynamic and thermodynamic nonideality. The advent of modern computational methods in the last 20 years has subsequently made the analysis of sedimentation velocity data for interacting systems more robust and rigorous. Here we review three examples where sedimentation velocity has been useful at extracting thermodynamic information about reaction stoichiometry and energetics. Approaches to extract linkage to small molecule binding and the influence of hydrodynamic nonideality are emphasized. These methods are shown to also apply to the collection of fluorescence data with the new Aviv FDS. PMID:21703752
ERIC Educational Resources Information Center
Lippert, George
1991-01-01
A lesson plan for soil study utilizes the Tullgren extraction method to illustrate biological concepts. It includes background information, equipment, collection techniques, activities, and references for identification guides about soil fauna. (MCO)
Simultaneous parameter optimization of x-ray and neutron reflectivity data using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra, E-mail: surendra@barc.gov.in; Basu, Saibal
2016-05-23
X-ray and neutron reflectivity are two non destructive techniques which provide a wealth of information on thickness, structure and interracial properties in nanometer length scale. Combination of X-ray and neutron reflectivity is well suited for obtaining physical parameters of nanostructured thin films and superlattices. Neutrons provide a different contrast between the elements than X-rays and are also sensitive to the magnetization depth profile in thin films and superlattices. The real space information is extracted by fitting a model for the structure of the thin film sample in reflectometry experiments. We have applied a Genetic Algorithms technique to extract depth dependentmore » structure and magnetic in thin film and multilayer systems by simultaneously fitting X-ray and neutron reflectivity data.« less
Modern Techniques in Acoustical Signal and Image Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J V
2002-04-04
Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same--to extract the desired information and reject the extraneous, that is, develop a signal processing scheme to achieve thismore » goal. In this paper, we briefly discuss this underlying philosophy from a ''bottom-up'' approach enabling the problem to dictate the solution rather than visa-versa.« less
Image processing and analysis using neural networks for optometry area
NASA Astrophysics Data System (ADS)
Netto, Antonio V.; Ferreira de Oliveira, Maria C.
2002-11-01
In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.
An Estimation Approach to Extract Multimedia Information in Distributed Steganographic Images
2007-07-01
image steganography (DIS) [8] is a new method of concealing secret information in several host images , leaving...distributed image steganography , steganalysis, estimation, image quality matrix 1 Introduction Steganography is a method that hides secret information...used to sufficiently hide a secret image . Another emerging image steganographic technique is referred to as distributed image steganography
Information Fusion - Methods and Aggregation Operators
NASA Astrophysics Data System (ADS)
Torra, Vicenç
Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we will give an overview of such applications considering their three main uses. This is, we consider fusion methods for data preprocessing, model building and information extraction. Some aggregation operators (i.e. particular fusion methods) and their properties are briefly described as well.
Tuberculosis diagnosis support analysis for precarious health information systems.
Orjuela-Cañón, Alvaro David; Camargo Mendoza, Jorge Eliécer; Awad García, Carlos Enrique; Vergara Vela, Erika Paola
2018-04-01
Pulmonary tuberculosis is a world emergency for the World Health Organization. Techniques and new diagnosis tools are important to battle this bacterial infection. There have been many advances in all those fields, but in developing countries such as Colombia, where the resources and infrastructure are limited, new fast and less expensive strategies are increasingly needed. Artificial neural networks are computational intelligence techniques that can be used in this kind of problems and offer additional support in the tuberculosis diagnosis process, providing a tool to medical staff to make decisions about management of subjects under suspicious of tuberculosis. A database extracted from 105 subjects with precarious information of people under suspect of pulmonary tuberculosis was used in this study. Data extracted from sex, age, diabetes, homeless, AIDS status and a variable with clinical knowledge from the medical personnel were used. Models based on artificial neural networks were used, exploring supervised learning to detect the disease. Unsupervised learning was used to create three risk groups based on available information. Obtained results are comparable with traditional techniques for detection of tuberculosis, showing advantages such as fast and low implementation costs. Sensitivity of 97% and specificity of 71% where achieved. Used techniques allowed to obtain valuable information that can be useful for physicians who treat the disease in decision making processes, especially under limited infrastructure and data. Copyright © 2018 Elsevier B.V. All rights reserved.
Lech, T
2016-05-06
Historical parchments in the form of documents, manuscripts, books, or letters, make up a large portion of cultural heritage collections. Their priceless historical value is associated with not only their content, but also the information hidden in the DNA deposited on them. Analyses of ancient DNA (aDNA) retrieved from parchments can be used in various investigations, including, but not limited to, studying their authentication, tracing the development of the culture, diplomacy, and technology, as well as obtaining information on the usage and domestication of animals. This article proposes and verifies a procedure for aDNA recovery from historical parchments and its appropriate preparation for further analyses. This study involved experimental selection of an aDNA extraction method with the highest efficiency and quality of extracted genetic material, from among the multi-stage phenol-chloroform extraction methods, and the modern, column-based techniques that use selective DNA-binding membranes. Moreover, current techniques to amplify entire genetic material were questioned, and the possibility of using mitochondrial DNA for species identification was analyzed. The usefulness of the proposed procedure was successfully confirmed in identification tests of historical parchments dating back to the 13-16th century AD.
The effects of solar incidence angle over digital processing of LANDSAT data
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
A technique to extract the topography modulation component from digital data is described. The enhancement process is based on the fact that the pixel contains two types of information: (1) reflectance variation due to the target; (2) reflectance variation due to the topography. In order to enhance the signal variation due to topography, the technique recommends the extraction from original LANDSAT data of the component resulting from target reflectance. Considering that the role of topographic modulation over the pixel information will vary with solar incidence angle, the results of this technique of digital processing will differ from one season to another, mainly in highly dissected topography. In this context, the effects of solar incidence angle over the topographic modulation technique were evaluated. Two sets of MSS/LANDSAT data, with solar elevation angles varying from 22 to 41 deg were selected to implement the digital processing at the Image-100 System. A secondary watershed (Rio Bocaina) draining into Rio Paraiba do Sul (Sao Paulo State) was selected as a test site. The results showed that the technique used was more appropriate to MSS data acquired under higher Sun elevation angles. Topographic modulation components applied to low Sun elevation angles lessens rather than enhances topography.
Fusion of monocular cues to detect man-made structures in aerial imagery
NASA Technical Reports Server (NTRS)
Shufelt, Jefferey; Mckeown, David M.
1991-01-01
The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.
Data base management system configuration specification. [computer storage devices
NASA Technical Reports Server (NTRS)
Neiers, J. W.
1979-01-01
The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport is also to be extracted from the analysis.
Three-dimensional tracking for efficient fire fighting in complex situations
NASA Astrophysics Data System (ADS)
Akhloufi, Moulay; Rossi, Lucile
2009-05-01
Each year, hundred millions hectares of forests burn causing human and economic losses. For efficient fire fighting, the personnel in the ground need tools permitting the prediction of fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed and permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from YUV and RGB color spaces. Unlike other techniques, our algorithm does not require previous knowledge about the scene. The resulting fire regions are classified into different homogenous zones using clustering techniques. Contours are then extracted and a feature detection algorithm is used to detect interest points like local maxima and corners. Extracted points from stereo images are then used to compute the 3D shape of the fire front. The resulting data permits to build the fire volume. The final model is used to compute important spatial and temporal fire characteristics like: spread dynamics, local orientation, heading direction, etc. Tests conducted on the ground show the efficiency of the proposed scheme. This scheme is being integrated with a fire spread mathematical model in order to predict and anticipate the fire behaviour during fire fighting. Also of interest to fire-fighters, is the proposed automatic segmentation technique that can be used in early detection of fire in complex scenes.
FEX: A Knowledge-Based System For Planimetric Feature Extraction
NASA Astrophysics Data System (ADS)
Zelek, John S.
1988-10-01
Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.
Zheng, Xueyun; Wojcik, Roza; Zhang, Xing; Ibrahim, Yehia M.; Burnum-Johnson, Kristin E.; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Baker, Erin S.
2017-01-01
Ion mobility spectrometry (IMS) is a widely used analytical technique for rapid molecular separations in the gas phase. Though IMS alone is useful, its coupling with mass spectrometry (MS) and front-end separations is extremely beneficial for increasing measurement sensitivity, peak capacity of complex mixtures, and the scope of molecular information available from biological and environmental sample analyses. In fact, multiple disease screening and environmental evaluations have illustrated that the IMS-based multidimensional separations extract information that cannot be acquired with each technique individually. This review highlights three-dimensional separations using IMS-MS in conjunction with a range of front-end techniques, such as gas chromatography, supercritical fluid chromatography, liquid chromatography, solid-phase extractions, capillary electrophoresis, field asymmetric ion mobility spectrometry, and microfluidic devices. The origination, current state, various applications, and future capabilities of these multidimensional approaches are described in detail to provide insight into their uses and benefits. PMID:28301728
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
2016-08-09
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional datamore » cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.« less
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional datamore » cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.« less
Knowledge representation and management: transforming textual information into useful knowledge.
Rassinoux, A-M
2010-01-01
To summarize current outstanding research in the field of knowledge representation and management. Synopsis of the articles selected for the IMIA Yearbook 2010. Four interesting papers, dealing with structured knowledge, have been selected for the section knowledge representation and management. Combining the newest techniques in computational linguistics and natural language processing with the latest methods in statistical data analysis, machine learning and text mining has proved to be efficient for turning unstructured textual information into meaningful knowledge. Three of the four selected papers for the section knowledge representation and management corroborate this approach and depict various experiments conducted to .extract meaningful knowledge from unstructured free texts such as extracting cancer disease characteristics from pathology reports, or extracting protein-protein interactions from biomedical papers, as well as extracting knowledge for the support of hypothesis generation in molecular biology from the Medline literature. Finally, the last paper addresses the level of formally representing and structuring information within clinical terminologies in order to render such information easily available and shareable among the health informatics community. Delivering common powerful tools able to automatically extract meaningful information from the huge amount of electronically unstructured free texts is an essential step towards promoting sharing and reusability across applications, domains, and institutions thus contributing to building capacities worldwide.
Valdez, Joshua; Rueschman, Michael; Kim, Matthew; Redline, Susan; Sahoo, Satya S
2016-10-01
Extraction of structured information from biomedical literature is a complex and challenging problem due to the complexity of biomedical domain and lack of appropriate natural language processing (NLP) techniques. High quality domain ontologies model both data and metadata information at a fine level of granularity, which can be effectively used to accurately extract structured information from biomedical text. Extraction of provenance metadata, which describes the history or source of information, from published articles is an important task to support scientific reproducibility. Reproducibility of results reported by previous research studies is a foundational component of scientific advancement. This is highlighted by the recent initiative by the US National Institutes of Health called "Principles of Rigor and Reproducibility". In this paper, we describe an effective approach to extract provenance metadata from published biomedical research literature using an ontology-enabled NLP platform as part of the Provenance for Clinical and Healthcare Research (ProvCaRe). The ProvCaRe-NLP tool extends the clinical Text Analysis and Knowledge Extraction System (cTAKES) platform using both provenance and biomedical domain ontologies. We demonstrate the effectiveness of ProvCaRe-NLP tool using a corpus of 20 peer-reviewed publications. The results of our evaluation demonstrate that the ProvCaRe-NLP tool has significantly higher recall in extracting provenance metadata as compared to existing NLP pipelines such as MetaMap.
Text Detection, Tracking and Recognition in Video: A Comprehensive Survey.
Yin, Xu-Cheng; Zuo, Ze-Yu; Tian, Shu; Liu, Cheng-Lin
2016-04-14
Intelligent analysis of video data is currently in wide demand because video is a major source of sensory data in our lives. Text is a prominent and direct source of information in video, while recent surveys of text detection and recognition in imagery [1], [2] focus mainly on text extraction from scene images. Here, this paper presents a comprehensive survey of text detection, tracking and recognition in video with three major contributions. First, a generic framework is proposed for video text extraction that uniformly describes detection, tracking, recognition, and their relations and interactions. Second, within this framework, a variety of methods, systems and evaluation protocols of video text extraction are summarized, compared, and analyzed. Existing text tracking techniques, tracking based detection and recognition techniques are specifically highlighted. Third, related applications, prominent challenges, and future directions for video text extraction (especially from scene videos and web videos) are also thoroughly discussed.
Modern Hardware Technologies and Software Techniques for On-Line Database Storage and Access.
1985-12-01
of the information in a message narrative. This method employs artificial intelligence techniques to extract information, In simalest terms, an...disf ribif ion (tape replacemenf) systemns Database distribution On-fine mass storage Videogame ROM (luke-box I Media Cost Mt $2-10/438 $10-SO/G38...trajninq ot tne great intelligence for the analyst would be required. If, on’ the other hand, a sentence analysis scneme siTole enouq,. for the low-level
Neural network explanation using inversion.
Saad, Emad W; Wunsch, Donald C
2007-01-01
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion; i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm, that extracts rules, in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method, which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
Accuracy assessment of airborne LIDAR data and automated extraction of features
NASA Astrophysics Data System (ADS)
Cetin, Ali Fuat
Airborne LIDAR technology is becoming more widely used since it provides fast and dense irregularly spaced 3D point clouds. The coordinates produced as a result of calibration of the system are used for surface modeling and information extraction. In this research a new idea of LIDAR detectable targets is introduced. In the second part of this research, a new technique to delineate the edge of road pavements automatically using only LIDAR is presented. The accuracy of LIDAR data should be determined before exploitation for any information extraction to support a Geographic Information System (GIS) database. Until recently there was no definitive research to provide a methodology for common and practical assessment of both horizontal and vertical accuracy of LIDAR data for end users. The idea used in this research was to use targets of such a size and design so that the position of each target can be determined using the Least Squares Image Matching Technique. The technique used in this research can provide end users and data providers an easy way to evaluate the quality of the product, especially when there are accessible hard surfaces to install the targets. The results of the technique are determined to be in a reasonable range when the point spacing of the data is sufficient. To delineate the edge of pavements, trees and buildings are removed from the point cloud, and the road surfaces are segmented from the remaining terrain data. This is accomplished using the homogeneous nature of road surfaces in intensity and height. There are not many studies to delineate the edge of road pavement after the road surfaces are extracted. In this research, template matching techniques are used with criteria computed by Gray Level Co-occurrence Matrix (GLCM) properties, in order to locate seed pixels in the image. The seed pixels are then used for placement of the matched templates along the road. The accuracy of the delineated edge of pavement is determined by comparing the coordinates of reference points collected via photogrammetry with the coordinates of the nearest points along the delineated edge.
Analysis of atomic force microscopy data for surface characterization using fuzzy logic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Mousa, Amjed, E-mail: aalmousa@vt.edu; Niemann, Darrell L.; Niemann, Devin J.
2011-07-15
In this paper we present a methodology to characterize surface nanostructures of thin films. The methodology identifies and isolates nanostructures using Atomic Force Microscopy (AFM) data and extracts quantitative information, such as their size and shape. The fuzzy logic based methodology relies on a Fuzzy Inference Engine (FIE) to classify the data points as being top, bottom, uphill, or downhill. The resulting data sets are then further processed to extract quantitative information about the nanostructures. In the present work we introduce a mechanism which can consistently distinguish crowded surfaces from those with sparsely distributed structures and present an omni-directional searchmore » technique to improve the structural recognition accuracy. In order to demonstrate the effectiveness of our approach we present a case study which uses our approach to quantitatively identify particle sizes of two specimens each with a unique gold nanoparticle size distribution. - Research Highlights: {yields} A Fuzzy logic analysis technique capable of characterizing AFM images of thin films. {yields} The technique is applicable to different surfaces regardless of their densities. {yields} Fuzzy logic technique does not require manual adjustment of the algorithm parameters. {yields} The technique can quantitatively capture differences between surfaces. {yields} This technique yields more realistic structure boundaries compared to other methods.« less
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure spectral substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both the spectral and the spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada and with detailed ground-truth information available from U. S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
NASA Technical Reports Server (NTRS)
Smith, Michael A.; Kanade, Takeo
1997-01-01
Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.
Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.
Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei
2015-01-01
Surveillance video service (SVS) is one of the most important services provided in a smart city. It is very important for the utilization of SVS to provide design efficient surveillance video analysis techniques. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content. It is very important and essential to extract key frames accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on GPU (graphics processing units) to ensure high efficiency and accuracy. For the determination of key frames, motion is a more salient feature in presenting actions or events, especially in surveillance videos. The motion feature is extracted in GPU to reduce running time. It is also smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach can extract key frames more accurately and efficiently compared with several other methods.
Validation assessment of shoreline extraction on medium resolution satellite image
NASA Astrophysics Data System (ADS)
Manaf, Syaifulnizam Abd; Mustapha, Norwati; Sulaiman, Md Nasir; Husin, Nor Azura; Shafri, Helmi Zulhaidi Mohd
2017-10-01
Monitoring coastal zones helps provide information about the conditions of the coastal zones, such as erosion or accretion. Moreover, monitoring the shorelines can help measure the severity of such conditions. Such measurement can be performed accurately by using Earth observation satellite images rather than by using traditional ground survey. To date, shorelines can be extracted from satellite images with a high degree of accuracy by using satellite image classification techniques based on machine learning to identify the land and water classes of the shorelines. In this study, the researchers validated the results of extracted shorelines of 11 classifiers using a reference shoreline provided by the local authority. Specifically, the validation assessment was performed to examine the difference between the extracted shorelines and the reference shorelines. The research findings showed that the SVM Linear was the most effective image classification technique, as evidenced from the lowest mean distance between the extracted shoreline and the reference shoreline. Furthermore, the findings showed that the accuracy of the extracted shoreline was not directly proportional to the accuracy of the image classification.
Zhou, Yangbo; Fox, Daniel S; Maguire, Pierce; O’Connell, Robert; Masters, Robert; Rodenburg, Cornelia; Wu, Hanchun; Dapor, Maurizio; Chen, Ying; Zhang, Hongzhou
2016-01-01
Two-dimensional (2D) materials usually have a layer-dependent work function, which require fast and accurate detection for the evaluation of their device performance. A detection technique with high throughput and high spatial resolution has not yet been explored. Using a scanning electron microscope, we have developed and implemented a quantitative analytical technique which allows effective extraction of the work function of graphene. This technique uses the secondary electron contrast and has nanometre-resolved layer information. The measurement of few-layer graphene flakes shows the variation of work function between graphene layers with a precision of less than 10 meV. It is expected that this technique will prove extremely useful for researchers in a broad range of fields due to its revolutionary throughput and accuracy. PMID:26878907
Current trends in geomorphological mapping
NASA Astrophysics Data System (ADS)
Seijmonsbergen, A. C.
2012-04-01
Geomorphological mapping is a world currently in motion, driven by technological advances and the availability of new high resolution data. As a consequence, classic (paper) geomorphological maps which were the standard for more than 50 years are rapidly being replaced by digital geomorphological information layers. This is witnessed by the following developments: 1. the conversion of classic paper maps into digital information layers, mainly performed in a digital mapping environment such as a Geographical Information System, 2. updating the location precision and the content of the converted maps, by adding more geomorphological details, taken from high resolution elevation data and/or high resolution image data, 3. (semi) automated extraction and classification of geomorphological features from digital elevation models, broadly separated into unsupervised and supervised classification techniques and 4. New digital visualization / cartographic techniques and reading interfaces. Newly digital geomorphological information layers can be based on manual digitization of polygons using DEMs and/or aerial photographs, or prepared through (semi) automated extraction and delineation of geomorphological features. DEMs are often used as basis to derive Land Surface Parameter information which is used as input for (un) supervised classification techniques. Especially when using high-res data, object-based classification is used as an alternative to traditional pixel-based classifications, to cluster grid cells into homogeneous objects, which can be classified as geomorphological features. Classic map content can also be used as training material for the supervised classification of geomorphological features. In the classification process, rule-based protocols, including expert-knowledge input, are used to map specific geomorphological features or entire landscapes. Current (semi) automated classification techniques are increasingly able to extract morphometric, hydrological, and in the near future also morphogenetic information. As a result, these new opportunities have changed the workflows for geomorphological mapmaking, and their focus have shifted from field-based techniques to using more computer-based techniques: for example, traditional pre-field air-photo based maps are now replaced by maps prepared in a digital mapping environment, and designated field visits using mobile GIS / digital mapping devices now focus on gathering location information and attribute inventories and are strongly time efficient. The resulting 'modern geomorphological maps' are digital collections of geomorphological information layers consisting of georeferenced vector, raster and tabular data which are stored in a digital environment such as a GIS geodatabase, and are easily visualized as e.g. 'birds' eye' views, as animated 3D displays, on virtual globes, or stored as GeoPDF maps in which georeferenced attribute information can be easily exchanged over the internet. Digital geomorphological information layers are increasingly accessed via web-based services distributed through remote servers. Information can be consulted - or even build using remote geoprocessing servers - by the end user. Therefore, it will not only be the geomorphologist anymore, but also the professional end user that dictates the applied use of digital geomorphological information layers.
Fine-grained information extraction from German transthoracic echocardiography reports.
Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank
2015-11-12
Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite of their importance for clinical trials. This work therefore aimed at development and evaluation of an information extraction component with a fine-grained terminology that enables to recognize almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated according to documents with different layouts. The final system achieved state-of-the-art precision (micro average.996) and recall (micro average.961) on 100 test documents that represent more than 90 % of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with f 1=.989 (micro average) and f 1=.963 (macro average). As a result of keyword matching and restraint concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and documents with uncommon layout. The developed terminology and the proposed information extraction system allow to extract fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research.
Ares, Ana M; Nozal, María J; Bernal, José
2013-10-25
Broccoli (Brassica oleracea L. var. Italica) contains substantial amount of health-promoting compounds such as vitamins, glucosinolates, phenolic compounds, and dietary essential minerals; thus, it benefits health beyond providing just basic nutrition, and consumption of broccoli has been increasing over the years. This review gives an overview on the extraction and separation techniques, as well as the biological activity of some of the above mentioned compounds which have been published in the period January 2008 to January 2013. The work has been distributed according to the different families of health promoting compounds discussing the extraction procedures and the analytical techniques employed for their characterization. Finally, information about the different biological activities of these compounds has been also provided. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Imhoff, M. L.; Vermillion, C. H.; Khan, F. A.
1984-01-01
An investigation to examine the utility of spaceborne radar image data to malaria vector control programs is described. Specific tasks involve an analysis of radar illumination geometry vs information content, the synergy of radar and multispectral data mergers, and automated information extraction techniques.
Raisutis, Renaldas; Samaitis, Vykintas
2017-01-01
This work proposes a novel hybrid signal processing technique to extract information on disbond-type defects from a single B-scan in the process of non-destructive testing (NDT) of glass fiber reinforced plastic (GFRP) material using ultrasonic guided waves (GW). The selected GFRP sample has been a segment of wind turbine blade, which possessed an aerodynamic shape. Two disbond type defects having diameters of 15 mm and 25 mm were artificially constructed on its trailing edge. The experiment has been performed using the low-frequency ultrasonic system developed at the Ultrasound Institute of Kaunas University of Technology and only one side of the sample was accessed. A special configuration of the transmitting and receiving transducers fixed on a movable panel with a separation distance of 50 mm was proposed for recording the ultrasonic guided wave signals at each one-millimeter step along the scanning distance up to 500 mm. Finally, the hybrid signal processing technique comprising the valuable features of the three most promising signal processing techniques: cross-correlation, wavelet transform, and Hilbert–Huang transform has been applied to the received signals for the extraction of defects information from a single B-scan image. The wavelet transform and cross-correlation techniques have been combined in order to extract the approximated size and location of the defects and measurements of time delays. Thereafter, Hilbert–Huang transform has been applied to the wavelet transformed signal to compare the variation of instantaneous frequencies and instantaneous amplitudes of the defect-free and defective signals. PMID:29232845
Road Extraction from AVIRIS Using Spectral Mixture and Q-Tree Filter Techniques
NASA Technical Reports Server (NTRS)
Gardner, Margaret E.; Roberts, Dar A.; Funk, Chris; Noronha, Val
2001-01-01
Accurate road location and condition information are of primary importance in road infrastructure management. Additionally, spatially accurate and up-to-date road networks are essential in ambulance and rescue dispatch in emergency situations. However, accurate road infrastructure databases do not exist for vast areas, particularly in areas with rapid expansion. Currently, the US Department of Transportation (USDOT) extends great effort in field Global Positioning System (GPS) mapping and condition assessment to meet these informational needs. This methodology, though effective, is both time-consuming and costly, because every road within a DOT's jurisdiction must be field-visited to obtain accurate information. Therefore, the USDOT is interested in identifying new technologies that could help meet road infrastructure informational needs more effectively. Remote sensing provides one means by which large areas may be mapped with a high standard of accuracy and is a technology with great potential in infrastructure mapping. The goal of our research is to develop accurate road extraction techniques using high spatial resolution, fine spectral resolution imagery. Additionally, our research will explore the use of hyperspectral data in assessing road quality. Finally, this research aims to define the spatial and spectral requirements for remote sensing data to be used successfully for road feature extraction and road quality mapping. Our findings will facilitate the USDOT in assessing remote sensing as a new resource in infrastructure studies.
Oil-Water Flow Investigations using Planar-Laser Induced Fluorescence and Particle Velocimetry
NASA Astrophysics Data System (ADS)
Ibarra, Roberto; Matar, Omar K.; Markides, Christos N.
2017-11-01
The study of the complex behaviour of immiscible liquid-liquid flow in pipes requires the implementation of advanced measurement techniques in order to extract detailed in situ information. Laser-based diagnostic techniques allow the extraction of high-resolution space- and time resolve phase and velocity information, which aims to improve the fundamental understanding of these flows and to validate closure relations for advanced multiphase flow models. This work shows a novel simultaneous planar-laser induced fluorescence and particle velocimetry in stratified oil-water flows using two laser light sheets at two different wavelengths for fluids with different refractive indices at horizontal and upward pipe inclinations (<5°) in stratified flow conditions (i.e. separated layers). Complex flow structures are extracted from 2-D instantaneous velocity fields, which are strongly dependent on the pipe inclination at low velocities. The analysis of mean wall-normal velocity profiles and velocity fluctuations suggests the presence of single- and counter-rotating vortices in the azimuthal direction, especially in the oil layer, which can be attributed to the influence of the interfacial waves. Funding from BP, and the TMF Consortium is gratefully acknowledged.
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summery, D. C.; Johnson, W. D.
1972-01-01
Techniques quoted in the literature for the extraction of stability derivative information from flight test records are reviewed. A recent technique developed at NASA's Langley Research Center was regarded as the most productive yet developed. Results of tests of the sensitivity of this procedure to various types of data noise and to the accuracy of the estimated values of the derivatives are reported. Computer programs for providing these initial estimates are given. The literature review also includes a discussion of flight test measuring techniques, instrumentation, and piloting techniques.
NASA Astrophysics Data System (ADS)
Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang
2016-09-01
Building extraction currently is important in the application of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information, however, most of them still have some obvious disadvantages, such as the ignorance of spectral information, the contradiction between extraction rate and extraction accuracy. The purpose of this research is to develop an effective method to detect building information for Chinese GF-1 data. Firstly, the image preprocessing technique is used to normalize the image and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to get the candidate building objects. Furthermore, in order to refine building objects and further remove false objects, the post-processing (e.g., the shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission errors (OE), commission errors (CE), the overall accuracy (OA) and Kappa are used at final. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14% .At the same time, the Kappa increase by 16.09%. In experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracies and visual inspection
MRI and unilateral NMR study of reindeer skin tanning processes.
Zhu, Lizheng; Del Federico, Eleonora; Ilott, Andrew J; Klokkernes, Torunn; Kehlet, Cindie; Jerschow, Alexej
2015-04-07
The study of arctic or subarctic indigenous skin clothing material, known for its design and ability to keep the body warm, provides information about the tanning materials and techniques. The study also provides clues about the culture that created it, since tanning processes are often specific to certain indigenous groups. Untreated skin samples and samples treated with willow (Salix sp) bark extract and cod liver oil are compared in this study using both MRI and unilateral NMR techniques. The two types of samples show different proton spatial distributions and different relaxation times, which may also provide information about the tanning technique and aging behavior.
Basaruddin, T.
2016-01-01
One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645. PMID:27843447
Contour metrology using critical dimension atomic force microscopy
NASA Astrophysics Data System (ADS)
Orji, Ndubuisi G.; Dixson, Ronald G.; Vladár, András E.; Ming, Bin; Postek, Michael T.
2012-03-01
The critical dimension atomic force microscope (CD-AFM), which is used as a reference instrument in lithography metrology, has been proposed as a complementary instrument for contour measurement and verification. Although data from CD-AFM is inherently three dimensional, the planar two-dimensional data required for contour metrology is not easily extracted from the top-down CD-AFM data. This is largely due to the limitations of the CD-AFM method for controlling the tip position and scanning. We describe scanning techniques and profile extraction methods to obtain contours from CD-AFM data. We also describe how we validated our technique, and explain some of its limitations. Potential sources of error for this approach are described, and a rigorous uncertainty model is presented. Our objective is to show which data acquisition and analysis methods could yield optimum contour information while preserving some of the strengths of CD-AFM metrology. We present comparison of contours extracted using our technique to those obtained from the scanning electron microscope (SEM), and the helium ion microscope (HIM).
Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.
The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.
Passive Polarimetric Information Processing for Target Classification
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz; Sadjadi, Farzad
Polarimetric sensing is an area of active research in a variety of applications. In particular, the use of polarization diversity has been shown to improve performance in automatic target detection and recognition. Within the diverse scope of polarimetric sensing, the field of passive polarimetric sensing is of particular interest. This chapter presents several new methods for gathering in formation using such passive techniques. One method extracts three-dimensional (3D) information and surface properties using one or more sensors. Another method extracts scene-specific algebraic expressions that remain unchanged under polariza tion transformations (such as along the transmission path to the sensor).
Lee, Jihye; Kang, Min Hwa; Lee, Kang-Bong; Lee, Yeonhee
2013-05-15
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) and X-ray photoelectron spectroscopy (XPS) are well established surface techniques that provide both elemental and organic information from several monolayers of a sample surface, while also allowing depth profiling or image mapping to be carried out. The static TOF-SIMS with improved performances has expanded the application of TOF-SIMS to the study of a variety of organic, polymeric and biological materials. In this work, TOF-SIMS, XPS and Fourier Transform Infrared (FTIR) measurements were used to characterize commercial natural dyes and traditional silk fabric dyed with plant extracts dyes avoiding the time-consuming and destructive extraction procedures necessary for the spectrophotometric and chromatographic methods previously used. Silk textiles dyed with plant extracts were then analyzed for chemical and functional group identification of their dye components and mordants. TOF-SIMS spectra for the dyed silk fabric showed element ions from metallic mordants, specific fragment ions and molecular ions from plant-extracted dyes. The results of TOF-SIMS, XPS and FTIR are very useful as a reference database for comparison with data about traditional Korean silk fabric and to provide an understanding of traditional dyeing materials. Therefore, this study shows that surface techniques are useful for micro-destructive analysis of plant-extracted dyes and Korean dyed silk fabric.
Analysis of soil moisture extraction algorithm using data from aircraft experiments
NASA Technical Reports Server (NTRS)
Burke, H. H. K.; Ho, J. H.
1981-01-01
A soil moisture extraction algorithm is developed using a statistical parameter inversion method. Data sets from two aircraft experiments are utilized for the test. Multifrequency microwave radiometric data surface temperature, and soil moisture information are contained in the data sets. The surface and near surface ( or = 5 cm) soil moisture content can be extracted with accuracy of approximately 5% to 6% for bare fields and fields with grass cover by using L, C, and X band radiometer data. This technique is used for handling large amounts of remote sensing data from space.
Using nano- and micro-particles of silver in lignin analysis
Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph
2006-01-01
Although there are numerous techniques available to study lignin in its various states, they all have limitations and to extract most amount of information, a number of analytical techniques have to be jointly used. We have recently started applying a new approach to lignin analysis - namely using nano- and micro-particles of silver for study of native and residual...
NASA Astrophysics Data System (ADS)
Arunachalam, M. S.; Puli, Anil; Anuradha, B.
2016-07-01
In the present work continuous extraction of convective cloud optical information and reflectivity (MAX(Z) in dBZ) using online retrieval technique for time series data production from Doppler Weather Radar (DWR) located at Indian Meteorological Department, Chennai has been developed in MATLAB. Reflectivity measurements for different locations within the DWR range of 250 Km radii of circular disc area can be retrieved using this technique. It gives both time series reflectivity of point location and also Range Time Intensity (RTI) maps of reflectivity for the corresponding location. The Graphical User Interface (GUI) developed for the cloud reflectivity is user friendly; it also provides the convective cloud optical information such as cloud base height (CBH), cloud top height (CTH) and cloud optical depth (COD). This technique is also applicable for retrieving other DWR products such as Plan Position Indicator (Z, in dBZ), Plan Position Indicator (Z, in dBZ)-Close Range, Volume Velocity Processing (V, in knots), Plan Position Indicator (V, in m/s), Surface Rainfall Intensity (SRI, mm/hr), Precipitation Accumulation (PAC) 24 hrs at 0300UTC. Keywords: Reflectivity, cloud top height, cloud base, cloud optical depth
Following the Social Media: Aspect Evolution of Online Discussion
NASA Astrophysics Data System (ADS)
Tang, Xuning; Yang, Christopher C.
Due to the advance of Internet and Web 2.0 technologies, it is easy to extract thousands of threads about a topic of interest from an online forum but it is nontrivial to capture the blueprint of different aspects (i.e., subtopic, or facet) associated with the topic. To better understand and analyze a forum discussion given topic, it is important to uncover the evolution relationships (temporal dependencies) between different topic aspects (i.e. how the discussion topic is evolving). Traditional Topic Detection and Tracking (TDT) techniques usually organize topics as a flat structure but it does not present the evolution relationships between topic aspects. In addition, the properties of short and sparse messages make the content-based TDT techniques difficult to perform well in identifying evolution relationships. The contributions in this paper are two-folded. We formally define a topic aspect evolution graph modeling framework and propose to utilize social network information, content similarity and temporal proximity to model evolution relationships between topic aspects. The experimental results showed that, by incorporating social network information, our technique significantly outperformed content-based technique in the task of extracting evolution relationships between topic aspects.
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and other one based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
Exploring patterns of epigenetic information with data mining techniques.
Aguiar-Pulido, Vanessa; Seoane, José A; Gestal, Marcos; Dorado, Julián
2013-01-01
Data mining, a part of the Knowledge Discovery in Databases process (KDD), is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management. Analyses of epigenetic data have evolved towards genome-wide and high-throughput approaches, thus generating great amounts of data for which data mining is essential. Part of these data may contain patterns of epigenetic information which are mitotically and/or meiotically heritable determining gene expression and cellular differentiation, as well as cellular fate. Epigenetic lesions and genetic mutations are acquired by individuals during their life and accumulate with ageing. Both defects, either together or individually, can result in losing control over cell growth and, thus, causing cancer development. Data mining techniques could be then used to extract the previous patterns. This work reviews some of the most important applications of data mining to epigenetics.
NASA Technical Reports Server (NTRS)
Parada, N. D. J.; Novo, E. M. L. M.
1983-01-01
Two sets of MSS/LANDSAT data with solar elevation ranging from 22 deg to 41 deg were used at the Image-100 System to implement the Eliason et alii technique for extracting the topographic modulation component. An unsupervised cluster analysis was used to obtain an average brightness image for each channel. Analysis of the enhanced imaged shows that the technique for extracting topographic modulation component is more appropriated to MSS data obtained under high sun elevation ngles. Low sun elevation increases the variance of each cluster so that the average brightness doesn't represent its albedo proprties. The topographic modulation component applied to low sun elevation angle damages rather than enhance topographic information. Better results were produced for channels 4 and 5 than for channels 6 and 7.
Technique and cue selection for graphical presentation of generic hyperdimensional data
NASA Astrophysics Data System (ADS)
Howard, Lee M.; Burton, Robert P.
2013-12-01
Several presentation techniques have been created for visualization of data with more than three variables. Packages have been written, each of which implements a subset of these techniques. However, these packages generally fail to provide all the features needed by the user during the visualization process. Further, packages generally limit support for presentation techniques to a few techniques. A new package called Petrichor accommodates all necessary and useful features together in one system. Any presentation technique may be added easily through an extensible plugin system. Features are supported by a user interface that allows easy interaction with data. Annotations allow users to mark up visualizations and share information with others. By providing a hyperdimensional graphics package that easily accommodates presentation techniques and includes a complete set of features, including those that are rarely or never supported elsewhere, the user is provided with a tool that facilitates improved interaction with multivariate data to extract and disseminate information.
1982-10-01
and time-to-go (T60) are provided from the Estimation Algorithm. The gimbal angle commands used in the first two phases are applied to the gimbal...lighting techniques are also used to simplify image understanding or to extract additional information about position, range, or shape of objects in the...motion or firing dis- turbances. Since useful muzzle position and rate information is difficult to obtain, conventional feedback techniques 447 cannot
Using Open Web APIs in Teaching Web Mining
ERIC Educational Resources Information Center
Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju
2009-01-01
With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…
Supercritical fluid extraction. Principles and practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, M.A.; Krukonis, V.J.
This book is a presentation of the fundamentals and application of super-critical fluid solvents (SCF). The authors cover virtually every facet of SCF technology: the history of SCF extraction, its underlying thermodynamic principles, process principles, industrial applications, and analysis of SCF research and development efforts. The thermodynamic principles governing SCF extraction are covered in depth. The often complex three-dimensional pressure-temperature composition (PTx) phase diagrams for SCF-solute mixtures are constructed in a coherent step-by-step manner using the more familiar two-dimensional Px diagrams. The experimental techniques used to obtain high pressure phase behavior information are described in detail and the advantages andmore » disadvantages of each technique are explained. Finally, the equations used to model SCF-solute mixtures are developed, and modeling results are presented to highlight the correlational strengths of a cubic equation of state.« less
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.
Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong
2018-05-11
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN
Cheng, Gang; Chen, Xihui
2018-01-01
Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671
On-board data management study for EOPAP
NASA Technical Reports Server (NTRS)
Davisson, L. D.
1975-01-01
The requirements, implementation techniques, and mission analysis associated with on-board data management for EOPAP were studied. SEASAT-A was used as a baseline, and the storage requirements, data rates, and information extraction requirements were investigated for each of the following proposed SEASAT sensors: a short pulse 13.9 GHz radar, a long pulse 13.9 GHz radar, a synthetic aperture radar, a multispectral passive microwave radiometer facility, and an infrared/visible very high resolution radiometer (VHRR). Rate distortion theory was applied to determine theoretical minimum data rates and compared with the rates required by practical techniques. It was concluded that practical techniques can be used which approach the theoretically optimum based upon an empirically determined source random process model. The results of the preceding investigations were used to recommend an on-board data management system for (1) data compression through information extraction, optimal noiseless coding, source coding with distortion, data buffering, and data selection under command or as a function of data activity, (2) for command handling, (3) for spacecraft operation and control, and (4) for experiment operation and monitoring.
Model of experts for decision support in the diagnosis of leukemia patients.
Corchado, Juan M; De Paz, Juan F; Rodríguez, Sara; Bajo, Javier
2009-07-01
Recent advances in the field of biomedicine, specifically in the field of genomics, have led to an increase in the information available for conducting expression analysis. Expression analysis is a technique used in transcriptomics, a branch of genomics that deals with the study of messenger ribonucleic acid (mRNA) and the extraction of information contained in the genes. This increase in information is reflected in the exon arrays, which require the use of new techniques in order to extract the information. The purpose of this study is to provide a tool based on a mixture of experts model that allows the analysis of the information contained in the exon arrays, from which automatic classifications for decision support in diagnoses of leukemia patients can be made. The proposed model integrates several cooperative algorithms characterized for their efficiency for data processing, filtering, classification and knowledge extraction. The Cancer Institute of the University of Salamanca is making an effort to develop tools to automate the evaluation of data and to facilitate de analysis of information. This proposal is a step forward in this direction and the first step toward the development of a mixture of experts tool that integrates different cognitive and statistical approaches to deal with the analysis of exon arrays. The mixture of experts model presented within this work provides great capacities for learning and adaptation to the characteristics of the problem in consideration, using novel algorithms in each of the stages of the analysis process that can be easily configured and combined, and provides results that notably improve those provided by the existing methods for exon arrays analysis. The material used consists of data from exon arrays provided by the Cancer Institute that contain samples from leukemia patients. The methodology used consists of a system based on a mixture of experts. Each one of the experts incorporates novel artificial intelligence techniques that improve the process of carrying out various tasks such as pre-processing, filtering, classification and extraction of knowledge. This article will detail the manner in which individual experts are combined so that together they generate a system capable of extracting knowledge, thus permitting patients to be classified in an automatic and efficient manner that is also comprehensible for medical personnel. The system has been tested in a real setting and has been used for classifying patients who suffer from different forms of leukemia at various stages. Personnel from the Cancer Institute supervised and participated throughout the testing period. Preliminary results are promising, notably improving the results obtained with previously used tools. The medical staff from the Cancer Institute considers the tools that have been developed to be positive and very useful in a supporting capacity for carrying out their daily tasks. Additionally the mixture of experts supplies a tool for the extraction of necessary information in order to explain the associations that have been made in simple terms. That is, it permits the extraction of knowledge for each classification made and generalized in order to be used in subsequent classifications. This allows for a large amount of learning and adaptation within the proposed system.
NASA Astrophysics Data System (ADS)
Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-09-01
Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications inherit from this detailed up-to-date data sources. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values which are critical to building up strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of the aerial photography. However, these techniques are associated with some constraints, such as labour intensive field work and a lot of financial requirement which can be overcome by means of integrated LiDAR and digital image datasets. Compared to predominant studies on trees extraction mainly in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presented a workflow about semi-automated approach for extracting urban trees from integrated processing of airborne based LiDAR point cloud and multispectral digital image datasets over Istanbul city of Turkey. The paper reveals that the integrated datasets is a suitable technology and viable source of information for urban trees management. As a conclusion, therefore, the extracted information provides a snapshot about location, composition and extent of trees in the study area useful to city planners and other decision makers in order to understand how much canopy cover exists, identify new planting, removal, or reforestation opportunities and what locations have the greatest need or potential to maximize benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.
Avian Semen Collection by Cloacal Massage and Isolation of DNA from Sperm.
Kucera, Aurelia C; Heidinger, Britt J
2018-02-05
Collection of semen may be useful for a wide range of applications including studies involving sperm quality, sperm telomere dynamics, and epigenetics. Birds are widely used subjects in biological research and are ideal for studies involving repeated sperm samples. However, few resources are currently available for those wishing to learn how to collect and extract DNA from avian sperm. Here we describe cloacal massage, a gentle, non-invasive manual technique for collecting avian sperm. Although this technique is established in the literature, it can be difficult to learn from the available descriptions. We also provide information for extracting DNA from avian semen using a commercial extraction kit with modifications. Cloacal massage can be easily used on any small- to medium-sized male bird in reproductive condition. Following collection, the semen can be used immediately for motility assays, or frozen for DNA extraction following the protocol described herein. This extraction protocol was refined for avian sperm and has been successfully used on samples collected from several passerine species (Passer domesticus, Spizella passerina, Haemorhous mexicanus, and Turdus migratorius) and one columbid (Columba livia).
Taming Big Data: An Information Extraction Strategy for Large Clinical Text Corpora.
Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gupta, Kalpana; Trautner, Barbara
2015-01-01
Concepts of interest for clinical and research purposes are not uniformly distributed in clinical text available in electronic medical records. The purpose of our study was to identify filtering techniques to select 'high yield' documents for increased efficacy and throughput. Using two large corpora of clinical text, we demonstrate the identification of 'high yield' document sets in two unrelated domains: homelessness and indwelling urinary catheters. For homelessness, the high yield set includes homeless program and social work notes. For urinary catheters, concepts were more prevalent in notes from hospitalized patients; nursing notes accounted for a majority of the high yield set. This filtering will enable customization and refining of information extraction pipelines to facilitate extraction of relevant concepts for clinical decision support and other uses.
Comparison of extraction techniques of robenidine from poultry feed samples.
Wilga, Joanna; Wasik, Agata Kot-; Namieśnik, Jacek
2007-10-31
In this paper, effectiveness of six different commonly applied extraction techniques for the determination of robenidine in poultry feed has been compared. The sample preparation techniques included shaking, Soxhlet, Soxtec, ultrasonically assisted extraction, microwave - assisted extraction and accelerated solvent extraction. Comparison of these techniques was done with respect to the recovery extraction, temperature and time, reproducibility and solvent consumption. Every single extract was subjected to clean - up using aluminium oxide column (Pasteur pipette filled with 1g of aluminium oxide), from which robenidine was eluted with 10ml of methanol. The eluate from the clean-up column was collected in a volumetric flask, and finally it was analysed by HPLC-DAD-MS. In general, all extraction techniques were capable of isolating of robenidine from poultry feed, but the recovery obtained using modern extraction techniques was higher than that obtained using conventional techniques. In particular, accelerated solvent extraction was more superior to other techniques, which highlights the advantages of this sample preparation technique. However, in routine analysis, shaking and ultrasonically assisted extraction is still the preferred method for the solution of robenidine and other coccidiostatics.
Built-Up Area Feature Extraction: Second Year Technical Progress Report
1990-02-01
Contract DACA 72-87-C-001. During this year we have built on previous research, in road network extraction and in the detection and delineation of buildings...methods to perform stereo analysis using loosely coupled techniques where comparison is deferred until each method has performed a complete estimate...or missing information. A course of action may be suggested to the user depending on the error. Although the checks do not guarantee the correctness
Lip boundary detection techniques using color and depth information
NASA Astrophysics Data System (ADS)
Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek
2002-01-01
This paper presents our approach to using a stereo camera to obtain 3-D image data to be used to improve existing lip boundary detection techniques. We show that depth information as provided by our approach can be used to significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information can be used to localize the face. Then we can determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary depending on light conditions, background, and the human race. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information along with color information can provide more accurate lip boundary detection results as compared to color only based techniques.
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
ERIC Educational Resources Information Center
Mitri, Michel
2012-01-01
XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…
Advances in Spectral-Spatial Classification of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.
2012-01-01
Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
Characterizing rainfall in the Tenerife island
NASA Astrophysics Data System (ADS)
Díez-Sierra, Javier; del Jesus, Manuel; Losada Rodriguez, Inigo
2017-04-01
In many locations, rainfall data are collected through networks of meteorological stations. The data collection process is nowadays automated in many places, leading to the development of big databases of rainfall data covering extensive areas of territory. However, managers, decision makers and engineering consultants tend not to extract most of the information contained in these databases due to the lack of specific software tools for their exploitation. Here we present the modeling and development effort put in place in the Tenerife island in order to develop MENSEI-L, a software tool capable of automatically analyzing a complete rainfall database to simplify the extraction of information from observations. MENSEI-L makes use of weather type information derived from atmospheric conditions to separate the complete time series into homogeneous groups where statistical distributions are fitted. Normal and extreme regimes are obtained in this manner. MENSEI-L is also able to complete missing data in the time series and to generate synthetic stations by using Kriging techniques. These techniques also serve to generate the spatial regimes of precipitation, both normal and extreme ones. MENSEI-L makes use of weather type information to also provide a stochastic three-day probability forecast for rainfall.
Quantum algorithms for topological and geometric analysis of data
Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo
2016-01-01
Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491
Development of Mobile Mapping System for 3D Road Asset Inventory.
Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott
2016-03-12
Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System using an equipped laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is experimented by mounting it on a truck and golf cart. By using derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory are discussed.
Development of Mobile Mapping System for 3D Road Asset Inventory
Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott
2016-01-01
Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System using an equipped laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is experimented by mounting it on a truck and golf cart. By using derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory are discussed. PMID:26985897
Characterization of Natural Dyes and Traditional Korean Silk Fabric by Surface Analytical Techniques
Lee, Jihye; Kang, Min Hwa; Lee, Kang-Bong; Lee, Yeonhee
2013-01-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) and X-ray photoelectron spectroscopy (XPS) are well established surface techniques that provide both elemental and organic information from several monolayers of a sample surface, while also allowing depth profiling or image mapping to be carried out. The static TOF-SIMS with improved performances has expanded the application of TOF-SIMS to the study of a variety of organic, polymeric and biological materials. In this work, TOF-SIMS, XPS and Fourier Transform Infrared (FTIR) measurements were used to characterize commercial natural dyes and traditional silk fabric dyed with plant extracts dyes avoiding the time-consuming and destructive extraction procedures necessary for the spectrophotometric and chromatographic methods previously used. Silk textiles dyed with plant extracts were then analyzed for chemical and functional group identification of their dye components and mordants. TOF-SIMS spectra for the dyed silk fabric showed element ions from metallic mordants, specific fragment ions and molecular ions from plant-extracted dyes. The results of TOF-SIMS, XPS and FTIR are very useful as a reference database for comparison with data about traditional Korean silk fabric and to provide an understanding of traditional dyeing materials. Therefore, this study shows that surface techniques are useful for micro-destructive analysis of plant-extracted dyes and Korean dyed silk fabric. PMID:28809257
Robust watermark technique using masking and Hermite transform.
Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo
2016-01-01
The following paper evaluates a watermark algorithm designed for digital images by using a perceptive mask and a normalization process, thus preventing human eye detection, as well as ensuring its robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image, while incorporating human visual system properties; moreover, it is based on the Gaussian functions derivates. The applied watermark represents information of the digital image proprietor. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized crossed correlation, and bit error rate. Several watermark extraction tests were performed, with against geometric and common processing attacks. It allowed us to identify how many bits in the watermark can be modified for its adequate extraction.
Artificially intelligent recognition of Arabic speaker using voice print-based local features
NASA Astrophysics Data System (ADS)
Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz
2016-11-01
Local features for any pattern recognition system are based on the information extracted locally. In this paper, a local feature extraction technique was developed. This feature was extracted in the time-frequency plain by taking the moving average on the diagonal directions of the time-frequency plane. This feature captured the time-frequency events producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we referred to this technique as voice print-based local feature. The proposed feature was compared to other features including mel-frequency cepstral coefficient (MFCC) for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database that consisted of two short sentences uttered by 182 speakers. The proposed feature attained 98.35% recognition rate compared to 96.7% for MFCC using the LDC subset.
NASA Technical Reports Server (NTRS)
Haralick, R. H. (Principal Investigator); Bosley, R. J.
1974-01-01
The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
Multichannel Doppler Processing for an Experimental Low-Angle Tracking System
1990-05-01
estimation techniques at sea. Because of clutter and noise, it is necessary to use a number of different processing algorithms to extract the required...a number of different processing algorithms to extract the required information. Consequently, the ELAT radar system is composed of multiple...corresponding to RF frequencies, f, and f2. For mode 3, the ambiguities occur at vbi = 15.186 knots and vb2 = 16.96 knots. The sea clutter, with a spectrum
Review: Magnetic resonance imaging techniques in ophthalmology
Fagan, Andrew J.
2012-01-01
Imaging the eye with magnetic resonance imaging (MRI) has proved difficult due to the eye’s propensity to move involuntarily over typical imaging timescales, obscuring the fine structure in the eye due to the resulting motion artifacts. However, advances in MRI technology help to mitigate such drawbacks, enabling the acquisition of high spatiotemporal resolution images with a variety of contrast mechanisms. This review aims to classify the MRI techniques used to date in clinical and preclinical ophthalmologic studies, describing the qualitative and quantitative information that may be extracted and how this may inform on ocular pathophysiology. PMID:23112569
Imagery Interpretation is a timed-tested technique for extracting landscape-level information from aerial photographs and other types of remotely sensed data. The U.S. Environmental Protection Agency's Environmental Photographic Interpretation Center (EPIC) has a 25+ year history...
Chemical named entities recognition: a review on approaches and applications
2014-01-01
The rapid increase in the flow rate of published digital information in all disciplines has resulted in a pressing need for techniques that can simplify the use of this information. The chemistry literature is very rich with information about chemical entities. Extracting molecules and their related properties and activities from the scientific literature to “text mine” these extracted data and determine contextual relationships helps research scientists, particularly those in drug development. One of the most important challenges in chemical text mining is the recognition of chemical entities mentioned in the texts. In this review, the authors briefly introduce the fundamental concepts of chemical literature mining, the textual contents of chemical documents, and the methods of naming chemicals in documents. We sketch out dictionary-based, rule-based and machine learning, as well as hybrid chemical named entity recognition approaches with their applied solutions. We end with an outlook on the pros and cons of these approaches and the types of chemical entities extracted. PMID:24834132
Chemical named entities recognition: a review on approaches and applications.
Eltyeb, Safaa; Salim, Naomie
2014-01-01
The rapid increase in the flow rate of published digital information in all disciplines has resulted in a pressing need for techniques that can simplify the use of this information. The chemistry literature is very rich with information about chemical entities. Extracting molecules and their related properties and activities from the scientific literature to "text mine" these extracted data and determine contextual relationships helps research scientists, particularly those in drug development. One of the most important challenges in chemical text mining is the recognition of chemical entities mentioned in the texts. In this review, the authors briefly introduce the fundamental concepts of chemical literature mining, the textual contents of chemical documents, and the methods of naming chemicals in documents. We sketch out dictionary-based, rule-based and machine learning, as well as hybrid chemical named entity recognition approaches with their applied solutions. We end with an outlook on the pros and cons of these approaches and the types of chemical entities extracted.
Recognition techniques for extracting information from semistructured documents
NASA Astrophysics Data System (ADS)
Della Ventura, Anna; Gagliardi, Isabella; Zonta, Bruna
2000-12-01
Archives of optical documents are more and more massively employed, the demand driven also by the new norms sanctioning the legal value of digital documents, provided they are stored on supports that are physically unalterable. On the supply side there is now a vast and technologically advanced market, where optical memories have solved the problem of the duration and permanence of data at costs comparable to those for magnetic memories. The remaining bottleneck in these systems is the indexing. The indexing of documents with a variable structure, while still not completely automated, can be machine supported to a large degree with evident advantages both in the organization of the work, and in extracting information, providing data that is much more detailed and potentially significant for the user. We present here a system for the automatic registration of correspondence to and from a public office. The system is based on a general methodology for the extraction, indexing, archiving, and retrieval of significant information from semi-structured documents. This information, in our prototype application, is distributed among the database fields of sender, addressee, subject, date, and body of the document.
Video indexing based on image and sound
NASA Astrophysics Data System (ADS)
Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose
1997-10-01
Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from sound channel than from image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed. It should speed-up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel, through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experiment results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT database. Vocal dictation as experimented on newspaper sentences spoken by several speakers. Script alignment was tested on part of a carton movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.
Sutha, P; Jayanthi, V E
2017-12-08
Birth defect-related demise is mainly due to congenital heart defects. In the earlier stage of pregnancy, fetus problem can be identified by finding information about the fetus to avoid stillbirths. The gold standard used to monitor the health status of the fetus is by Cardiotachography(CTG), cannot be used for long durations and continuous monitoring. There is a need for continuous and long duration monitoring of fetal ECG signals to study the progressive health status of the fetus using portable devices. The non-invasive method of electrocardiogram recording is one of the best method used to diagnose fetal cardiac problem rather than the invasive methods.The monitoring of the fECG requires development of a miniaturized hardware and a efficient signal processing algorithms to extract the fECG embedded in the mother ECG. The paper discusses a prototype hardware developed to monitor and record the raw mother ECG signal containing the fECG and a signal processing algorithm to extract the fetal Electro Cardiogram signal. We have proposed two methods of signal processing, first is based on the Least Mean Square (LMS) Adaptive Noise Cancellation technique and the other method is based on the Wavelet Transformation technique. A prototype hardware was designed and developed to acquire the raw ECG signal containing the mother and fetal ECG and the signal processing techniques were used to eliminate the noises and extract the fetal ECG and the fetal Heart Rate Variability was studied. Both the methods were evaluated with the signal acquired from a fetal ECG simulator, from the Physionet database and that acquired from the subject. Both the methods are evaluated by finding heart rate and its variability, amplitude spectrum and mean value of extracted fetal ECG. Also the accuracy, sensitivity and positive predictive value are also determined for fetal QRS detection technique. In this paper adaptive filtering technique uses Sign-sign LMS algorithm and wavelet techniques with Daubechies wavelet, employed along with de noising techniques for the extraction of fetal Electrocardiogram.Both the methods are having good sensitivity and accuracy. In adaptive method the sensitivity is 96.83, accuracy 89.87, wavelet sensitivity is 95.97 and accuracy is 88.5. Additionally, time domain parameters from the plot of heart rate variability of mother and fetus are analyzed.
Ye, Jay J
2016-01-01
Different methods have been described for data extraction from pathology reports with varying degrees of success. Here a technique for directly extracting data from relational database is described. Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, R scripting language extended with RODBC package was used to query the pathology information system database. Reports containing melanoma of skin synoptic report in the past 4 and a half years were retrieved and individual data elements were extracted. Using the retrieved list of the cases, the database was queried a second time to retrieve/extract the lymph node staging information in the subsequent reports from the same patients. 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When the new pN staging information was present in the subsequent reports, 82% (77/94) was precisely retrieved (pN0, pN1, pN2 and pN3). Additional 15% (14/94) was retrieved with certain ambiguity (positive or knowing there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis. R extended with RODBC package is a simple and versatile approach well-suited for the above tasks. The success or failure of the retrieval and extraction depended largely on whether the reports were formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adopted for other pathology information systems that use relational database for data management.
1979-12-01
required of the Army aviator. The successful accomplishment of many of these activities depends upon the aviator’s ability to extract information from maps...Cruise NOE VBI Determine Position VB2 Crew Coordination (Topographic) VB3 Radio Communication VI . TERM4INATION C. Post-Flight VIC1 Debriefing 11LA 1I...NOE FUNCTION: VBI DETERMINE POSITION INFORMATION REQUIREMENT SPECIFICS SOURCE COMMENTS See Function IIIAl ! FUNCTION: VB2 CREW COORDINATION
Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming
2015-01-01
Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in the OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
Computer-assisted techniques to evaluate fringe patterns
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
Agarwalla, Swapna; Sarma, Kandarpa Kumar
2016-06-01
Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With assimilation of emerging concepts like big data and Internet of Things (IoT) as extended elements of HCI, ASR techniques are found to be passing through a paradigm shift. Oflate, learning based techniques have started to receive greater attention from research communities related to ASR owing to the fact that former possess natural ability to mimic biological behavior and that way aids ASR modeling and processing. The current learning based ASR techniques are found to be evolving further with incorporation of big data, IoT like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for extraction of relevant samples from big data space and apply them for ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising of the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms using raw speech, extracted features and frequency domain forms are considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise of both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML based sentence extraction techniques and the composite feature set used with RNN as classifier outperform all other approaches. By using ANN in FF form as feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN based sample and feature extraction techniques are found to be efficient enough to enable application of ML techniques in big data aspects as part of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang
2016-04-01
Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which leads to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although many progresses of sparse representation theory have been made in feature extraction of fault information, the theory also confronts inevitable performance degradation due to the fact that relatively weak fault information has not sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithm framework has been proposed in this paper, which goes beyond simple sparsity by introducing more intrinsic structures of feature information. This work adequately exploits the underlying prior information that feature information exhibits nonlocal self-similarity through clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem, which could be solved through block coordinate descent method (BCD), is formulated. Additionally, the adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule of regularization parameter and computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiment and its superiority with respect to the state-of-the-art method in the field is demonstrated through the vibration signals of experimental rig of aircraft engine bearings.
Biometric Authentication for Gender Classification Techniques: A Review
NASA Astrophysics Data System (ADS)
Mathivanan, P.; Poornima, K.
2017-12-01
One of the challenging biometric authentication applications is gender identification and age classification, which captures gait from far distance and analyze physical information of the subject such as gender, race and emotional state of the subject. It is found that most of the gender identification techniques have focused only with frontal pose of different human subject, image size and type of database used in the process. The study also classifies different feature extraction process such as, Principal Component Analysis (PCA) and Local Directional Pattern (LDP) that are used to extract the authentication features of a person. This paper aims to analyze different gender classification techniques that help in evaluating strength and weakness of existing gender identification algorithm. Therefore, it helps in developing a novel gender classification algorithm with less computation cost and more accuracy. In this paper, an overview and classification of different gender identification techniques are first presented and it is compared with other existing human identification system by means of their performance.
NASA Astrophysics Data System (ADS)
Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen
2011-12-01
Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique enables one to assign a class to the most abundant mineral in each pixel with high accuracy. The technique is based on the derivation of information from reflectance spectra of the image. This can be done through extraction of spectral absorption features of any minerals from their respected laboratory-measured reflectance spectra, and comparing it with those extracted from the pixels in the image. The CDAF technique has been executed on the AVIRIS image where the results show an overall accuracy of better than 96%.
Extending the spectrum of DNA sequences retrieved from ancient bones and teeth
Glocke, Isabelle; Meyer, Matthias
2017-01-01
The number of DNA fragments surviving in ancient bones and teeth is known to decrease with fragment length. Recent genetic analyses of Middle Pleistocene remains have shown that the recovery of extremely short fragments can prove critical for successful retrieval of sequence information from particularly degraded ancient biological material. Current sample preparation techniques, however, are not optimized to recover DNA sequences from fragments shorter than ∼35 base pairs (bp). Here, we show that much shorter DNA fragments are present in ancient skeletal remains but lost during DNA extraction. We present a refined silica-based DNA extraction method that not only enables efficient recovery of molecules as short as 25 bp but also doubles the yield of sequences from longer fragments due to improved recovery of molecules with single-strand breaks. Furthermore, we present strategies for monitoring inefficiencies in library preparation that may result from co-extraction of inhibitory substances during DNA extraction. The combination of DNA extraction and library preparation techniques described here substantially increases the yield of DNA sequences from ancient remains and provides access to a yet unexploited source of highly degraded DNA fragments. Our work may thus open the door for genetic analyses on even older material. PMID:28408382
Isolation and Characterization of Phosphatidyl Choline from Spinach Leaves.
ERIC Educational Resources Information Center
Devor, Kenneth A.
1979-01-01
This inexpensive but informative experiment for undergraduate biochemistry students involves isolating phosphatidyl choline from spinach leaves. Emphasis is on introducing students to techniques of lipid extraction, separation of lipids, identification using thin layer chromatography, and identification of fatty acids. Three periods of three hours…
Digital image processing for photo-reconnaissance applications
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1972-01-01
Digital image-processing techniques developed for processing pictures from NASA space vehicles are analyzed in terms of enhancement, quantitative restoration, and information extraction. Digital filtering, and the action of a high frequency filter in the real and Fourier domain are discussed along with color and brightness.
Proceedings of the Third Airborne Imaging Spectrometer Data Analysis Workshop
NASA Technical Reports Server (NTRS)
Vane, Gregg (Editor)
1987-01-01
Summaries of 17 papers presented at the workshop are published. After an overview of the imaging spectrometer program, time was spent discussing AIS calibration, performance, information extraction techniques, and the application of high spectral resolution imagery to problems of geology and botany.
Kittell, David E; Mares, Jesus O; Son, Steven F
2015-04-01
Two time-frequency analysis methods based on the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used to determine time-resolved detonation velocities with microwave interferometry (MI). The results were directly compared to well-established analysis techniques consisting of a peak-picking routine as well as a phase unwrapping method (i.e., quadrature analysis). The comparison is conducted on experimental data consisting of transient detonation phenomena observed in triaminotrinitrobenzene and ammonium nitrate-urea explosives, representing high and low quality MI signals, respectively. Time-frequency analysis proved much more capable of extracting useful and highly resolved velocity information from low quality signals than the phase unwrapping and peak-picking methods. Additionally, control of the time-frequency methods is mainly constrained to a single parameter which allows for a highly unbiased analysis method to extract velocity information. In contrast, the phase unwrapping technique introduces user based variability while the peak-picking technique does not achieve a highly resolved velocity result. Both STFT and CWT methods are proposed as improved additions to the analysis methods applied to MI detonation experiments, and may be useful in similar applications.
Chen, Hongyu; Martin, Bronwen; Daimon, Caitlin M; Maudsley, Stuart
2013-01-01
Text mining is rapidly becoming an essential technique for the annotation and analysis of large biological data sets. Biomedical literature currently increases at a rate of several thousand papers per week, making automated information retrieval methods the only feasible method of managing this expanding corpus. With the increasing prevalence of open-access journals and constant growth of publicly-available repositories of biomedical literature, literature mining has become much more effective with respect to the extraction of biomedically-relevant data. In recent years, text mining of popular databases such as MEDLINE has evolved from basic term-searches to more sophisticated natural language processing techniques, indexing and retrieval methods, structural analysis and integration of literature with associated metadata. In this review, we will focus on Latent Semantic Indexing (LSI), a computational linguistics technique increasingly used for a variety of biological purposes. It is noted for its ability to consistently outperform benchmark Boolean text searches and co-occurrence models at information retrieval and its power to extract indirect relationships within a data set. LSI has been used successfully to formulate new hypotheses, generate novel connections from existing data, and validate empirical data.
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videog- raphy for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
Profiling of Sugar Nucleotides.
Rejzek, Martin; Hill, Lionel; Hems, Edward S; Kuhaudomlarp, Sakonwan; Wagstaff, Ben A; Field, Robert A
2017-01-01
Sugar nucleotides are essential building blocks for the glycobiology of all living organisms. Detailed information on the types of sugar nucleotides present in a particular cell and how they change as a function of metabolic, developmental, or disease status is vital. The extraction, identification, and quantification of sugar nucleotides in a given sample present formidable challenges. In this chapter, currently used techniques for sugar nucleotide extraction from cells, separation from complex biological matrices, and detection by optical and mass spectrometry methods are discussed. © 2017 Elsevier Inc. All rights reserved.
E&V (Evaluation and Validation) Reference Manual, Version 1.1
1988-10-20
E&V. This model will allow the user to arrive at E&V techniques through many different paths, and provides a means to extract useful information...electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski , AFWAL/AAAF, Wright Patterson AFB, OH 45433-6543. ES-2 E&V...1, 1-3 illustrate the types of infor- mation to be extracted from each document. Chapter 2 provides a more detailed description of the structure and
Integrated Computational System for Aerodynamic Steering and Visualization
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1999-01-01
In February of 1994, an effort from the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center with McDonnel Douglas Aerospace Company and Stanford University was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, Intelligent Systems, and Automated Flow Feature Recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate CFD design cycle. The current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will (1) briefly review the technologies developed during 1995-1997 (2) describe current technologies in the area of comparison techniques, (4) describe the theory of our new method researched during the grant year (5) summarize a few of the results and finally (6) discuss work within the last 6 months that are direct extensions from the grant.
NASA Astrophysics Data System (ADS)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using moving array lenslet technique (MALT) in pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously for N times in a pitch to get N sets of elemental images. Computational integral imaging reconstruction method for MALT is taken to obtain the slice images of the 3D scene, and the sum modulus (SMD) blur metric is taken on these slice images to achieve the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A
2018-01-01
In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exist only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement in F1-score of up to 26 and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.
Edward, Joseph; Aziz, Mubarak A; Madhu Usha, Arjun; Narayanan, Jyothi K
2017-12-01
Extractions are routine procedures in dental surgery. Traditional extraction techniques use a combination of severing the periodontal attachment, luxation with an elevator, and removal with forceps. A new technique of extraction of maxillary third molar is introduced in this study-Joedds technique, which is compared with the conventional technique. One hundred people were included in the study, the people were divided into two groups by means of simple random sampling. In one group conventional technique of maxillary third molar extraction was used and on second Joedds technique was used. Statistical analysis was carried out with student's t test. Analysis of 100 patients based on parameters showed that the novel joedds technique had minimal trauma to surrounding tissues, less tuberosity and root fractures and the time taken for extraction was <2 min while compared to other group of patients. This novel technique has proved to be better than conventional third molar extraction technique, with minimal complications. If Proper selection of cases and right technique are used.
High-contrast x-ray microtomography in dental research
NASA Astrophysics Data System (ADS)
Davis, Graham; Mills, David
2017-09-01
X-ray microtomography (XMT) is a well-established technique in dental research. The technique has been used extensively to explore the complex morphology of the root canal system, and to qualitatively and quantitatively evaluate root canal instrumentation and filling efficacy in extracted teeth; enabling different techniques to be compared. Densitometric information can be used to identify and map demineralized tissue resulting from tooth decay (caries) and, in extracted teeth, the method can be used to evaluate different methods of excavation. More recently, high contrast XMT is being used to investigate the relationship between external insults to teeth and the pulpal reaction. When such insults occur, fluid may flow through dentinal tubules as a result of cracking or porosity in enamel. Over time, there is an increase in mineralization along the paths of the tubules from the pulp to the damaged region in enamel and this can be visualized using high contrast XMT. The scanner used for this employs time-delay integration to minimize the effects of detector inhomogeneity in order to greatly increase the upper limit on signal-to-noise ratio that can be achieved with long exposure times. When enamel cracks are present in extracted teeth, the presence of these pathways indicates that the cracking occurred prior to extraction. At high contrast, growth lines are occasionally seen in deciduous teeth which may have resulted from periods of maternal illness. Various other anomalies in mineralization resulting from trauma or genetic abnormalities can also be investigated using this technique.
Secondary ionization mass spectrometry analysis in petrochronology: Chapter 7
Schmitt, Axel K.; Vazquez, Jorge A.
2017-01-01
The goal of petrochronology is to extract information about the rates and conditions at which rocks and magmas are transported through the Earth’s crust. Garnering this information from the rock record greatly benefits from integrating textural and compositional data with radiometric dating of accessory minerals. Length scales of crystal growth and diffusive transport in accessory minerals under realistic geologic conditions are typically in the range of 1–10’s of μm, and in some cases even substantially smaller, with zircon having among the lowest diffusion coefficients at a given temperature (e.g., Cherniak and Watson 2003). Intrinsic to the compartmentalization of geochemical and geochronologic information from intra-crystal domains is the requirement to determine accessory mineral compositions using techniques that sample at commensurate spatial scales so as to not convolute the geologic signals that are recorded within crystals, as may be the case with single grain or large grain fragment analysis by isotope dilution thermal ionization mass spectrometry (ID-TIMS; e.g., Schaltegger and Davies 2017, this volume; Schoene and Baxter 2017, this volume). Small crystals can also be difficult to extract by mineral separation techniques traditionally used in geochronology, which also lead to a loss of petrographic context. Secondary Ionization Mass Spectrometry, that is SIMS performed with an ion microprobe, is an analytical technique ideally suited to meet the high spatial resolution analysis requirements that are critical for petrochronology (Table 1).
A sentence sliding window approach to extract protein annotations from biomedical articles
Krallinger, Martin; Padron, Maria; Valencia, Alfonso
2005-01-01
Background Within the emerging field of text mining and statistical natural language processing (NLP) applied to biomedical articles, a broad variety of techniques have been developed during the past years. Nevertheless, there is still a great ned of comparative assessment of the performance of the proposed methods and the development of common evaluation criteria. This issue was addressed by the Critical Assessment of Text Mining Methods in Molecular Biology (BioCreative) contest. The aim of this contest was to assess the performance of text mining systems applied to biomedical texts including tools which recognize named entities such as genes and proteins, and tools which automatically extract protein annotations. Results The "sentence sliding window" approach proposed here was found to efficiently extract text fragments from full text articles containing annotations on proteins, providing the highest number of correctly predicted annotations. Moreover, the number of correct extractions of individual entities (i.e. proteins and GO terms) involved in the relationships used for the annotations was significantly higher than the correct extractions of the complete annotations (protein-function relations). Conclusion We explored the use of averaging sentence sliding windows for information extraction, especially in a context where conventional training data is unavailable. The combination of our approach with more refined statistical estimators and machine learning techniques might be a way to improve annotation extraction for future biomedical text mining applications. PMID:15960831
Acquiring geographical data with web harvesting
NASA Astrophysics Data System (ADS)
Dramowicz, K.
2016-04-01
Many websites contain very attractive and up to date geographical information. This information can be extracted, stored, analyzed and mapped using web harvesting techniques. Poorly organized data from websites are transformed with web harvesting into a more structured format, which can be stored in a database and analyzed. Almost 25% of web traffic is related to web harvesting, mostly while using search engines. This paper presents how to harvest geographic information from web documents using the free tool called the Beautiful Soup, one of the most commonly used Python libraries for pulling data from HTML and XML files. It is a relatively easy task to process one static HTML table. The more challenging task is to extract and save information from tables located in multiple and poorly organized websites. Legal and ethical aspects of web harvesting are discussed as well. The paper demonstrates two case studies. The first one shows how to extract various types of information about the Good Country Index from the multiple web pages, load it into one attribute table and map the results. The second case study shows how script tools and GIS can be used to extract information from one hundred thirty six websites about Nova Scotia wines. In a little more than three minutes a database containing one hundred and six liquor stores selling these wines is created. Then the availability and spatial distribution of various types of wines (by grape types, by wineries, and by liquor stores) are mapped and analyzed.
Wildey, R.L.
1980-01-01
An economical method of digitally extracting sea-wave spectra from synthetic-aperture radar-signal records, which can be performed routinely in real or near-real time with the reception of telemetry from Seasat satellites, would be of value to a variety of scientific disciplines. This paper explores techniques for such data extraction and concludes that the mere fact that the desired result is devoid of phase information does not, of itself, lead to a simplification in data processing because of the nature of the modulation performed on the radar pulse by the backscattering surface. -from Author
Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1999-01-01
The intent of Stanford University's SciVis group is to develop technologies that enabled comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) will describe the theory of our new method and finally (3) summarize a few of the results.
Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1999-01-01
The intent of Stanford University's SciVis group is to develop technologies that enabled comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available un- der the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching an@ vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) will describe the theory of our new method and finally (3) summarize a few of the results.
Audio feature extraction using probability distribution function
NASA Astrophysics Data System (ADS)
Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.
2015-05-01
Voice recognition has been one of the popular applications in robotic field. It is also known to be recently used for biometric and multimedia information retrieval system. This technology is attained from successive research on audio feature extraction analysis. Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which is by using only PDF as a feature extraction method itself for speech analysis purpose. Certain pre-processing techniques are performed in prior to the proposed feature extraction method. Subsequently, the PDF result values for each frame of sampled voice signals obtained from certain numbers of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individuals' voice has comparable PDF values and shapes.
An automatic rat brain extraction method based on a deformable surface model.
Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M
2013-08-15
The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic imaging processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.
Integrated Micro-Chip Amino Acid Chirality Detector for MOD
NASA Technical Reports Server (NTRS)
Glavin, D. P.; Bada, J. L.; Botta, O.; Kminek, G.; Grunthaner, F.; Mathies, R.
2001-01-01
Integration of a micro-chip capillary electrophoresis analyzer with a sublimation-based extraction technique, as used in the Mars Organic Detector (MOD), for the in-situ detection of amino acids and their enantiomers on solar system bodies. Additional information is contained in the original extended abstract.
Knowledge Discovery from Databases: An Introductory Review.
ERIC Educational Resources Information Center
Vickery, Brian
1997-01-01
Introduces new procedures being used to extract knowledge from databases and discusses rationales for developing knowledge discovery methods. Methods are described for such techniques as classification, clustering, and the detection of deviations from pre-established norms. Examines potential uses of knowledge discovery in the information field.…
Naviglio, Daniele; Formato, Andrea; Gallo, Monica
2014-09-01
The purpose of this study is to compare the extraction process for the production of China elixir starting from the same vegetable mixture, as performed by conventional maceration or a cyclically pressurized extraction process (rapid solid-liquid dynamic extraction) using the Naviglio Extractor. Dry residue was used as a marker for the kinetics of the extraction process because it was proportional to the amount of active principles extracted and, therefore, to their total concentration in the solution. UV spectra of the hydroalcoholic extracts allowed for the identification of the predominant chemical species in the extracts, while the organoleptic tests carried out on the final product provided an indication of the acceptance of the beverage and highlighted features that were not detectable by instrumental analytical techniques. In addition, a numerical simulation of the process has been performed, obtaining useful information about the timing of the process (time history) as well as its mathematical description. © 2014 Institute of Food Technologists®
Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.
2018-01-01
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673
Development of Availability and Sustainability Spares Optimization Models for Aircraft Reparables
2013-09-01
the integrated SAP ® Enterprise Resource Planning ( ERP ) information system of the RSAF. A more in-depth review of OPUS10 capabilities will be provided...Dynamic Multi-Echelon Technique for Recoverable Item Control EBO: Expected Backorder EOQ: Economic Order Quantity ERP : Enterprise Resource...particular, the propulsion sub-system was expanded to include SSRUs. Spares information are extracted from the RSAF ERP system and include: 22
Advances in Spectral-Spatial Classification of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.
2012-01-01
Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral–spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.
Identifying key hospital service quality factors in online health communities.
Jung, Yuchul; Hur, Cinyoung; Jung, Dain; Kim, Minki
2015-04-07
The volume of health-related user-created content, especially hospital-related questions and answers in online health communities, has rapidly increased. Patients and caregivers participate in online community activities to share their experiences, exchange information, and ask about recommended or discredited hospitals. However, there is little research on how to identify hospital service quality automatically from the online communities. In the past, in-depth analysis of hospitals has used random sampling surveys. However, such surveys are becoming impractical owing to the rapidly increasing volume of online data and the diverse analysis requirements of related stakeholders. As a solution for utilizing large-scale health-related information, we propose a novel approach to identify hospital service quality factors and overtime trends automatically from online health communities, especially hospital-related questions and answers. We defined social media-based key quality factors for hospitals. In addition, we developed text mining techniques to detect such factors that frequently occur in online health communities. After detecting these factors that represent qualitative aspects of hospitals, we applied a sentiment analysis to recognize the types of recommendations in messages posted within online health communities. Korea's two biggest online portals were used to test the effectiveness of detection of social media-based key quality factors for hospitals. To evaluate the proposed text mining techniques, we performed manual evaluations on the extraction and classification results, such as hospital name, service quality factors, and recommendation types using a random sample of messages (ie, 5.44% (9450/173,748) of the total messages). Service quality factor detection and hospital name extraction achieved average F1 scores of 91% and 78%, respectively. In terms of recommendation classification, performance (ie, precision) is 78% on average. Extraction and classification performance still has room for improvement, but the extraction results are applicable to more detailed analysis. Further analysis of the extracted information reveals that there are differences in the details of social media-based key quality factors for hospitals according to the regions in Korea, and the patterns of change seem to accurately reflect social events (eg, influenza epidemics). These findings could be used to provide timely information to caregivers, hospital officials, and medical officials for health care policies.
PCA Tomography: how to extract information from data cubes
NASA Astrophysics Data System (ADS)
Steiner, J. E.; Menezes, R. B.; Ricci, T. V.; Oliveira, A. S.
2009-05-01
Astronomy has evolved almost exclusively by the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes in which one combines both techniques simultaneously, producing images with spectral resolution. To extract information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method of analysis of data cube (data from single field observations, containing two spatial and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in the form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) to eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this information is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvector's orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purpose only, to the central region of the low ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina). E-mail: steiner@astro.iag.usp.br
A comparison of machine learning techniques for detection of drug target articles.
Danger, Roxana; Segura-Bedmar, Isabel; Martínez, Paloma; Rosso, Paolo
2010-12-01
Important progress in treating diseases has been possible thanks to the identification of drug targets. Drug targets are the molecular structures whose abnormal activity, associated to a disease, can be modified by drugs, improving the health of patients. Pharmaceutical industry needs to give priority to their identification and validation in order to reduce the long and costly drug development times. In the last two decades, our knowledge about drugs, their mechanisms of action and drug targets has rapidly increased. Nevertheless, most of this knowledge is hidden in millions of medical articles and textbooks. Extracting knowledge from this large amount of unstructured information is a laborious job, even for human experts. Drug target articles identification, a crucial first step toward the automatic extraction of information from texts, constitutes the aim of this paper. A comparison of several machine learning techniques has been performed in order to obtain a satisfactory classifier for detecting drug target articles using semantic information from biomedical resources such as the Unified Medical Language System. The best result has been achieved by a Fuzzy Lattice Reasoning classifier, which reaches 98% of ROC area measure. Copyright © 2010 Elsevier Inc. All rights reserved.
Simultaneous extraction of proteins and metabolites from cells in culture
Sapcariu, Sean C.; Kanashova, Tamara; Weindl, Daniel; Ghelfi, Jenny; Dittmar, Gunnar; Hiller, Karsten
2014-01-01
Proper sample preparation is an integral part of all omics approaches, and can drastically impact the results of a wide number of analyses. As metabolomics and proteomics research approaches often yield complementary information, it is desirable to have a sample preparation procedure which can yield information for both types of analyses from the same cell population. This protocol explains a method for the separation and isolation of metabolites and proteins from the same biological sample, in order for downstream use in metabolomics and proteomics analyses simultaneously. In this way, two different levels of biological regulation can be studied in a single sample, minimizing the variance that would result from multiple experiments. This protocol can be used with both adherent and suspension cell cultures, and the extraction of metabolites from cellular medium is also detailed, so that cellular uptake and secretion of metabolites can be quantified. Advantages of this technique includes:1.Inexpensive and quick to perform; this method does not require any kits.2.Can be used on any cells in culture, including cell lines and primary cells extracted from living organisms.3.A wide variety of different analysis techniques can be used, adding additional value to metabolomics data analyzed from a sample; this is of high value in experimental systems biology. PMID:26150938
Torres-Pérez, Mónica I; Jiménez-Velez, Braulio D; Mansilla-Rivera, Imar; Rodríguez-Sierra, Carlos J
2005-03-01
The effect that three extraction techniques (e.g., Soxhlet, ultrasound and microwave-assisted extraction) have on the toxicity, as measured by submitochondrial particle (SMP) and Microtox assays, of organic extracts was compared from three sources of airborne particulate matter (APM). The extraction technique influenced the toxicity response of APM extracts and it was dependent on the bioassay method, and APM sample source. APM extracts from microwave-assisted extraction (MAE) were similar or more toxic than the conventional extraction techniques of Soxhlet and ultrasound, thus, providing an alternate extraction method. The microwave extraction technique has the advantage of using less solvent volume, less extraction time, and the capacity to simultaneously extract twelve samples. The ordering of APM toxicity was generally urban dust > diesel dust > PM10 (particles with diameter < 10 microm), thus, reflecting different chemical composition of the samples. This study is the first to report the suitability of two standard in-vitro bioassays for the future toxicological characterization of APM collected from Puerto Rico, with the SMP generally showing better sensitivity to the well-known Microtox bioassay.
NASA Astrophysics Data System (ADS)
Lucciani, Roberto; Laneve, Giovanni; Jahjah, Munzer; Mito, Collins
2016-08-01
The crop growth stage represents essential information for agricultural areas management. In this study we investigate the feasibility of a tool based on remotely sensed satellite (Landsat 8) imagery, capable of automatically classify crop fields and how much resolution enhancement based on pan-sharpening techniques and phenological information extraction, useful to create decision rules that allow to identify semantic class to assign to an object, can effectively support the classification process. Moreover we investigate the opportunity to extract vegetation health status information from remotely sensed assessment of the equivalent water thickness (EWT). Our case study is the Kenya's Great Rift valley, in this area a ground truth campaign was conducted during August 2015 in order to collect crop fields GPS measurements, leaf area index (LAI) and chlorophyll samples.
Class Extraction and Classification Accuracy in Latent Class Models
ERIC Educational Resources Information Center
Wu, Qiong
2009-01-01
Despite the increasing popularity of latent class models (LCM) in educational research, methodological studies have not yet accumulated much information on the appropriate application of this modeling technique, especially with regard to requirement on sample size and number of indicators. This dissertation study represented an initial attempt to…
ESP Needs Washback and the Fine Tuning of Driving Instruction
ERIC Educational Resources Information Center
Freiermuth, Mark R.
2007-01-01
Workplace needs are often difficult for English for Specific Purposes (ESP) teachers to assess due to a variety of obstacles that can restrict opportunities to analyze the existing needs. Nevertheless, the workers' needs may be recognized by employing techniques aimed at extracting information from the workers themselves. Japanese university…
Semantic Preview Benefit during Reading
ERIC Educational Resources Information Center
Hohenstein, Sven; Kliegl, Reinhold
2014-01-01
Word features in parafoveal vision influence eye movements during reading. The question of whether readers extract semantic information from parafoveal words was studied in 3 experiments by using a gaze-contingent display change technique. Subjects read German sentences containing 1 of several preview words that were replaced by a target word…
Critical Evaluation of Soil Pore Water Extraction Methods on a Natural Soil
NASA Astrophysics Data System (ADS)
Orlowski, Natalie; Pratt, Dyan; Breuer, Lutz; McDonnell, Jeffrey
2017-04-01
Soil pore water extraction is an important component in ecohydrological studies for the measurement of δ2H and δ18O. The effect of pore water extraction technique on resultant isotopic signature is poorly understood. Here we present results of an intercomparison of commonly applied lab-based soil water extraction techniques on a natural soil: high pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and two types of cryogenic extraction systems. We applied these extraction methods to a natural summer-dry (gravimetric water contents ranging from 8% to 15%) glacio-lacustrine, moderately fine textured clayey soil; excavated in 10 cm sampling increments to a depth of 1 meter. Isotope results were analyzed via OA-ICOS and compared for each extraction technique that produced liquid water. From our previous intercomparison study among the same extraction techniques but with standard soils, we discovered that extraction methods are not comparable. We therefore tested the null hypothesis that all extraction techniques would be able to replicate the natural evaporation front in a comparable manner occurring in a summer-dry soil. Our results showed that the extraction technique utilized had a significant effect on the soil water isotopic composition. High pressure mechanical squeezing and vapor equilibration techniques produced similar results with similarly sloped evaporation lines. Due to the nature of soil properties and dryness, centrifugation was unsuccessful in obtaining pore water for isotopic analysis. Cryogenic extraction on both tested techniques produced similar results to each other on a similar sloping evaporation line, but dissimilar with depth.
Feature extraction via KPCA for classification of gait patterns.
Wu, Jianning; Wang, Jue; Liu, Li
2007-06-01
Automated recognition of gait pattern change is important in medical diagnostics as well as in the early identification of at-risk gait in the elderly. We evaluated the use of Kernel-based Principal Component Analysis (KPCA) to extract more gait features (i.e., to obtain more significant amounts of information about human movement) and thus to improve the classification of gait patterns. 3D gait data of 24 young and 24 elderly participants were acquired using an OPTOTRAK 3020 motion analysis system during normal walking, and a total of 36 gait spatio-temporal and kinematic variables were extracted from the recorded data. KPCA was used first for nonlinear feature extraction to then evaluate its effect on a subsequent classification in combination with learning algorithms such as support vector machines (SVMs). Cross-validation test results indicated that the proposed technique could allow spreading the information about the gait's kinematic structure into more nonlinear principal components, thus providing additional discriminatory information for the improvement of gait classification performance. The feature extraction ability of KPCA was affected slightly with different kernel functions as polynomial and radial basis function. The combination of KPCA and SVM could identify young-elderly gait patterns with 91% accuracy, resulting in a markedly improved performance compared to the combination of PCA and SVM. These results suggest that nonlinear feature extraction by KPCA improves the classification of young-elderly gait patterns, and holds considerable potential for future applications in direct dimensionality reduction and interpretation of multiple gait signals.
The design and implementation of web mining in web sites security
NASA Astrophysics Data System (ADS)
Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li
2003-06-01
The backdoor or information leak of Web servers can be detected by using Web Mining techniques on some abnormal Web log and Web application log data. The security of Web servers can be enhanced and the damage of illegal access can be avoided. Firstly, the system for discovering the patterns of information leakages in CGI scripts from Web log data was proposed. Secondly, those patterns for system administrators to modify their codes and enhance their Web site security were provided. The following aspects were described: one is to combine web application log with web log to extract more information, so web data mining could be used to mine web log for discovering the information that firewall and Information Detection System cannot find. Another approach is to propose an operation module of web site to enhance Web site security. In cluster server session, Density-Based Clustering technique is used to reduce resource cost and obtain better efficiency.
[Advance in interferogram data processing technique].
Jing, Juan-Juan; Xiangli, Bin; Lü, Qun-Bo; Huang, Min; Zhou, Jin-Song
2011-04-01
Fourier transform spectrometry is a type of novel information obtaining technology, which integrated the functions of imaging and spectra, but the data that the instrument acquired is the interference data of the target, which is an intermediate data and couldn't be used directly, so data processing must be adopted for the successful application of the interferometric data In the present paper, data processing techniques are divided into two classes: general-purpose and special-type. First, the advance in universal interferometric data processing technique is introduced, then the special-type interferometric data extracting method and data processing technique is illustrated according to the classification of Fourier transform spectroscopy. Finally, the trends of interferogram data processing technique are discussed.
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from the simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image pricessing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
Satellite SAR interferometric techniques applied to emergency mapping
NASA Astrophysics Data System (ADS)
Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene
2017-04-01
This paper aim to investigate the capabilities of the currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed exploiting the Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency Mapping can be defined as "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness emergency management and immediate crisis information for response by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for the emergency management activities. Depending on the considered Emergency Management phase, it may be distinguished between rapid mapping, i.e. fast provision of geospatial data regarding the area affected for the immediate emergency response, and monitoring mapping, i.e. detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for the specific rapid and monitoring mapping application, five main factors have been taken into account: crisis information extracted, input data required, processing time and expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfills most of the immediate response requirements. The main limiting factor of interferometry is the availability of suitable SAR acquisition immediately after the event (e.g. Sentinel-1 mission characterized by 6-day revisiting time may not always satisfy the immediate emergency request). PSI and SBAS techniques are suitable to produce monitoring maps for risk prevention and mitigation purposes. Nevertheless, multi-temporal techniques require large SAR temporal datasets, i.e. 20 and more images. Being the Sentinel-1 missions operational only since April 2014, multi-mission SAR datasets should be therefore exploited to carry out historical analysis.
Machine Reading for Extraction of Bacteria and Habitat Taxonomies
Kordjamshidi, Parisa; Massa, Wouter; Provoost, Thomas; Moens, Marie-Francine
2015-01-01
There is a vast amount of scientific literature available from various resources such as the internet. Automating the extraction of knowledge from these resources is very helpful for biologists to easily access this information. This paper presents a system to extract the bacteria and their habitats, as well as the relations between them. We investigate to what extent current techniques are suited for this task and test a variety of models in this regard. We detect entities in a biological text and map the habitats into a given taxonomy. Our model uses a linear chain Conditional Random Field (CRF). For the prediction of relations between the entities, a model based on logistic regression is built. Designing a system upon these techniques, we explore several improvements for both the generation and selection of good candidates. One contribution to this lies in the extended exibility of our ontology mapper that uses an advanced boundary detection and assigns the taxonomy elements to the detected habitats. Furthermore, we discover value in the combination of several distinct candidate generation rules. Using these techniques, we show results that are significantly improving upon the state of art for the BioNLP Bacteria Biotopes task. PMID:27077141
A mobile unit for memory retrieval in daily life based on image and sensor processing
NASA Astrophysics Data System (ADS)
Takesumi, Ryuji; Ueda, Yasuhiro; Nakanishi, Hidenobu; Nakamura, Atsuyoshi; Kakimori, Nobuaki
2003-10-01
We developed a Mobile Unit which purpose is to support memory retrieval of daily life. In this paper, we describe the two characteristic factors of this unit. (1)The behavior classification with an acceleration sensor. (2)Extracting the difference of environment with image processing technology. In (1), By analyzing power and frequency of an acceleration sensor which turns to gravity direction, the one's activities can be classified using some techniques to walk, stay, and so on. In (2), By extracting the difference between the beginning scene and the ending scene of a stay scene with image processing, the result which is done by user is recognized as the difference of environment. Using those 2 techniques, specific scenes of daily life can be extracted, and important information at the change of scenes can be realized to record. Especially we describe the effect to support retrieving important things, such as a thing left behind and a state of working halfway.
Methods from Information Extraction from LIDAR Intensity Data and Multispectral LIDAR Technology
NASA Astrophysics Data System (ADS)
Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.
2018-04-01
LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, which is implemented in several platforms On the other hand, the exploitation of the geometric information has been coupled by the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working on different wavelength, thus able to provide additional information for classification of surfaces and objects. Several applications ofmonochromatic and multi-spectral LiDAR data have been already developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point cloud quality has been also developed. The paper would like to give an overview on the state-of-the-art of these techniques, and to present the modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on `Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars to do research on this field. This service is presented and data sets delivered so far as are described.
Image Hashes as Templates for Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janik, Tadeusz; Jarman, Kenneth D.; Robinson, Sean M.
2012-07-17
Imaging systems can provide measurements that confidently assess characteristics of nuclear weapons and dismantled weapon components, and such assessment will be needed in future verification for arms control. Yet imaging is often viewed as too intrusive, raising concern about the ability to protect sensitive information. In particular, the prospect of using image-based templates for verifying the presence or absence of a warhead, or of the declared configuration of fissile material in storage, may be rejected out-of-hand as being too vulnerable to violation of information barrier (IB) principles. Development of a rigorous approach for generating and comparing reduced-information templates from images,more » and assessing the security, sensitivity, and robustness of verification using such templates, are needed to address these concerns. We discuss our efforts to develop such a rigorous approach based on a combination of image-feature extraction and encryption-utilizing hash functions to confirm proffered declarations, providing strong classified data security while maintaining high confidence for verification. The proposed work is focused on developing secure, robust, tamper-sensitive and automatic techniques that may enable the comparison of non-sensitive hashed image data outside an IB. It is rooted in research on so-called perceptual hash functions for image comparison, at the interface of signal/image processing, pattern recognition, cryptography, and information theory. Such perceptual or robust image hashing—which, strictly speaking, is not truly cryptographic hashing—has extensive application in content authentication and information retrieval, database search, and security assurance. Applying and extending the principles of perceptual hashing to imaging for arms control, we propose techniques that are sensitive to altering, forging and tampering of the imaged object yet robust and tolerant to content-preserving image distortions and noise. Ensuring that the information contained in the hashed image data (available out-of-IB) cannot be used to extract sensitive information about the imaged object is of primary concern. Thus the techniques are characterized by high unpredictability to guarantee security. We will present an assessment of the performance of our techniques with respect to security, sensitivity and robustness on the basis of a methodical and mathematically precise framework.« less
Estimation of the Scatterer Distribution of the Cirrhotic Liver using Ultrasonic Image
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Hachiya, Hiroyuki
1998-05-01
In the B-mode image of the liver obtained by an ultrasonic imaging system, the speckled pattern changes with the progression of the disease such as liver cirrhosis.In this paper we present the statistical characteristics of the echo envelope of the liver, and the technique to extract information of the scatterer distribution from the normal and cirrhotic liver images using constant false alarm rate (CFAR) processing.We analyze the relationship between the extracted scatterer distribution and the stage of liver cirrhosis. The ratio of the area in which the amplitude of the processing signal is more than the threshold to the entire processed image area is related quantitatively to the stage of liver cirrhosis.It is found that the proposed technique is valid for the quantitative diagnosis of liver cirrhosis.
Vasudevan, Srivathsan; Chen, George C K; Lin, Zhiping; Ng, Beng Koon
2015-05-10
Photothermal microscopy (PTM), a noninvasive pump-probe high-resolution microscopy, has been applied as a bioimaging tool in many biomedical studies. PTM utilizes a conventional phase contrast microscope to obtain highly resolved photothermal images. However, phase information cannot be extracted from these photothermal images, as they are not quantitative. Moreover, the problem of halos inherent in conventional phase contrast microscopy needs to be tackled. Hence, a digital holographic photothermal microscopy technique is proposed as a solution to obtain quantitative phase images. The proposed technique is demonstrated by extracting phase values of red blood cells from their photothermal images. These phase values can potentially be used to determine the temperature distribution of the photothermal images, which is an important study in live cell monitoring applications.
Botsis, Taxiarchis; Foster, Matthew; Arya, Nina; Kreimeyer, Kory; Pandey, Abhishek; Arya, Deepa
2017-04-26
To evaluate the feasibility of automated dose and adverse event information retrieval in supporting the identification of safety patterns. We extracted all rabbit Anti-Thymocyte Globulin (rATG) reports submitted to the United States Food and Drug Administration Adverse Event Reporting System (FAERS) from the product's initial licensure in April 16, 1984 through February 8, 2016. We processed the narratives using the Medication Extraction (MedEx) and the Event-based Text-mining of Health Electronic Records (ETHER) systems and retrieved the appropriate medication, clinical, and temporal information. When necessary, the extracted information was manually curated. This process resulted in a high quality dataset that was analyzed with the Pattern-based and Advanced Network Analyzer for Clinical Evaluation and Assessment (PANACEA) to explore the association of rATG dosing with post-transplant lymphoproliferative disorder (PTLD). Although manual curation was necessary to improve the data quality, MedEx and ETHER supported the extraction of the appropriate information. We created a final dataset of 1,380 cases with complete information for rATG dosing and date of administration. Analysis in PANACEA found that PTLD was associated with cumulative doses of rATG >8 mg/kg, even in periods where most of the submissions to FAERS reported low doses of rATG. We demonstrated the feasibility of investigating a dose-related safety pattern for a particular product in FAERS using a set of automated tools.
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem; Li, Xufan; Sang, Xiahan; Xiao, Kai; Unocic, Raymond R; Vasudevan, Rama; Jesse, Stephen; Kalinin, Sergei V
2017-12-26
Recent advances in scanning transmission electron and scanning probe microscopies have opened exciting opportunities in probing the materials structural parameters and various functional properties in real space with angstrom-level precision. This progress has been accompanied by an exponential increase in the size and quality of data sets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from the large data sets, for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extract information from atomically resolved images including location of the atomic species and type of defects. We develop a "weakly supervised" approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of an initial training set. We further apply our approach to interpret complex atomic and defect transformation, including switching between different coordination of silicon dopants in graphene as a function of time, formation of peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of molecular "rotor". This deep learning-based approach resembles logic of a human operator, but can be scaled leading to significant shift in the way of extracting and analyzing information from raw experimental data.
Automatic extraction of building boundaries using aerial LiDAR data
NASA Astrophysics Data System (ADS)
Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian
2016-01-01
Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extractions from aerial LiDAR data. First, segmenting height information generated from LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis verification paradigm. These algorithms include constrained searching in Hough space, enhanced Hough transformation, and the sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
Semantic World Modelling and Data Management in a 4d Forest Simulation and Information System
NASA Astrophysics Data System (ADS)
Roßmann, J.; Hoppen, M.; Bücken, A.
2013-08-01
Various types of 3D simulation applications benefit from realistic forest models. They range from flight simulators for entertainment to harvester simulators for training and tree growth simulations for research and planning. Our 4D forest simulation and information system integrates the necessary methods for data extraction, modelling and management. Using modern methods of semantic world modelling, tree data can efficiently be extracted from remote sensing data. The derived forest models contain position, height, crown volume, type and diameter of each tree. This data is modelled using GML-based data models to assure compatibility and exchangeability. A flexible approach for database synchronization is used to manage the data and provide caching, persistence, a central communication hub for change distribution, and a versioning mechanism. Combining various simulation techniques and data versioning, the 4D forest simulation and information system can provide applications with "both directions" of the fourth dimension. Our paper outlines the current state, new developments, and integration of tree extraction, data modelling, and data management. It also shows several applications realized with the system.
Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets
NASA Astrophysics Data System (ADS)
Wu, Jun; Voytik-Harbin, Sherry L.; Filmer, David L.; Hoffman, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennis; Robinson, Joseph P.
2002-05-01
Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to its structure. Understanding this fibrous structure is very crucial in tissue engineering to develop the next generation of biomaterials for restoration of tissues and organs. In this paper, we integrate confocal microscopy imaging and image-processing techniques to analyze the structural properties of ECM. We describe a 2D fiber middle-line tracing algorithm and apply it via Euclidean distance maps (EDM) to extract accurate fibrous structure information, such as fiber diameter, length, orientation, and density, from single slices. Based on a 2D tracing algorithm, we extend our analysis to 3D tracing via Euclidean distance maps to extract 3D fibrous structure information. We use computer simulation to construct the 3D fibrous structure which is subsequently used to test our tracing algorithms. After further image processing, these models are then applied to a variety of ECM constructions from which results of 2D and 3D traces are statistically analyzed.
Zhang, Yin; Diao, Tianxi; Wang, Lei
2014-12-01
Designed to advance the two-way translational process between basic research and clinical practice, translational medicine has become one of the most important areas in biomedicine. The quantitative evaluation of translational medicine is valuable for the decision making of global translational medical research and funding. Using the scientometric analysis and information extraction techniques, this study quantitatively analyzed the scientific articles on translational medicine. The results showed that translational medicine had significant scientific output and impact, specific core field and institute, and outstanding academic status and benefit. While it is not considered in this study, the patent data are another important indicators that should be integrated in the relevant research in the future. © 2014 Wiley Periodicals, Inc.
Separation techniques for the clean-up of radioactive mixed waste for ICP-AES/ICP-MS analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swafford, A.M.; Keller, J.M.
1993-03-17
Two separation techniques were investigated for the clean-up of typical radioactive mixed waste samples requiring elemental analysis by Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES) or Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). These measurements frequently involve regulatory or compliance criteria which include the determination of elements on the EPA Target Analyte List (TAL). These samples usually consist of both an aqueous phase and a solid phase which is mostly an inorganic sludge. Frequently, samples taken from the waste tanks contain high levels of uranium and thorium which can cause spectral interferences in ICP-AES or ICP-MS analysis. The removal of these interferences ismore » necessary to determine the presence of the EPA TAL elements in the sample. Two clean-up methods were studied on simulated aqueous waste samples containing the EPA TAL elements. The first method studied was a classical procedure based upon liquid-liquid extraction using tri-n- octylphosphine oxide (TOPO) dissolved in cyclohexane. The second method investigated was based on more recently developed techniques using extraction chromatography; specifically the use of a commercially available Eichrom TRU[center dot]Spec[trademark] column. Literature on these two methods indicates the efficient removal of uranium and thorium from properly prepared samples and provides considerable qualitative information on the extraction behavior of many other elements. However, there is a lack of quantitative data on the extraction behavior of elements on the EPA Target Analyte List. Experimental studies on these two methods consisted of determining whether any of the analytes were extracted by these methods and the recoveries obtained. Both methods produced similar results; the EPA target analytes were only slightly or not extracted. Advantages and disadvantages of each method were evaluated and found to be comparable.« less
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data as (18)F-FDG PET is widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/ hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interests (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of the variance (as principal component analysis), on the factorization of the data (as non-negative matrix factorization) and on classical magnitudes (as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach and ii) using an ensemble of classifier and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross validation scheme. As conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao
2018-07-01
In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim for accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. Automatic template generation and matching method for PVP extraction from VHR imagery are designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.
Logan, Heather; Wolfaardt, Johan; Boulanger, Pierre; Hodgetts, Bill; Seikaly, Hadi
2013-06-19
It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Fifteen important issues were extracted from the convergent interviews. In general, the convergent interview was an effective technique in collecting information about the perception of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field.
2013-01-01
Background It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Methods Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Results Fifteen important issues were extracted from the convergent interviews. Conclusion In general, the convergent interview was an effective technique in collecting information about the perception of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field. PMID:23782771
Comparison of Two Simplification Methods for Shoreline Extraction from Digital Orthophoto Images
NASA Astrophysics Data System (ADS)
Bayram, B.; Sen, A.; Selbesoglu, M. O.; Vārna, I.; Petersons, P.; Aykut, N. O.; Seker, D. Z.
2017-11-01
The coastal ecosystems are very sensitive to external influences. Coastal resources such as sand dunes, coral reefs and mangroves has vital importance to prevent coastal erosion. Human based effects also threats the coastal areas. Therefore, the change of coastal areas should be monitored. Up-to-date, accurate shoreline information is indispensable for coastal managers and decision makers. Remote sensing and image processing techniques give a big opportunity to obtain reliable shoreline information. In the presented study, NIR bands of seven 1:5000 scaled digital orthophoto images of Riga Bay-Latvia have been used. The Object-oriented Simple Linear Clustering method has been utilized to extract shoreline of Riga Bay. Bend and Douglas-Peucker methods have been used to simplify the extracted shoreline to test the effect of both methods. Photogrammetrically digitized shoreline has been taken as reference data to compare obtained results. The accuracy assessment has been realised by Digital Shoreline Analysis tool. As a result, the achieved shoreline by the Bend method has been found closer to the extracted shoreline with Simple Linear Clustering method.
McDonald, Gene D; Storrie-Lombardi, Michael C
2006-02-01
The relative abundance of the protein amino acids has been previously investigated as a potential marker for biogenicity in meteoritic samples. However, these investigations were executed without a quantitative metric to evaluate distribution variations, and they did not account for the possibility of interdisciplinary systematic error arising from inter-laboratory differences in extraction and detection techniques. Principal component analysis (PCA), hierarchical cluster analysis (HCA), and stochastic probabilistic artificial neural networks (ANNs) were used to compare the distributions for nine protein amino acids previously reported for the Murchison carbonaceous chondrite, Mars meteorites (ALH84001, Nakhla, and EETA79001), prebiotic synthesis experiments, and terrestrial biota and sediments. These techniques allowed us (1) to identify a shift in terrestrial amino acid distributions secondary to diagenesis; (2) to detect differences in terrestrial distributions that may be systematic differences between extraction and analysis techniques in biological and geological laboratories; and (3) to determine that distributions in meteoritic samples appear more similar to prebiotic chemistry samples than they do to the terrestrial unaltered or diagenetic samples. Both diagenesis and putative interdisciplinary differences in analysis complicate interpretation of meteoritic amino acid distributions. We propose that the analysis of future samples from such diverse sources as meteoritic influx, sample return missions, and in situ exploration of Mars would be less ambiguous with adoption of standardized assay techniques, systematic inclusion of assay standards, and the use of a quantitative, probabilistic metric. We present here one such metric determined by sequential feature extraction and normalization (PCA), information-driven automated exploration of classification possibilities (HCA), and prediction of classification accuracy (ANNs).
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactionswith many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible by rendering simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistical orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationary assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract to non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
Chen, Guijie; Yuan, Qingxia; Saeeduddin, Muhammad; Ou, Shiyi; Zeng, Xiaoxiong; Ye, Hong
2016-11-20
Tea has a long history of medicinal and dietary use. Tea polysaccharide (TPS) is regarded as one of the main bioactive constituents of tea and is beneficial for health. Over the last decades, considerable efforts have been devoted to the studies on TPS: extraction, structural feature and bioactivity of TPS. However, it has been received much less attention compared with tea polyphenols. In order to provide new insight for further development of TPS in functional foods, in present review we summarize the recent literature, update the information and put forward future perspectives on TPS covering its extraction, purification, quantitative determination techniques as well as physicochemical characterization and bioactivities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Supporting the Growing Needs of the GIS Industry
NASA Technical Reports Server (NTRS)
2003-01-01
Visual Learning Systems, Inc. (VLS), of Missoula, Montana, has developed a commercial software application called Feature Analyst. Feature Analyst was conceived under a Small Business Innovation Research (SBIR) contract with NASA's Stennis Space Center, and through the Montana State University TechLink Center, an organization funded by NASA and the U.S. Department of Defense to link regional companies with Federal laboratories for joint research and technology transfer. The software provides a paradigm shift to automated feature extraction, as it utilizes spectral, spatial, temporal, and ancillary information to model the feature extraction process; presents the ability to remove clutter; incorporates advanced machine learning techniques to supply unparalleled levels of accuracy; and includes an exceedingly simple interface for feature extraction.
Discovery of Newer Therapeutic Leads for Prostate Cancer
2009-06-01
promising plant extracts and then prepare large-scale quantities of the plant extracts using supercritical fluid extraction techniques and use this...quantities of the plant extracts using supercritical fluid extraction techniques. Large scale plant collections were conducted for 14 of the top 20...material for bioassay-guided fractionation of the biologically active constituents using modern chromatography techniques. The chemical structures of
High speed digital holographic interferometry for hypersonic flow visualization
NASA Astrophysics Data System (ADS)
Hegde, G. M.; Jagdeesh, G.; Reddy, K. P. J.
2013-06-01
Optical imaging techniques have played a major role in understanding the flow dynamics of varieties of fluid flows, particularly in the study of hypersonic flows. Schlieren and shadowgraph techniques have been the flow diagnostic tools for the investigation of compressible flows since more than a century. However these techniques provide only the qualitative information about the flow field. Other optical techniques such as holographic interferometry and laser induced fluorescence (LIF) have been used extensively for extracting quantitative information about the high speed flows. In this paper we present the application of digital holographic interferometry (DHI) technique integrated with short duration hypersonic shock tunnel facility having 1 ms test time, for quantitative flow visualization. Dynamics of the flow fields in hypersonic/supersonic speeds around different test models is visualized with DHI using a high-speed digital camera (0.2 million fps). These visualization results are compared with schlieren visualization and CFD simulation results. Fringe analysis is carried out to estimate the density of the flow field.
Modeling of ETL-Processes and Processed Information in Clinical Data Warehousing.
Tute, Erik; Steiner, Jochen
2018-01-01
Literature describes a big potential for reuse of clinical patient data. A clinical data warehouse (CDWH) is a means for that. To support management and maintenance of processes extracting, transforming and loading (ETL) data into CDWHs as well as to ease reuse of metadata between regular IT-management, CDWH and secondary data users by providing a modeling approach. Expert survey and literature review to find requirements and existing modeling techniques. An ETL-modeling-technique was developed extending existing modeling techniques. Evaluation by exemplarily modeling existing ETL-process and a second expert survey. Nine experts participated in the first survey. Literature review yielded 15 included publications. Six existing modeling techniques were identified. A modeling technique extending 3LGM2 and combining it with openEHR information models was developed and evaluated. Seven experts participated in the evaluation. The developed approach can help in management and maintenance of ETL-processes and could serve as interface between regular IT-management, CDWH and secondary data users.
Cicchetti, Esmeralda; Chaintreau, Alain
2009-06-01
Accelerated solvent extraction (ASE) of vanilla beans has been optimized using ethanol as a solvent. A theoretical model is proposed to account for this multistep extraction. This allows the determination, for the first time, of the total amount of analytes initially present in the beans and thus the calculation of recoveries using ASE or any other extraction technique. As a result, ASE and Soxhlet extractions have been determined to be efficient methods, whereas recoveries are modest for maceration techniques and depend on the solvent used. Because industrial extracts are obtained by many different procedures, including maceration in various solvents, authenticating vanilla extracts using quantitative ratios between the amounts of vanilla flavor constituents appears to be unreliable. When authentication techniques based on isotopic ratios are used, ASE is a valid sample preparation technique because it does not induce isotopic fractionation.
Cyber Security: Big Data Think II Working Group Meeting
NASA Technical Reports Server (NTRS)
Hinke, Thomas; Shaw, Derek
2015-01-01
This presentation focuses on approaches that could be used by a data computation center to identify attacks and ensure malicious code and backdoors are identified if planted in system. The goal is to identify actionable security information from the mountain of data that flows into and out of an organization. The approaches are applicable to big data computational center and some must also use big data techniques to extract the actionable security information from the mountain of data that flows into and out of a data computational center. The briefing covers the detection of malicious delivery sites and techniques for reducing the mountain of data so that intrusion detection information can be useful, and not hidden in a plethora of false alerts. It also looks at the identification of possible unauthorized data exfiltration.
Biologically active extracts with kidney affections applications
NASA Astrophysics Data System (ADS)
Pascu (Neagu), Mihaela; Pascu, Daniela-Elena; Cozea, Andreea; Bunaciu, Andrei A.; Miron, Alexandra Raluca; Nechifor, Cristina Aurelia
2015-12-01
This paper is aimed to select plant materials rich in bioflavonoid compounds, made from herbs known for their application performances in the prevention and therapy of renal diseases, namely kidney stones and urinary infections (renal lithiasis, nephritis, urethritis, cystitis, etc.). This paper presents a comparative study of the medicinal plant extracts composition belonging to Ericaceae-Cranberry (fruit and leaves) - Vaccinium vitis-idaea L. and Bilberry (fruit) - Vaccinium myrtillus L. Concentrated extracts obtained from medicinal plants used in this work were analyzed from structural, morphological and compositional points of view using different techniques: chromatographic methods (HPLC), scanning electronic microscopy, infrared, and UV spectrophotometry, also by using kinetic model. Liquid chromatography was able to identify the specific compounds of the Ericaceae family, present in all three extracts, arbutosid, as well as specific components of each species, mostly from the class of polyphenols. The identification and quantitative determination of the active ingredients from these extracts can give information related to their therapeutic effects.
Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos
1997-01-01
Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed domain parsing of video has been presented in earlier work where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video in which al frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as primary index while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
Enviromentally Sound Timber Extracting Techniques for Small Tree Harvesting
Lihai Wang
1999-01-01
Due to large area disturbed and great deal of energy cost during-its operations, introducing or applying the appropriate timber extracting techniques could significantly reduce the impact of timber extraction operations to forest environment while pursuing the reasonable operation costs. Four environmentally sound timber extraction techniques for small tree harvesting...
NASA Astrophysics Data System (ADS)
Su, Zhongqing; Ye, Lin
2004-08-01
The practical utilization of elastic waves, e.g. Rayleigh-Lamb waves, in high-performance structural health monitoring techniques is somewhat impeded due to the complicated wave dispersion phenomena, the existence of multiple wave modes, the high susceptibility to diverse interferences, the bulky sampled data and the difficulty in signal interpretation. An intelligent signal processing and pattern recognition (ISPPR) approach using the wavelet transform and artificial neural network algorithms was developed; this was actualized in a signal processing package (SPP). The ISPPR technique comprehensively functions as signal filtration, data compression, characteristic extraction, information mapping and pattern recognition, capable of extracting essential yet concise features from acquired raw wave signals and further assisting in structural health evaluation. For validation, the SPP was applied to the prediction of crack growth in an alloy structural beam and construction of a damage parameter database for defect identification in CF/EP composite structures. It was clearly apparent that the elastic wave propagation-based damage assessment could be dramatically streamlined by introduction of the ISPPR technique.
Methods for spectral image analysis by exploiting spatial simplicity
Keenan, Michael R.
2010-05-25
Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
Methods for spectral image analysis by exploiting spatial simplicity
Keenan, Michael R.
2010-11-23
Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
NASA Astrophysics Data System (ADS)
Wolf, Nils; Hof, Angela
2012-10-01
Urban sprawl driven by shifts in tourism development produces new suburban landscapes of water consumption on Mediterranean coasts. Golf courses, ornamental, 'Atlantic' gardens and swimming pools are the most striking artefacts of this transformation, threatening the local water supply systems and exacerbating water scarcity. In the face of climate change, urban landscape irrigation is becoming increasingly important from a resource management point of view. This paper adopts urban remote sensing towards a targeted mapping approach using machine learning techniques and highresolution satellite imagery (WorldView-2) to generate GIS-ready information for urban water consumption studies. Swimming pools, vegetation and - as a subgroup of vegetation - turf grass are extracted as important determinants of water consumption. For image analysis, the complex nature of urban environments suggests spatial-spectral classification, i.e. the complementary use of the spectral signature and spatial descriptors. Multiscale image segmentation provides means to extract the spatial descriptors - namely object feature layers - which can be concatenated at pixel level to the spectral signature. This study assesses the value of object features using different machine learning techniques and amounts of labeled information for learning. The results indicate the benefit of the spatial-spectral approach if combined with appropriate classifiers like tree-based ensembles or support vector machines, which can handle high dimensionality. Finally, a Random Forest classifier was chosen to deliver the classified input data for the estimation of evaporative water loss and net landscape irrigation requirements.
Soil solution extraction techniques for microbial ecotoxicity testing: a comparative evaluation.
Tiensing, T; Preston, S; Strachan, N; Paton, G I
2001-02-01
The suitability of two different techniques (centrifugation and Rhizon sampler) for obtaining the interstitial pore water of soil (soil solution), integral to the ecotoxicity assessment of metal contaminated soil, were investigated by combining chemical analyses and a luminescence-based microbial biosensor. Two different techniques, centrifugation and Rhizon sampler, were used to extract the soil solution from Insch (a loamy sand) and Boyndie (a sandy loam) soils, which had been amended with different concentrations of Zn and Cd. The concentrations of dissolved organic carbon (DOC), major anions (F- , CI-, NO3, SO4(2-)) and major cations (K+, Mg2+, Ca2+) in the soil solutions varied depending on the extraction technique used. Overall, the concentrations of Zn and Cd were significantly higher in the soil solution extracted using the centrifugation technique compared with that extracted using the Rhizon sampler technique. Furthermore, the differences observed between the two extraction techniques depended on the type of soil from which the solution was being extracted. The luminescence-based biosensor Escherichia coli HB101 pUCD607 was shown to respond to the free metal concentrations in the soil solutions and showed that different toxicities were associated with each soil, depending on the technique used to extract the soil solution. This study highlights the need to characterise the type of extraction technique used to obtain the soil solution for ecotoxicity testing in order that a representative ecotoxicity assessment can be carried out.
Epileptic seizure detection in EEG signal using machine learning techniques.
Jaiswal, Abeg Kumar; Banka, Haider
2018-03-01
Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning technique have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with radial basis kernel. All the experiments have been carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification have been conducted. The classification accuracy was evaluated using tenfold cross validation. The classification results of the proposed approaches have been compared with the results of some of existing techniques proposed in the literature to establish the claim.
Automatic classification of animal vocalizations
NASA Astrophysics Data System (ADS)
Clemins, Patrick J.
2005-11-01
Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of different techniques to perform many of the aforementioned bioacoustics tasks. Melfrequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite autonoma machine, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing for bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, this framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared to results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system and the results are compared to labels assigned by experts. The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.
A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.
Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun
2017-07-01
Feature extraction of EEG signals plays a significant role in Brain-computer interface (BCI) as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performances and to reduce the time complexity. This study develops a robust feature extraction method combining the principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. We apply the correlation based variable selection method with the best first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques: multilayer perceptron neural networks (MLP), least square support vector machine (LS-SVM), and logistic regression (LR) are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing it with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained feature. The average sensitivity, specificity and classification accuracy for these two classifiers are same, which are 99.32%, 100%, and 99.66%, respectively for the BCI competition dataset IVa and 100%, 100%, and 100%, for the BCI competition dataset IVb. The results also indicate the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement in dataset IVa. The execution time results show that the proposed method has less time complexity after feature selection. The proposed feature extraction method is very effective for getting representatives information from mental states EEG signals in BCI applications and reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
Mining and Analyzing Circulation and ILL Data for Informed Collection Development
ERIC Educational Resources Information Center
Link, Forrest E.; Tosaka, Yuji; Weng, Cathy
2015-01-01
The authors investigated quantitative methods of collection use analysis employing library data that are available in ILS and ILL systems to better understand library collection use and user needs. For the purpose of the study, the authors extracted circulation and ILL records from the library's systems using data-mining techniques. By comparing…
Non-invasive assessment of the liver using imaging
NASA Astrophysics Data System (ADS)
Thorling Thompson, Camilla; Wang, Haolu; Liu, Xin; Liang, Xiaowen; Crawford, Darrell H.; Roberts, Michael S.
2016-12-01
Chronic liver disease causes 2,000 deaths in Australia per year and early diagnosis is crucial to avoid progression to cirrhosis and end stage liver disease. There is no ideal method to evaluate liver function. Blood tests and liver biopsies provide spot examinations and are unable to track changes in function quickly. Therefore better techniques are needed. Non-invasive imaging has the potential to extract increased information over a large sampling area, continuously tracking dynamic changes in liver function. This project aimed to study the ability of three imaging techniques, multiphoton and fluorescence lifetime imaging microscopy, infrared thermography and photoacoustic imaging, in measuring liver function. Collagen deposition was obvious in multiphoton and fluorescence lifetime imaging in fibrosis and cirrhosis and comparable to conventional histology. Infrared thermography revealed a significantly increased liver temperature in hepatocellular carcinoma. In addition, multiphoton and fluorescence lifetime imaging and photoacoustic imaging could both track uptake and excretion of indocyanine green in rat liver. These results prove that non-invasive imaging can extract crucial information about the liver continuously over time and has the potential to be translated into clinic in the assessment of liver disease.
NASA Astrophysics Data System (ADS)
Batubara, I.; Suparto, I. H.; Wulandari, N. S.
2017-03-01
Guava leaves contain various compounds that have biological activity such as kaempferol and quercetin as anticancer. Twelve extraction techniques were performed to obtain the best extraction technique to isolate kaempferol and quercetin from the guava leaves. Toxicity of extracts was tested against Artemia salina larvae. All extracts were toxic (LC50 value less than 1000 ppm) except extract of direct soxhletation on guava leaves, and extract of sonication and soxhletation using n-hexane. The extract with high content of total phenols and total flavonoids, low content of tannins, intense color of spot on thin layer chromatogram was selected for high performance liquid chromatography analysis. Direct sonication of guava leaves was chosen as the best extraction technique with kampferol and quercetin content of 0.02% and 2.15%, respectively. In addition to high content of kaempferol and quercetin, direct sonication was chosen due to the shortest extraction time, lesser impurities and high toxicity.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
NASA Technical Reports Server (NTRS)
Billingsley, F.
1982-01-01
Concerns are expressed about the data handling aspects of system design and about enabling technology for data handling and data analysis. The status, contributing factors, critical issues, and recommendations for investigations are listed for data handling, rectification and registration, and information extraction. Potential supports to individual P.I., research tasks, systematic data system design, and to system operation. The need for an airborne spectrometer class instrument for fundamental research in high spectral and spatial resolution is indicated. Geographic information system formatting and labelling techniques, very large scale integration, and methods for providing multitype data sets must also be developed.
Predicate Argument Structure Frames for Modeling Information in Operative Notes
Wang, Yan; Pakhomov, Serguei; Melton, Genevieve B.
2015-01-01
The rich information about surgical procedures contained in operative notes is a valuable data source for improving the clinical evidence base and clinical research. In this study, we propose a set of Predicate Argument Structure (PAS) frames for surgical action verbs to assist in the creation of an information extraction (IE) system to automatically extract details about the techniques, equipment, and operative steps from operative notes. We created PropBank style PAS frames for the 30 top surgical action verbs based on examination of randomly selected sample sentences from 3,000 Laparoscopic Cholecystectomy notes. To assess completeness of the PAS frames to represent usage of same action verbs, we evaluated the PAS frames created on sample sentences from operative notes of 6 other gastrointestinal surgical procedures. Our results showed that the PAS frames created with one type of surgery can successfully denote the usage of the same verbs in operative notes of broader surgical categories. PMID:23920664
Jonnagaddala, Jitendra; Liaw, Siaw-Teng; Ray, Pradeep; Kumar, Manish; Dai, Hong-Jie; Hsu, Chien-Yeh
2015-01-01
Heart disease is the leading cause of death worldwide. Therefore, assessing the risk of its occurrence is a crucial step in predicting serious cardiac events. Identifying heart disease risk factors and tracking their progression is a preliminary step in heart disease risk assessment. A large number of studies have reported the use of risk factor data collected prospectively. Electronic health record systems are a great resource of the required risk factor data. Unfortunately, most of the valuable information on risk factor data is buried in the form of unstructured clinical notes in electronic health records. In this study, we present an information extraction system to extract related information on heart disease risk factors from unstructured clinical notes using a hybrid approach. The hybrid approach employs both machine learning and rule-based clinical text mining techniques. The developed system achieved an overall microaveraged F-score of 0.8302.
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of internet and information systems across various domains have resulted in substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the original information contained in the multidimensional datasets play a significant role in the exploitation of complete benefit provided by them. The presence of large number of features in the high dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection technique has been commonly used to build robust machine learning models to select a subset of relevant features which projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K - Helly feature selection technique (RSKHT) which hybridize Rough Set Theory (RST) and K - Helly property of hypergraph representation had been designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using the medical datasets from the UCI repository proves the dominance of the RSKHT over other feature selection techniques with respect to the reduct size, classification accuracy and time complexity. The performance of the RSKHT had been validated using WEKA tool, which shows that RSKHT had been computationally attractive and flexible over massive datasets.
Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow
NASA Astrophysics Data System (ADS)
Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar
2018-03-01
Vision based system are widely used in the field of Intelligent Transportation System (ITS) to extract a large amount of information to analyze traffic scenes. By rapid number of vehicles on the road as well as significant increase on cameras dictated the need for traffic surveillance systems. This system can take over the burden some task was performed by human operator in traffic monitoring centre. The main technique proposed by this paper is concentrated on developing a multiple vehicle detection and segmentation focusing on monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicle extracted from heavy traffic scene by optical flow estimation alongside with blob analysis technique in order to detect the moving vehicle. Prior to segmentation, blob analysis technique will compute the area of interest region corresponding to moving vehicle which will be used to create bounding box on that particular vehicle. Experimental validation on the proposed system was performed and the algorithm is demonstrated on various set of traffic scene.
Kellogg, Joshua J; Wallace, Emily D; Graf, Tyler N; Oberlies, Nicholas H; Cech, Nadja B
2017-10-25
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. Copyright © 2017. Published by Elsevier B.V.
Song, Min; Yu, Hwanjo; Han, Wook-Shin
2011-11-24
Protein-protein interaction (PPI) extraction has been a focal point of many biomedical research and database curation tools. Both Active Learning and Semi-supervised SVMs have recently been applied to extract PPI automatically. In this paper, we explore combining the AL with the SSL to improve the performance of the PPI task. We propose a novel PPI extraction technique called PPISpotter by combining Deterministic Annealing-based SSL and an AL technique to extract protein-protein interaction. In addition, we extract a comprehensive set of features from MEDLINE records by Natural Language Processing (NLP) techniques, which further improve the SVM classifiers. In our feature selection technique, syntactic, semantic, and lexical properties of text are incorporated into feature selection that boosts the system performance significantly. By conducting experiments with three different PPI corpuses, we show that PPISpotter is superior to the other techniques incorporated into semi-supervised SVMs such as Random Sampling, Clustering, and Transductive SVMs by precision, recall, and F-measure. Our system is a novel, state-of-the-art technique for efficiently extracting protein-protein interaction pairs.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
Image Alignment for Multiple Camera High Dynamic Range Microscopy
Eastwood, Brian S.; Childs, Elisabeth C.
2012-01-01
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028
Multi-Intelligence Analytics for Next Generation Analysts (MIAGA)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Waltz, Ed
2016-05-01
Current analysts are inundated with large volumes of data from which extraction, exploitation, and indexing are required. A future need for next-generation analysts is an appropriate balance between machine analytics from raw data and the ability of the user to interact with information through automation. Many quantitative intelligence tools and techniques have been developed which are examined towards matching analyst opportunities with recent technical trends such as big data, access to information, and visualization. The concepts and techniques summarized are derived from discussions with real analysts, documented trends of technical developments, and methods to engage future analysts with multiintelligence services. For example, qualitative techniques should be matched against physical, cognitive, and contextual quantitative analytics for intelligence reporting. Future trends include enabling knowledge search, collaborative situational sharing, and agile support for empirical decision-making and analytical reasoning.
EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD
Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...
Ultrahigh pressure extraction of bioactive compounds from plants-A review.
Xi, Jun
2017-04-13
Extraction of bioactive compounds from plants is one of the most important research areas for pharmaceutical and food industries. Conventional extraction techniques are usually associated with longer extraction times, lower yields, more organic solvent consumption, and poor extraction efficiency. A novel extraction technique, ultrahigh pressure extraction, has been developed for the extraction of bioactive compounds from plants, in order to shorten the extraction time, decrease the solvent consumption, increase the extraction yields, and enhance the quality of extracts. The mild processing temperature of ultrahigh pressure extraction may lead to an enhanced extraction of thermolabile bioactive ingredients. A critical review is conducted to introduce the different aspects of ultrahigh pressure extraction of plants bioactive compounds, including principles and mechanisms, the important parameters influencing its performance, comparison of ultrahigh pressure extraction with other extraction techniques, advantages, and disadvantages. The future opportunities of ultrahigh pressure extraction are also discussed.
Green bio-oil extraction for oil crops
NASA Astrophysics Data System (ADS)
Zainab, H.; Nurfatirah, N.; Norfaezah, A.; Othman, H.
2016-06-01
The move towards a green bio-oil extraction technique is highlighted in this paper. The commonly practised organic solvent oil extraction technique could be replaced with a modified microwave extraction. Jatropha seeds (Jatropha curcas) were used to extract bio-oil. Clean samples were heated in an oven at 110 ° C for 24 hours to remove moisture content and ground to obtain particle size smaller than 500μm. Extraction was carried out at different extraction times 15 min, 30 min, 45 min, 60 min and 120 min to determine oil yield. The biooil yield obtained from microwave assisted extraction system at 90 minutes was 36% while that from soxhlet extraction for 6 hours was 42%. Bio-oil extracted using the microwave assisted extraction (MAE) system could enhance yield of bio-oil compared to soxhlet extraction. The MAE extraction system is rapid using only water as solvent which is a nonhazardous, environment-friendly technique compared to soxhlet extraction (SE) method using hexane as solvent. Thus, this is a green technique of bio-oil extraction using only water as extractant. Bio-oil extraction from the pyrolysis of empty fruit bunch (EFB), a biomass waste from oil palm crop, was enhanced using a biocatalyst derived from seashell waste. Oil yield for non-catalytic extraction was 43.8% while addition of seashell based biocatalyst was 44.6%. Oil yield for non-catalytic extraction was 43.8% while with addition of seashell-based biocatalyst was 44.6%. The pH of bio-oil increased from 3.5 to 4.3. The viscosity of bio-oil obtained by catalytic means increased from 20.5 to 37.8 cP. A rapid and environment friendly extraction technique is preferable to enhance bio-oil yield. The microwave assisted approach is a green, rapid and environmental friendly extraction technique for the production of bio-oil bearing crops.
Naqvi, Atta Abbas; Zehra, Fatima; Ahmad, Rizwan; Ahmad, Niyaz
2016-12-09
There is a general hesitation in participation among Pakistani women when it comes to giving their responses in surveys related to breast cancer which may be due to the associated stigma and conservatism in society. We felt that no research instrument was able to extract information from the respondents to the extent it was needed for the successful execution of our study. The need to develop a research instrument tailored for Pakistani women was based upon the fact that most Pakistani women come from a conservative background and sometimes view this topic as provocative and believe discussing publicly about it as inappropriate. Existing research instruments exhibited a number of weaknesses during literature review. Therefore, using them may not be able to extract information concretely. A research instrument was, thus, developed exclusively. It was coined as, "breast cancer inventory (BCI)" by a panel of experts for executing a study aimed at documenting awareness, knowledge, and attitudes of Pakistani women regarding breast cancer and early detection techniques. The study is still in the data collection phase. The statistical analysis involved the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test for sampling adequacy. In addition, reliability analysis and exploratory factor analysis (EFA) were, also employed. This concept paper focuses on the development, piloting and validation of the BCI. It is the first research instrument which has high acceptability among Pakistani women and is able to extract adequate information from the respondents without causing embarrassment or unease.
Naqvi, Atta Abbas; Zehra, Fatima; Ahmad, Rizwan; Ahmad, Niyaz
2016-01-01
There is a general hesitation in participation among Pakistani women when it comes to giving their responses in surveys related to breast cancer which may be due to the associated stigma and conservatism in society. We felt that no research instrument was able to extract information from the respondents to the extent it was needed for the successful execution of our study. The need to develop a research instrument tailored for Pakistani women was based upon the fact that most Pakistani women come from a conservative background and sometimes view this topic as provocative and believe discussing publicly about it as inappropriate. Existing research instruments exhibited a number of weaknesses during literature review. Therefore, using them may not be able to extract information concretely. A research instrument was, thus, developed exclusively. It was coined as, “breast cancer inventory (BCI)” by a panel of experts for executing a study aimed at documenting awareness, knowledge, and attitudes of Pakistani women regarding breast cancer and early detection techniques. The study is still in the data collection phase. The statistical analysis involved the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test for sampling adequacy. In addition, reliability analysis and exploratory factor analysis (EFA) were, also employed. This concept paper focuses on the development, piloting and validation of the BCI. It is the first research instrument which has high acceptability among Pakistani women and is able to extract adequate information from the respondents without causing embarrassment or unease. PMID:28933416
Collecting and Analyzing Patient Experiences of Health Care From Social Media.
Rastegar-Mojarad, Majid; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon
2015-07-02
Social Media, such as Yelp, provides rich information of consumer experience. Previous studies suggest that Yelp can serve as a new source to study patient experience. However, the lack of a corpus of patient reviews causes a major bottleneck for applying computational techniques. The objective of this study is to create a corpus of patient experience (COPE) and report descriptive statistics to characterize COPE. Yelp reviews about health care-related businesses were extracted from the Yelp Academic Dataset. Natural language processing (NLP) tools were used to split reviews into sentences, extract noun phrases and adjectives from each sentence, and generate parse trees and dependency trees for each sentence. Sentiment analysis techniques and Hadoop were used to calculate a sentiment score of each sentence and for parallel processing, respectively. COPE contains 79,173 sentences from 6914 patient reviews of 985 health care facilities near 30 universities in the United States. We found that patients wrote longer reviews when they rated the facility poorly (1 or 2 stars). We demonstrated that the computed sentiment scores correlated well with consumer-generated ratings. A consumer vocabulary to describe their health care experience was constructed by a statistical analysis of word counts and co-occurrences in COPE. A corpus called COPE was built as an initial step to utilize social media to understand patient experiences at health care facilities. The corpus is available to download and COPE can be used in future studies to extract knowledge of patients' experiences from their perspectives. Such information can subsequently inform and provide opportunity to improve the quality of health care.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that the Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint, that uses higher-order statistics, instead of the classical decorrelation a weaker constraint, that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
Road extraction from aerial images using a region competition algorithm.
Amo, Miriam; Martínez, Fernando; Torre, Margarita
2006-05-01
In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and therefore we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more initial points, and this process will be based on image information. Not only is the algorithm able to obtain the road centerline, but it also recovers the road sides. An initial simple model is deformed by using region growing techniques to obtain a rough road approximation. This model will be refined by region competition. The result of this approach is that it delivers the simplest output vector information, fully recovering the road details as they are on the image, without performing any kind of symbolization. Therefore, we tried to refine a general road model by using a reliable method to detect transitions between regions. This method is proposed in order to obtain information for feeding large-scale Geographic Information System.
Unsupervised, Robust Estimation-based Clustering for Multispectral Images
NASA Technical Reports Server (NTRS)
Netanyahu, Nathan S.
1997-01-01
To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, plus extending the approach to incorporating unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/schedule module of the IlFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.
Infrared moving small target detection based on saliency extraction and image sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie
2016-10-01
Moving small target detection in infrared image is a crucial technique of infrared search and tracking system. This paper present a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit the features of Fourier spectrum image and magnitude spectrum of Fourier transform to make a rough extract of saliency regions and use a threshold segmentation system to classify the regions which look salient from the background, which gives us a binary image as result. Second, a new patch-image model and over-complete dictionary were introduced to the detection system, then the infrared small target detection was converted into a problem solving and optimization process of patch-image information reconstruction based on sparse representation. More specifically, the test image and binary image can be decomposed into some image patches follow certain rules. We select the target potential area according to the binary patch-image which contains salient region information, then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks which may contain targets. The coefficients of target image patch satisfy sparse features. Finally, for image sequence, Euclidean distance was used to reduce false alarm ratio and increase the detection accuracy of moving small targets in infrared images due to the target position correlation between frames.
Biomedical named entity extraction: some issues of corpus compatibilities.
Ekbal, Asif; Saha, Sriparna; Sikdar, Utpal Kumar
2013-01-01
Named Entity (NE) extraction is one of the most fundamental and important tasks in biomedical information extraction. It involves identification of certain entities from text and their classification into some predefined categories. In the biomedical community, there is yet no general consensus regarding named entity (NE) annotation; thus, it is very difficult to compare the existing systems due to corpus incompatibilities. Due to this problem we can not also exploit the advantages of using different corpora together. In our present work we address the issues of corpus compatibilities, and use a single objective optimization (SOO) based classifier ensemble technique that uses the search capability of genetic algorithm (GA) for NE extraction in biomedicine. We hypothesize that the reliability of predictions of each classifier differs among the various output classes. We use Conditional Random Field (CRF) and Support Vector Machine (SVM) frameworks to build a number of models depending upon the various representations of the set of features and/or feature templates. It is to be noted that we tried to extract the features without using any deep domain knowledge and/or resources. In order to assess the challenges of corpus compatibilities, we experiment with the different benchmark datasets and their various combinations. Comparison results with the existing approaches prove the efficacy of the used technique. GA based ensemble achieves around 2% performance improvements over the individual classifiers. Degradation in performance on the integrated corpus clearly shows the difficulties of the task. In summary, our used ensemble based approach attains the state-of-the-art performance levels for entity extraction in three different kinds of biomedical datasets. The possible reasons behind the better performance in our used approach are the (i). use of variety and rich features as described in Subsection "Features for named entity extraction"; (ii) use of GA based classifier ensemble technique to combine the outputs of multiple classifiers.
Text Mining in Biomedical Domain with Emphasis on Document Clustering.
Renganathan, Vinaitheerthan
2017-07-01
With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents. This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.
NASA Astrophysics Data System (ADS)
Darlow, Luke Nicholas; Connan, James
2015-11-01
Surface fingerprint scanners are limited to a two-dimensional representation of the fingerprint topography, and thus, are vulnerable to fingerprint damage, distortion, and counterfeiting. Optical coherence tomography (OCT) scanners are able to image (in three dimensions) the internal structure of the fingertip skin. Techniques for obtaining the internal fingerprint from OCT scans have since been developed. This research presents an internal fingerprint extraction algorithm designed to extract high-quality internal fingerprints from touchless OCT fingertip scans. Furthermore, it serves as a correlation study between surface and internal fingerprints. Provided the scanned region contains sufficient fingerprint information, correlation to the surface topography is shown to be good (74% have true matches). The cross-correlation of internal fingerprints (96% have true matches) is substantial that internal fingerprints can constitute a fingerprint database. The internal fingerprints' performance was also compared to the performance of cropped surface counterparts, to eliminate bias owing to information level present, showing that the internal fingerprints' performance is superior 63.6% of the time.
Aprea, Eugenio; Gika, Helen; Carlin, Silvia; Theodoridis, Georgios; Vrhovsek, Urska; Mattivi, Fulvio
2011-07-15
A headspace SPME GC-TOF-MS method was developed for the acquisition of metabolite profiles of apple volatiles. As a first step, an experimental design was applied to find out the most appropriate conditions for the extraction of apple volatile compounds by SPME. The selected SPME method was applied in profiling of four different apple varieties by GC-EI-TOF-MS. Full scan GC-MS data were processed by MarkerLynx software for peak picking, normalisation, alignment and feature extraction. Advanced chemometric/statistical techniques (PCA and PLS-DA) were used to explore data and extract useful information. Characteristic markers of each variety were successively identified using the NIST library thus providing useful information for variety classification. The developed HS-SPME sampling method is fully automated and proved useful in obtaining the fingerprint of the volatile content of the fruit. The described analytical protocol can aid in further studies of the apple metabolome. Copyright © 2011 Elsevier B.V. All rights reserved.
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing social, environmental and economic consequences raised from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically being inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. An approach for automated creation of revised maps was developed as a Bayesian Mapping model in this paper by integrating the knowledge extracted from sensors raw data and available statutory records. The combination of statutory records with the hypotheses from sensors was for initial estimation of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypotheses extraction and Bayesian classification techniques for segment-manhole connections. The model consisting of image segmentation algorithm and various Bayesian classification techniques (segment recognition and expectation maximization (EM) algorithm) provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
Framework for automatic information extraction from research papers on nanocrystal devices
Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C
2015-01-01
Summary To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “ NaDev” (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%); however, precision is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers. Based on these results, we discuss future research plans for improving the performance of the system. PMID:26665057
Framework for automatic information extraction from research papers on nanocrystal devices.
Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C
2015-01-01
To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called " NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers. Based on these results, we discuss future research plans for improving the performance of the system.
NASA Astrophysics Data System (ADS)
McClanahan, James Patrick
Eddy Current Testing (ECT) is a Non-Destructive Examination (NDE) technique that is widely used in power generating plants (both nuclear and fossil) to test the integrity of heat exchanger (HX) and steam generator (SG) tubing. Specifically for this research, laboratory-generated, flawed tubing data were examined. The purpose of this dissertation is to develop and implement an automated method for the classification and an advanced characterization of defects in HX and SG tubing. These two improvements enhanced the robustness of characterization as compared to traditional bobbin-coil ECT data analysis methods. A more robust classification and characterization of the tube flaw in-situ (while the SG is on-line but not when the plant is operating), should provide valuable information to the power industry. The following are the conclusions reached from this research. A feature extraction program acquiring relevant information from both the mixed, absolute and differential data was successfully implemented. The CWT was utilized to extract more information from the mixed, complex differential data. Image Processing techniques used to extract the information contained in the generated CWT, classified the data with a high success rate. The data were accurately classified, utilizing the compressed feature vector and using a Bayes classification system. An estimation of the upper bound for the probability of error, using the Bhattacharyya distance, was successfully applied to the Bayesian classification. The classified data were separated according to flaw-type (classification) to enhance characterization. The characterization routine used dedicated, flaw-type specific ANNs that made the characterization of the tube flaw more robust. The inclusion of outliers may help complete the feature space so that classification accuracy is increased. Given that the eddy current test signals appear very similar, there may not be sufficient information to make an extremely accurate (>95%) classification or an advanced characterization using this system. It is necessary to have a larger database fore more accurate system learning.
NASA Astrophysics Data System (ADS)
Cominola, A.; Spang, E. S.; Giuliani, M.; Castelletti, A.; Loge, F. J.; Lund, J. R.
2016-12-01
Demand side management strategies are key to meet future water and energy demands in urban contexts, promote water and energy efficiency in the residential sector, provide customized services and communications to consumers, and reduce utilities' costs. Smart metering technologies allow gathering high temporal and spatial resolution water and energy consumption data and support the development of data-driven models of consumers' behavior. Modelling and predicting resource consumption behavior is essential to inform demand management. Yet, analyzing big, smart metered, databases requires proper data mining and modelling techniques, in order to extract useful information supporting decision makers to spot end uses towards which water and energy efficiency or conservation efforts should be prioritized. In this study, we consider the following research questions: (i) how is it possible to extract representative consumers' personalities out of big smart metered water and energy data? (ii) are residential water and energy consumption profiles interconnected? (iii) Can we design customized water and energy demand management strategies based on the knowledge of water- energy demand profiles and other user-specific psychographic information? To address the above research questions, we contribute a data-driven approach to identify and model routines in water and energy consumers' behavior. We propose a novel customer segmentation procedure based on data-mining techniques. Our procedure consists of three steps: (i) extraction of typical water-energy consumption profiles for each household, (ii) profiles clustering based on their similarity, and (iii) evaluation of the influence of candidate explanatory variables on the identified clusters. The approach is tested onto a dataset of smart metered water and energy consumption data from over 1000 households in South California. Our methodology allows identifying heterogeneous groups of consumers from the studied sample, as well as characterizing them with respect to consumption profiles features and socio- demographic information. Results show how such better understanding of the considered users' community allows spotting potentially interesting areas for water and energy demand management interventions.
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem; ...
2017-12-07
Recent advances in scanning transmission electron and scanning probe microscopies have opened unprecedented opportunities in probing the materials structural parameters and various functional properties in real space with an angstrom-level precision. This progress has been accompanied by exponential increase in the size and quality of datasets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from the large datasets, for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extracting informationmore » from atomically resolved images including location of the atomic species and type of defects. We develop a “weakly-supervised” approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of an initial training set. We further apply our approach to interpret complex atomic and defect transformation, including switching between different coordination of silicon dopants in graphene as a function of time, formation of peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of molecular “rotor”. In conclusion, this deep learning based approach resembles logic of a human operator, but can be scaled leading to significant shift in the way of extracting and analyzing information from raw experimental data.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem
Recent advances in scanning transmission electron and scanning probe microscopies have opened unprecedented opportunities in probing the materials structural parameters and various functional properties in real space with an angstrom-level precision. This progress has been accompanied by exponential increase in the size and quality of datasets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from the large datasets, for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extracting informationmore » from atomically resolved images including location of the atomic species and type of defects. We develop a “weakly-supervised” approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of an initial training set. We further apply our approach to interpret complex atomic and defect transformation, including switching between different coordination of silicon dopants in graphene as a function of time, formation of peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of molecular “rotor”. In conclusion, this deep learning based approach resembles logic of a human operator, but can be scaled leading to significant shift in the way of extracting and analyzing information from raw experimental data.« less
Intercomparison of Lab-Based Soil Water Extraction Methods for Stable Water Isotope Analysis
NASA Astrophysics Data System (ADS)
Pratt, D.; Orlowski, N.; McDonnell, J.
2016-12-01
The effect of pore water extraction technique on resultant isotopic signature is poorly understood. Here we present results of an intercomparison of five common lab-based soil water extraction techniques: high pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and cryogenic extraction. We applied five extraction methods to two physicochemically different standard soil types (silty sand and clayey loam) that were oven-dried and rewetted with water of known isotopic composition at three different gravimetric water contents (8, 20, and 30%). We tested the null hypothisis that all extraction techniques would provide the same isotopic result independent from soil type and water content. Our results showed that the extraction technique had a significant effect on the soil water isotopic composition. Each method exhibited deviations from spiked reference water, with soil type and water content showing a secondary effect. Cryogenic extraction showed the largest deviations from the reference water, whereas mechanical squeezing and centrifugation provided the closest match to the reference water for both soil types. We also compared results for each extraction technique that produced liquid water on both an OA-ICOS and IRMS; differences between them were negligible.
Alonso, J F; Mañanas, M A; Hoyer, D; Topor, Z L; Bruce, E N
2004-01-01
Analysis of respiratory muscle activity is a promising technique for the study of pulmonary diseases such as obstructive sleep apnea syndrome (OSAS). Evaluation of interactions between muscles is very useful in order to determine the muscular pattern during an exercise. These interactions have already been assessed by means of different linear techniques like cross-spectrum, magnitude squared coherence or cross-correlation. The aim of this work is to evaluate interactions between respiratory and myographic signals through nonlinear analysis by means of cross mutual information function (CMIF), and finding out what information can be extracted from it. Some parameters are defined and calculated from CMIF between ventilatory and myographic signals of three respiratory muscles. Finally, differences in certain parameters were obtained between OSAS patients and healthy subjects indicating different respiratory muscle couplings.
Automating the generation of lexical patterns for processing free text in clinical documents.
Meng, Frank; Morioka, Craig
2015-09-01
Many tasks in natural language processing utilize lexical pattern-matching techniques, including information extraction (IE), negation identification, and syntactic parsing. However, it is generally difficult to derive patterns that achieve acceptable levels of recall while also remaining highly precise. We present a multiple sequence alignment (MSA)-based technique that automatically generates patterns, thereby leveraging language usage to determine the context of words that influence a given target. MSAs capture the commonalities among word sequences and are able to reveal areas of linguistic stability and variation. In this way, MSAs provide a systemic approach to generating lexical patterns that are generalizable, which will both increase recall levels and maintain high levels of precision. The MSA-generated patterns exhibited consistent F1-, F.5-, and F2- scores compared to two baseline techniques for IE across four different tasks. Both baseline techniques performed well for some tasks and less well for others, but MSA was found to consistently perform at a high level for all four tasks. The performance of MSA on the four extraction tasks indicates the method's versatility. The results show that the MSA-based patterns are able to handle the extraction of individual data elements as well as relations between two concepts without the need for large amounts of manual intervention. We presented an MSA-based framework for generating lexical patterns that showed consistently high levels of both performance and recall over four different extraction tasks when compared to baseline methods. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry
NASA Astrophysics Data System (ADS)
Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.
2015-09-01
Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires the knowledge of scanning parameters and patients’ information included in a DICOM file as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROI) without explicit access to the radiology information system. The user-friendly application developed in Matlab environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and by assigning a specific value in every ROI. The result is stored in DICOM format, for data and trend analysis. The developed GUI is easy, fast and and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate a remote access to a PACS server functionality.
The ISES: A non-intrusive medium for in-space experiments in on-board information extraction
NASA Technical Reports Server (NTRS)
Murray, Nicholas D.; Katzberg, Stephen J.; Nealy, Mike
1990-01-01
The Information Science Experiment System (ISES) represents a new approach in applying advanced systems technology and techniques to on-board information extraction in the space environment. Basically, what is proposed is a 'black box' attached to the spacecraft data bus or local area network. To the spacecraft the 'black box' appears to be just another payload requiring power, heat rejection, interfaces, adding weight, and requiring time on the data management and communication system. In reality, the 'black box' is a programmable computational resource which eavesdrops on the data network, taking and producing selectable, real-time science data back on the network. This paper will present a brief overview of the ISES Concept and will discuss issues related to applying the ISES to the polar platform and Space Station Freedom. Critical to the operation of ISES is the viability of a payload-like interface to the spacecraft data bus or local area network. Study results that address this question will be reviewed vis-a-vis the solar platform and the core space station. Also, initial results of processing science and other requirements for onboard, real-time information extraction will be presented with particular emphasis on the polar platform. Opportunities for a broader range of applications on the core space station will also be discussed.
ERIC Educational Resources Information Center
Freeman, Ramona
2011-01-01
This case study considers pedagogical techniques used in family childcare to promote children's learning experiences. Data extracted from an earlier study were used to inform this examination of four family childcare providers' pedagogy. In the current study, I use socio-cultural theory and the Reggio Emilia approach to address the following…
Identifying Key Hospital Service Quality Factors in Online Health Communities
Jung, Yuchul; Hur, Cinyoung; Jung, Dain
2015-01-01
Background The volume of health-related user-created content, especially hospital-related questions and answers in online health communities, has rapidly increased. Patients and caregivers participate in online community activities to share their experiences, exchange information, and ask about recommended or discredited hospitals. However, there is little research on how to identify hospital service quality automatically from the online communities. In the past, in-depth analysis of hospitals has used random sampling surveys. However, such surveys are becoming impractical owing to the rapidly increasing volume of online data and the diverse analysis requirements of related stakeholders. Objective As a solution for utilizing large-scale health-related information, we propose a novel approach to identify hospital service quality factors and overtime trends automatically from online health communities, especially hospital-related questions and answers. Methods We defined social media–based key quality factors for hospitals. In addition, we developed text mining techniques to detect such factors that frequently occur in online health communities. After detecting these factors that represent qualitative aspects of hospitals, we applied a sentiment analysis to recognize the types of recommendations in messages posted within online health communities. Korea’s two biggest online portals were used to test the effectiveness of detection of social media–based key quality factors for hospitals. Results To evaluate the proposed text mining techniques, we performed manual evaluations on the extraction and classification results, such as hospital name, service quality factors, and recommendation types using a random sample of messages (ie, 5.44% (9450/173,748) of the total messages). Service quality factor detection and hospital name extraction achieved average F1 scores of 91% and 78%, respectively. In terms of recommendation classification, performance (ie, precision) is 78% on average. Extraction and classification performance still has room for improvement, but the extraction results are applicable to more detailed analysis. Further analysis of the extracted information reveals that there are differences in the details of social media–based key quality factors for hospitals according to the regions in Korea, and the patterns of change seem to accurately reflect social events (eg, influenza epidemics). Conclusions These findings could be used to provide timely information to caregivers, hospital officials, and medical officials for health care policies. PMID:25855612
Image enhancement and advanced information extraction techniques for ERTS-1 data
NASA Technical Reports Server (NTRS)
Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.
1975-01-01
The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serve to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm for overcoming problems in acreage estimation accuracy resulting from the coarse spatial resolution of the ERTS MSS, was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high contrast targets such as lakes and ponds.
NASA Astrophysics Data System (ADS)
Jun, An Won
2006-01-01
We implement a first practical holographic security system using electrical biometrics that combines optical encryption and digital holographic memory technologies. Optical information for identification includes a picture of face, a name, and a fingerprint, which has been spatially multiplexed by random phase mask used for a decryption key. For decryption in our biometric security system, a bit-error-detection method that compares the digital bit of live fingerprint with of fingerprint information extracted from hologram is used.
Principal component greenness transformation in multitemporal agricultural Landsat data
NASA Technical Reports Server (NTRS)
Abotteen, R. A.
1978-01-01
A data compression technique for multitemporal Landsat imagery which extracts phenological growth pattern information for agricultural crops is described. The principal component greenness transformation was applied to multitemporal agricultural Landsat data for information retrieval. The transformation was favorable for applications in agricultural Landsat data analysis because of its physical interpretability and its relation to the phenological growth of crops. It was also found that the first and second greenness eigenvector components define a temporal small-grain trajectory and nonsmall-grain trajectory, respectively.
Beyond Information Retrieval—Medical Question Answering
Lee, Minsuk; Cimino, James; Zhu, Hai Ran; Sable, Carl; Shanker, Vijay; Ely, John; Yu, Hong
2006-01-01
Physicians have many questions when caring for patients, and frequently need to seek answers for their questions. Information retrieval systems (e.g., PubMed) typically return a list of documents in response to a user’s query. Frequently the number of returned documents is large and makes physicians’ information seeking “practical only ‘after hours’ and not in the clinical settings”. Question answering techniques are based on automatically analyzing thousands of electronic documents to generate short-text answers in response to clinical questions that are posed by physicians. The authors address physicians’ information needs and described the design, implementation, and evaluation of the medical question answering system (MedQA). Although our long term goal is to enable MedQA to answer all types of medical questions, currently, we currently implement MedQA to integrate information retrieval, extraction, and summarization techniques to automatically generate paragraph-level text for definitional questions (i.e., “What is X?”). MedQA can be accessed at http://www.dbmi.columbia.edu/~yuh9001/research/MedQA.html. PMID:17238385
Mohseni Salehi, Seyed Sadegh; Erdogmus, Deniz; Gholipour, Ali
2017-11-01
Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
Data of furfural adsorption on nano zero valent iron (NZVI) synthesized from Nettle extract.
Fazlzadeh, Mehdi; Ansarizadeh, Mohammad; Leili, Mostafa
2018-02-01
Among various water and wastewater treatment methods, adsorption techniques are widely used to remove certain classes of pollutants due to its unique features. Thus, the aim of this data article is to synthesize zero valent iron nanoparticles (NZVI) from Nettle leaf extract by green synthesis method as an environmentally friendly technique, and to evaluate it's efficiency in the removal of furfural from aqueous solutions. The data of possible adsorption mechanism and isotherm of furfural on the synthesized adsorbent are depicted in this data article. The data acquired showed that the adsorption trend follows the pseudo-second order kinetic model and that the Langmuir isotherm was suitable for correlation of equilibrium data with the maximum adsorption capacity of 454.4 mg/g. The information of initial furfural concentration, pH, adsorbent dosage and contact time effects on the removal efficiency are presented. Considering the findings data, the developed nanoparticle from Nettle leaf extract, as a low cost adsorbent, could be considered as promising adsorbent for furfural and probably similar organic pollutants removal from aqueous solutions.
Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates
NASA Astrophysics Data System (ADS)
Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.
Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution through building a wireless SHM technique in conjunction with acoustic emission (AE) with field deployment on the structure of a wind turbine. This solution requires a low sampling rate which is lower than the Nyquist rate. In addition, features extracted from aliased AE signals instead of reconstructing the original signals on-board the wireless nodes are exploited to monitor AE events, such as wind, rain, strong hail, and bird strike in different environmental conditions in conjunction with artificial AE sources. Time feature extraction algorithm, in addition to the principal component analysis (PCA) method, is used to extract and classify the relevant information, which in turn is used to classify or recognise a testing condition that is represented by the response signals. This proposed novel technique yields a significant data reduction during the monitoring process of wind turbine blades.
Sample preparation for the analysis of isoflavones from soybeans and soy foods.
Rostagno, M A; Villares, A; Guillamón, E; García-Lafuente, A; Martínez, J A
2009-01-02
This manuscript provides a review of the actual state and the most recent advances as well as current trends and future prospects in sample preparation and analysis for the quantification of isoflavones from soybeans and soy foods. Individual steps of the procedures used in sample preparation, including sample conservation, extraction techniques and methods, and post-extraction treatment procedures are discussed. The most commonly used methods for extraction of isoflavones with both conventional and "modern" techniques are examined in detail. These modern techniques include ultrasound-assisted extraction, pressurized liquid extraction, supercritical fluid extraction and microwave-assisted extraction. Other aspects such as stability during extraction and analysis by high performance liquid chromatography are also covered.
Extraction of quantitative surface characteristics from AIRSAR data for Death Valley, California
NASA Technical Reports Server (NTRS)
Kierein-Young, K. S.; Kruse, F. A.
1992-01-01
Polarimetric Airborne Synthetic Aperture Radar (AIRSAR) data were collected for the Geologic Remote Sensing Field Experiment (GRSFE) over Death Valley, California, USA, in Sep. 1989. AIRSAR is a four-look, quad-polarization, three frequency instrument. It collects measurements at C-band (5.66 cm), L-band (23.98 cm), and P-band (68.13 cm), and has a GIFOV of 10 meters and a swath width of 12 kilometers. Because the radar measures at three wavelengths, different scales of surface roughness are measured. Also, dielectric constants can be calculated from the data. The AIRSAR data were calibrated using in-scene trihedral corner reflectors to remove cross-talk; and to calibrate the phase, amplitude, and co-channel gain imbalance. The calibration allows for the extraction of accurate values of rms surface roughness, dielectric constants, sigma(sub 0) backscatter, and polarization information. The radar data sets allow quantitative characterization of small scale surface structure of geologic units, providing information about the physical and chemical processes that control the surface morphology. Combining the quantitative information extracted from the radar data with other remotely sensed data sets allows discrimination, identification and mapping of geologic units that may be difficult to discern using conventional techniques.
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo
2016-01-01
Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source imaging algorithms to both find the network nodes (regions of interest) and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ~20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin
2016-12-01
Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source-imaging algorithms to both find the network nodes [regions of interest (ROI)] and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ∼20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
NASA Astrophysics Data System (ADS)
Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin
2015-03-01
In order to ensure safety, long term stability and quality control in modern tunneling operations, the acquisition of geotechnical information about encountered rock conditions and detailed installed support information is required. The limited space and time in an operational tunnel environment make the acquiring data challenging. The laser scanning in a tunneling environment, however, shows a great potential. The surveying and mapping of tunnels are crucial for the optimal use after construction and in routine inspections. Most of these applications focus on the geometric information of the tunnels extracted from the laser scanning data. There are two kinds of applications widely discussed: deformation measurement and feature extraction. The traditional deformation measurement in an underground environment is performed with a series of permanent control points installed around the profile of an excavation, which is unsuitable for a global consideration of the investigated area. Using laser scanning for deformation analysis provides many benefits as compared to traditional monitoring techniques. The change in profile is able to be fully characterized and the areas of the anomalous movement can easily be separated from overall trends due to the high density of the point cloud data. Furthermore, monitoring with a laser scanner does not require the permanent installation of control points, therefore the monitoring can be completed more quickly after excavation, and the scanning is non-contact, hence, no damage is done during the installation of temporary control points. The main drawback of using the laser scanning for deformation monitoring is that the point accuracy of the original data is generally the same magnitude as the smallest level of deformations that are to be measured. To overcome this, statistical techniques and three dimensional image processing techniques for the point clouds must be developed. For safely, effectively and easily control the problem of Over Underbreak detection of road and solve the problemof the roadway data collection difficulties, this paper presents a new method of continuous section extraction and Over Underbreak detection of road based on 3D laser scanning technology and image processing, the method is divided into the following three steps: based on Canny edge detection, local axis fitting, continuous extraction section and Over Underbreak detection of section. First, after Canny edge detection, take the least-squares curve fitting method to achieve partial fitting in axis. Then adjust the attitude of local roadway that makes the axis of the roadway be consistent with the direction of the extraction reference, and extract section along the reference direction. Finally, we compare the actual cross-sectional view and the cross-sectional design to complete Overbreak detected. Experimental results show that the proposed method have a great advantage in computing costs and ensure cross-section orthogonal intercept terms compared with traditional detection methods.
A tool for filtering information in complex systems
Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.
2005-01-01
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. PMID:16027373
A tool for filtering information in complex systems.
Tumminello, M; Aste, T; Di Matteo, T; Mantegna, R N
2005-07-26
We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.
Towards NIRS-based hand movement recognition.
Paleari, Marco; Luciani, Riccardo; Ariano, Paolo
2017-07-01
This work reports on preliminary results about on hand movement recognition with Near InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Either basing on physical contact (touchscreens, data-gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today environment and it is at the base of many gaming, social, and medical applications. Albeit, in recent years, the use of muscle information extracted by sEMG has spread out from the medical applications to contaminate the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique to get another point of view on the muscle phenomena and proved that, within a specific movements selection, NIRS can be used to recognize movements and return information regarding muscles at different depths. Furthermore, we propose here three different multimodal movement recognition approaches and compare their performances.
Djouahri, Abderrahmane; Saka, Boualem; Boudarene, Lynda; Baaliouamer, Aoumeur
2016-12-01
In the present work, the hydrodistillation (HD) and microwave-assisted hydrodistillation (MAHD) kinetics of essential oil (EO) extracted from Tetraclinis articulata (Vahl) Mast. wood was conducted, in order to assess the impact of extraction time and technique on chemical composition and biological activities. Gas chromatography (GC) and GC/mass spectrometry analyses showed significant differences between the extracted EOs, where each family class or component presents a specific kinetic according to extraction time, technique and especially for the major components: camphene, linalool, cedrol, carvacrol and α-acorenol. Furthermore, our findings showed a high variability for both antioxidant and anti-inflammatory activities, where each activity has a specific effect according to extraction time and technique. The highlighted variability reflects the high impact of extraction time and technique on chemical composition and biological activities, which led to conclude that we should select EOs to be investigated carefully depending on extraction time and technique, in order to isolate the bioactive components or to have the best quality of EO in terms of biological activities and preventive effects in food. © 2016 Wiley-VHCA AG, Zurich, Switzerland.
Transient plasma estimation: a noise cancelling/identification approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Casper, T.; Kane, R.
1985-03-01
The application of a noise cancelling technique to extract energy storage information from sensors occurring during fusion reactor experiments on the Tandem Mirror Experiment-Upgrade (TMX-U) at the Lawrence Livermore National Laboratory (LLNL) is examined. We show how this technique can be used to decrease the uncertainty in the corresponding sensor measurements used for diagnostics in both real-time and post-experimental environments. We analyze the performance of algorithm on the sensor data and discuss the various tradeoffs. The algorithm suggested is designed using SIG, an interactive signal processing package developed at LLNL.
NASA Astrophysics Data System (ADS)
Tian, J.; Krauß, T.; d'Angelo, P.
2017-05-01
Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the high amount of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve an efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM) and in turn the normalized digital surface model (nDSM) is generated by using a newly step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing the low-level pixels and high-level pixels with higher probability to be trees and shadows. This boundary is then served as the initial level set function, which is further refined to fit the best possible boundaries through distance regularized level-set curve evolution. During the fitting procedure, the edge-based active contour model is adopted and implemented by using the edges indicators extracted from panchromatic image. The performance of the proposed approach is tested by using the WorldView-2 satellite data captured over Munich.
NASA Astrophysics Data System (ADS)
Jang, Yujin; Hong, Helen; Chung, Jin Wook; Yoon, Young Ho
2012-02-01
We propose an effective technique for the extraction of liver boundary based on multi-planar anatomy and deformable surface model in abdominal contrast-enhanced CT images. Our method is composed of four main steps. First, for extracting an optimal volume circumscribing a liver, lower and side boundaries are defined by positional information of pelvis and rib. An upper boundary is defined by separating the lungs and heart from CT images. Second, for extracting an initial liver volume, optimal liver volume is smoothed by anisotropic diffusion filtering and is segmented using adaptively selected threshold value. Third, for removing neighbor organs from initial liver volume, morphological opening and connected component labeling are applied to multiple planes. Finally, for refining the liver boundaries, deformable surface model is applied to a posterior liver surface and missing left robe in previous step. Then, probability summation map is generated by calculating regional information of the segmented liver in coronal plane, which is used for restoring the inaccurate liver boundaries. Experimental results show that our segmentation method can accurately extract liver boundaries without leakage to neighbor organs in spite of various liver shape and ambiguous boundary.
NASA Astrophysics Data System (ADS)
Ojima, Nobutoshi; Okiyama, Natsuko; Okaguchi, Saya; Tsumura, Norimichi; Nakaguchi, Toshiya; Hori, Kimihiko; Miyake, Yoichi
2005-04-01
In the cosmetics industry, skin color is very important because skin color gives a direct impression of the face. In particular, many people suffer from melanin pigmentation such as liver spots and freckles. However, it is very difficult to evaluate melanin pigmentation using conventional colorimetric values because these values contain information on various skin chromophores simultaneously. Therefore, it is necessary to extract information of the chromophore of individual skins independently as density information. The isolation of the melanin component image based on independent component analysis (ICA) from a single skin image was reported in 2003. However, this technique has not developed a quantification method for melanin pigmentation. This paper introduces a quantification method based on the ICA of a skin color image to isolate melanin pigmentation. The image acquisition system we used consists of commercially available equipment such as digital cameras and lighting sources with polarized light. The images taken were analyzed using ICA to extract the melanin component images, and Laplacian of Gaussian (LOG) filter was applied to extract the pigmented area. As a result, for skin images including those showing melanin pigmentation and acne, the method worked well. Finally, the total amount of extracted area had a strong correspondence to the subjective rating values for the appearance of pigmentation. Further analysis is needed to recognize the appearance of pigmentation concerning the size of the pigmented area and its spatial gradation.
On-line coupling of supercritical fluid extraction and chromatographic techniques.
Sánchez-Camargo, Andrea Del Pilar; Parada-Alfonso, Fabián; Ibáñez, Elena; Cifuentes, Alejandro
2017-01-01
This review summarizes and discusses recent advances and applications of on-line supercritical fluid extraction coupled to liquid chromatography, gas chromatography, and supercritical fluid chromatographic techniques. Supercritical fluids, due to their exceptional physical properties, provide unique opportunities not only during the extraction step but also in the separation process. Although supercritical fluid extraction is especially suitable for recovery of non-polar organic compounds, this technique can also be successfully applied to the extraction of polar analytes by the aid of modifiers. Supercritical fluid extraction process can be performed following "off-line" or "on-line" approaches and their main features are contrasted herein. Besides, the parameters affecting the supercritical fluid extraction process are explained and a "decision tree" is for the first time presented in this review work as a guide tool for method development. The general principles (instrumental and methodological) of the different on-line couplings of supercritical fluid extraction with chromatographic techniques are described. Advantages and shortcomings of supercritical fluid extraction as hyphenated technique are discussed. Besides, an update of the most recent applications (from 2005 up to now) of the mentioned couplings is also presented in this review. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic indexing of compound words based on mutual information for Korean text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan Koo Kim; Yoo Kun Cho
In this paper, we present an automatic indexing technique for compound words suitable to an aggulutinative language, specifically Korean. Firstly, we present the construction conditions to compose compound words as indexing terms. Also we present the decomposition rules applicable to consecutive nouns to extract all contents of text. Finally we propose a measure to estimate the usefulness of a term, mutual information, to calculate the degree of word association of compound words, based on the information theoretic notion. By applying this method, our system has raised the precision rate of compound words from 72% to 87%.
Fitting Flux Ropes to a Global MHD Solution: A Comparison of Techniques. Appendix 1
NASA Technical Reports Server (NTRS)
Riley, Pete; Linker, J. A.; Lionello, R.; Mikic, Z.; Odstrcil, D.; Hidalgo, M. A.; Cid, C.; Hu, Q.; Lepping, R. P.; Lynch, B. J.
2004-01-01
Flux rope fitting (FRF) techniques are an invaluable tool for extracting information about the properties of a subclass of CMEs in the solar wind. However, it has proven difficult to assess their accuracy since the underlying global structure of the CME cannot be independently determined from the data. In contrast, large-scale MHD simulations of CME evolution can provide both a global view as well as localized time series at specific points in space. In this study we apply 5 different fitting techniques to 2 hypothetical time series derived from MHD simulation results. Independent teams performed the analysis of the events in "blind tests", for which no information, other than the time series, was provided. F rom the results, we infer the following: (1) Accuracy decreases markedly with increasingly glancing encounters; (2) Correct identification of the boundaries of the flux rope can be a significant limiter; and (3) Results from techniques that infer global morphology must be viewed with caution. In spite of these limitations, FRF techniques remain a useful tool for describing in situ observations of flux rope CMEs.
Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis
Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.
2003-01-01
This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms, the highly time-varying, and multivariable aspects of the real WTP, the diagnosis of the WTP are still difficult in practice. The application of intelligent techniques, which can analyse the multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Maps (KSOFM) neural network is applied to analyse the multi-dimensional process data, and to diagnose the inter-relationship of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information is discovered. The operating condition and the inter-relationship among the process variables in the WTP have been diagnosed and extracted by the information obtained from the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. ?? 2003 Elsevier Science Ltd. All rights reserved.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization process is an important aspect in the natural product extractions. Herein, an alternative approach is proposed for the optimization in extraction, namely, the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines the Latin hypercube sampling, the feasible range of independent variables, the Monte Carlo simulation, and the threshold criteria of response variables. The GLUE method is tested in three different techniques including the ultrasound, the microwave, and the supercritical CO 2 assisted extractions utilizing the data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with unlimited number of independent and response variables; consider combined multiple threshold criteria, which is subjective depending on the target of the investigation for response variables; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
NASA Astrophysics Data System (ADS)
Chen, H.-Y.; Huang, Y.-R.; Shih, H.-Y.; Chen, M.-J.; Sheu, J.-K.; Sun, C.-K.
2017-11-01
Modern devices adopting denser designs and complex 3D structures have created much more interfaces than before, where atomically thin interfacial layers could form. However, fundamental information such as the elastic property of the interfacial layers is hard to measure. The elastic property of the interfacial layer is of great importance in both thermal management and nano-engineering of modern devices. Appropriate techniques to probe the elastic properties of interfacial layers as thin as only several atoms are thus critically needed. In this work, we demonstrated the feasibility of utilizing the time-resolved femtosecond acoustics technique to extract the elastic properties and mass density of a 1.85-nm-thick interfacial layer, with the aid of transmission electron microscopy. We believe that this femtosecond acoustics approach will provide a strategy to measure the absolute elastic properties of atomically thin interfacial layers.
Text Mining in Biomedical Domain with Emphasis on Document Clustering
2017-01-01
Objectives With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents. Methods This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Results Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Conclusions Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise. PMID:28875048
In-line phase contrast micro-CT reconstruction for biomedical specimens.
Fu, Jian; Tan, Renbo
2014-01-01
X-ray phase contrast micro computed tomography (micro-CT) can non-destructively provide the internal structure information of soft tissues and low atomic number materials. It has become an invaluable analysis tool for biomedical specimens. Here an in-line phase contrast micro-CT reconstruction technique is reported, which consists of a projection extraction method and the conventional filter back-projection (FBP) reconstruction algorithm. The projection extraction is implemented by applying the Fourier transform to the forward projections of in-line phase contrast micro-CT. This work comprises a numerical study of the method and its experimental verification using a biomedical specimen dataset measured at an X-ray tube source micro-CT setup. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of biomedical specimens. It will be of interest for a wide range of in-line phase contrast micro-CT applications in medicine and biology.
Pose estimation of teeth through crown-shape matching
NASA Astrophysics Data System (ADS)
Mok, Vevin; Ong, Sim Heng; Foong, Kelvin W. C.; Kondo, Toshiaki
2002-05-01
This paper presents a technique for determining a tooth's pose given a dental plaster cast and a set of generic tooth models. The ultimate goal of pose estimation is to obtain information about the sizes and positions of the roots, which lie hidden within the gums, without the use of X-rays, CT or MRI. In our approach, the tooth of interest is first extracted from the 3D dental cast image through segmentation. 2D views are then generated from the extracted tooth and are matched against a target view generated from the generic model with known pose. Additional views are generated in the vicinity of the best view and the entire process is repeated until convergence. Upon convergence, the generic tooth is superimposed onto the dental cast to show the position of the root. The results of applying the technique to canines demonstrate the excellent potential of the algorithm for generic tooth fitting.
A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"
NASA Astrophysics Data System (ADS)
Das, Rig; Tuithung, Themrichon
2013-03-01
This paper reviews the embedding and extraction algorithm proposed by "A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar" on "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding" in "International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010" [3] and shows that the Extraction of Secret Image is Not Possible for the algorithm proposed in [3]. 8 bit Cover Image of size is divided into non joint blocks and a two dimensional Discrete Cosine Transformation (2-D DCT) is performed on each of the blocks. Huffman Encoding is performed on an 8 bit Secret Image of size and each bit of the Huffman Encoded Bit Stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of Cover Image blocks. The Huffman Encoded Bit Stream and Huffman Table
An inexpensive technique for the time resolved laser induced plasma spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Rizwan, E-mail: rizwan.ahmed@ncp.edu.pk; Ahmed, Nasar; Iqbal, J.
We present an efficient and inexpensive method for calculating the time resolved emission spectrum from the time integrated spectrum by monitoring the time evolution of neutral and singly ionized species in the laser produced plasma. To validate our assertion of extracting time resolved information from the time integrated spectrum, the time evolution data of the Cu II line at 481.29 nm and the molecular bands of AlO in the wavelength region (450–550 nm) have been studied. The plasma parameters were also estimated from the time resolved and time integrated spectra. A comparison of the results clearly reveals that the time resolved informationmore » about the plasma parameters can be extracted from the spectra registered with a time integrated spectrograph. Our proposed method will make the laser induced plasma spectroscopy robust and a low cost technique which is attractive for industry and environmental monitoring.« less
Learning temporal rules to forecast instability in continuously monitored patients
Dubrawski, Artur; Wang, Donghan; Hravnak, Marilyn; Clermont, Gilles; Pinsky, Michael R
2017-01-01
Inductive machine learning, and in particular extraction of association rules from data, has been successfully used in multiple application domains, such as market basket analysis, disease prognosis, fraud detection, and protein sequencing. The appeal of rule extraction techniques stems from their ability to handle intricate problems yet produce models based on rules that can be comprehended by humans, and are therefore more transparent. Human comprehension is a factor that may improve adoption and use of data-driven decision support systems clinically via face validity. In this work, we explore whether we can reliably and informatively forecast cardiorespiratory instability (CRI) in step-down unit (SDU) patients utilizing data from continuous monitoring of physiologic vital sign (VS) measurements. We use a temporal association rule extraction technique in conjunction with a rule fusion protocol to learn how to forecast CRI in continuously monitored patients. We detail our approach and present and discuss encouraging empirical results obtained using continuous multivariate VS data from the bedside monitors of 297 SDU patients spanning 29 346 hours (3.35 patient-years) of observation. We present example rules that have been learned from data to illustrate potential benefits of comprehensibility of the extracted models, and we analyze the empirical utility of each VS as a potential leading indicator of an impending CRI event. PMID:27274020
Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina
2006-01-01
Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to evaluate the quality and performance on different matrixes and extraction techniques. The effect of PCR efficiency on the resulting GMO content is demonstrated. Conclusion The crucial influence of extraction technique and sample matrix properties on the results of GMO quantification is demonstrated. Appropriate extraction techniques for each matrix need to be determined to achieve accurate DNA quantification. Nevertheless, as it is shown that in the area of food and feed testing matrix with certain specificities is impossible to define strict quality controls need to be introduced to monitor PCR. The results of our study are also applicable to other fields of quantitative testing by real-time PCR. PMID:16907967
Visibility enhancement of color images using Type-II fuzzy membership function
NASA Astrophysics Data System (ADS)
Singh, Harmandeep; Khehra, Baljit Singh
2018-04-01
Images taken in poor environmental conditions decrease the visibility and hidden information of digital images. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over/under enhancement issues. Fuzzy-based enhancement techniques suffer from over/under saturated pixels problems. In this paper, a novel Type-II fuzzy-based image enhancement technique has been proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms others regarding visible edge ratio, color gradients and number of saturated pixels.
Kollia, Eleni; Markaki, Panagiota; Zoumpoulakis, Panagiotis; Proestos, Charalampos
2017-05-01
Extracts of different parts (heads, bracts and stems) of Cynara cardunculus L. (cardoon) and Cynara scolymus L. (globe artichoke), obtained by two different extraction techniques (Ultrasound-Assisted Extraction (UAE) and classical extraction (CE)) were examined and compared for their total phenolic content (TPC) and their antioxidant activity. Moreover, infusions of the plant's parts were also analysed and compared to aforementioned samples. Results showed that cardoon's heads extract (obtained by Ultrasound-Assisted Extraction) displayed the highest TPC values (1.57 mg Gallic Acid Equivalents (GAE) g -1 fresh weight (fw)), the highest DPPH • scavenging activity (IC50; 0.91 mg ml -1 ) and the highest ABTS •+ radical scavenging capacity (2.08 mg Trolox Equivalents (TE) g -1 fw) compared to infusions and other extracts studied. Moreover, Ultrasound-Assisted Extraction technique proved to be more appropriate and effective for the extraction of antiradical and phenolic compounds.
Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang
2011-01-01
This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
2014-01-01
Background Determination of fetal aneuploidy is central to evaluation of recurrent pregnancy loss (RPL). However, obtaining this information at the time of a miscarriage is not always possible or may not have been ordered. Here we report on “rescue karyotyping”, wherein DNA extracted from archived paraffin-embedded pregnancy loss tissue from a prior dilation and curettage (D&C) is evaluated by array-based comparative genomic hybridization (aCGH). Methods A retrospective case series was conducted at an academic medical center. Patients included had unexplained RPL and a prior pregnancy loss for which karyotype information would be clinically informative but was unavailable. After extracting DNA from slides of archived tissue, aCGH with a reduced stringency approach was performed, allowing for analysis of partially degraded DNA. Statistics were computed using STATA v12.1 (College Station, TX). Results Rescue karyotyping was attempted on 20 specimens from 17 women. DNA was successfully extracted in 16 samples (80.0%), enabling analysis at either high or low resolution. The longest interval from tissue collection to DNA extraction was 4.2 years. There was no significant difference in specimen sufficiency for analysis in the collection-to-extraction interval (p = 0.14) or gestational age at pregnancy loss (p = 0.32). Eight specimens showed copy number variants: 3 trisomies, 2 partial chromosomal deletions, 1 mosaic abnormality and 2 unclassified variants. Conclusions Rescue karyotyping using aCGH on DNA extracted from paraffin-embedded tissue provides the opportunity to obtain critical fetal cytogenetic information from a prior loss, even if it occurred years earlier. Given the ubiquitous archiving of paraffin embedded tissue obtained during a D&C and the ease of obtaining results despite long loss-to-testing intervals or early gestational age at time of fetal demise, this may provide a useful technique in the evaluation of couples with recurrent pregnancy loss. PMID:24589081
High quality topic extraction from business news explains abnormal financial market volatility.
Hisano, Ryohei; Sornette, Didier; Mizuno, Takayuki; Ohnishi, Takaaki; Watanabe, Tsutomu
2013-01-01
Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affects trading and the pricing of firms in organized stock markets. In this article, we seek to address this issue by performing an analysis of more than 24 million news records provided by Thompson Reuters and of their relationship with trading activity for 206 major stocks in the S&P US stock index. We show that the whole landscape of news that affects stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their "thematic" features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized facts in financial economies, namely that at certain times trading volumes appear to be "abnormally large," can be partially explained by the flow of news. In this sense, our results prove that there is no "excess trading," when restricting to times when news is genuinely novel and provides relevant financial information.
Signal processing methods for MFE plasma diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Casper, T.; Kane, R.
1985-02-01
The application of various signal processing methods to extract energy storage information from plasma diamagnetism sensors occurring during physics experiments on the Tandom Mirror Experiment-Upgrade (TMX-U) is discussed. We show how these processing techniques can be used to decrease the uncertainty in the corresponding sensor measurements. The algorithms suggested are implemented using SIG, an interactive signal processing package developed at LLNL.
Variable-rate optical communication through the turbulent atmosphere. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Levitt, B. K.
1971-01-01
It was demonstrated that the data transmitter can extract real time, channel state information by processing the field received when a pilot tone is sent from the data receiver to the data transmitter. Based on these channel measurements, optimal variable rate techniques were derived and significant improvements in system perforamnce were obtained, particularly at low bit error rates.
Forest Fire History... A Computer Method of Data Analysis
Romain M. Meese
1973-01-01
A series of computer programs is available to extract information from the individual Fire Reports (U.S. Forest Service Form 5100-29). The programs use a statistical technique to fit a continuous distribution to a set of sampled data. The goodness-of-fit program is applicable to data other than the fire history. Data summaries illustrate analysis of fire occurrence,...
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Kelly, G. L. (Principal Investigator); Bosley, R. J.
1973-01-01
The author has identified the following significant results. The land use category of subimage regions over Kansas within an MSS image can be identified with an accuracy of about 70% using the textural-spectral features of the multi-images from the four MSS bands.
FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.
Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei
2016-10-01
Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets ( e.g. , application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.
FacetGist: Collective Extraction of Document Facets in Large Technical Corpora
Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei
2017-01-01
Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept to facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes. PMID:28210517
Santos, Maximillan Leite; Magalhães, Chaiana Froés; da Rosa, Marcelo Barcellos; de Assis Santos, Daniel; Brasileiro, Beatriz Gonçalves; de Carvalho, Leandro Machado; da Silva, Marcelo Barreto; Zani, Carlos Leomar; de Siqueira, Ezequias Pessoa; Peres, Rodrigo Loreto; Andrade, Anderson Assunção
2013-12-01
The effects of different solvents and extraction techniques upon the phytochemical profile and anti-Trichophyton activity of extracts from Piper aduncum leaves were evaluated. Extract done by maceration method with ethanol has higher content of sesquiterpenes and antifungal activity. This extract may be useful as an alternative treatment for dermatophytosis.
Santos, Maximillan Leite; Magalhães, Chaiana Froés; da Rosa, Marcelo Barcellos; de Assis Santos, Daniel; Brasileiro, Beatriz Gonçalves; de Carvalho, Leandro Machado; da Silva, Marcelo Barreto; Zani, Carlos Leomar; de Siqueira, Ezequias Pessoa; Peres, Rodrigo Loreto; Andrade, Anderson Assunção
2013-01-01
The effects of different solvents and extraction techniques upon the phytochemical profile and anti-Trichophyton activity of extracts from Piper aduncum leaves were evaluated. Extract done by maceration method with ethanol has higher content of sesquiterpenes and antifungal activity. This extract may be useful as an alternative treatment for dermatophytosis. PMID:24688522
Coupling Analysis of Heat Island Effects, Vegetation Coverage and Urban Flood in Wuhan
NASA Astrophysics Data System (ADS)
Liu, Y.; Liu, Q.; Fan, W.; Wang, G.
2018-04-01
In this paper, satellite image, remote sensing technique and geographic information system technique are main technical bases. Spectral and other factors comprehensive analysis and visual interpretation are main methods. We use GF-1 and Landsat8 remote sensing satellite image of Wuhan as data source, and from which we extract vegetation distribution, urban heat island relative intensity distribution map and urban flood submergence range. Based on the extracted information, through spatial analysis and regression analysis, we find correlations among heat island effect, vegetation coverage and urban flood. The results show that there is a high degree of overlap between of urban heat island and urban flood. The area of urban heat island has buildings with little vegetation cover, which may be one of the reasons for the local heavy rainstorms. Furthermore, the urban heat island has a negative correlation with vegetation coverage, and the heat island effect can be alleviated by the vegetation to a certain extent. So it is easy to understand that the new industrial zones and commercial areas which under constructions distribute in the city, these land surfaces becoming bare or have low vegetation coverage, can form new heat islands easily.
Automatic video summarization driven by a spatio-temporal attention model
NASA Astrophysics Data System (ADS)
Barland, R.; Saadane, A.
2008-02-01
According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
Context-based automated defect classification system using multiple morphological masks
Gleason, Shaun S.; Hunt, Martin A.; Sari-Sarraf, Hamed
2002-01-01
Automatic detection of defects during the fabrication of semiconductor wafers is largely automated, but the classification of those defects is still performed manually by technicians. This invention includes novel digital image analysis techniques that generate unique feature vector descriptions of semiconductor defects as well as classifiers that use these descriptions to automatically categorize the defects into one of a set of pre-defined classes. Feature extraction techniques based on multiple-focus images, multiple-defect mask images, and segmented semiconductor wafer images are used to create unique feature-based descriptions of the semiconductor defects. These feature-based defect descriptions are subsequently classified by a defect classifier into categories that depend on defect characteristics and defect contextual information, that is, the semiconductor process layer(s) with which the defect comes in contact. At the heart of the system is a knowledge database that stores and distributes historical semiconductor wafer and defect data to guide the feature extraction and classification processes. In summary, this invention takes as its input a set of images containing semiconductor defect information, and generates as its output a classification for the defect that describes not only the defect itself, but also the location of that defect with respect to the semiconductor process layers.
Ashok, Praveen C.; Praveen, Bavishna B.; Bellini, Nicola; Riches, Andrew; Dholakia, Kishan; Herrington, C. Simon
2013-01-01
We report a multimodal optical approach using both Raman spectroscopy and optical coherence tomography (OCT) in tandem to discriminate between colonic adenocarcinoma and normal colon. Although both of these non-invasive techniques are capable of discriminating between normal and tumour tissues, they are unable individually to provide both the high specificity and high sensitivity required for disease diagnosis. We combine the chemical information derived from Raman spectroscopy with the texture parameters extracted from OCT images. The sensitivity obtained using Raman spectroscopy and OCT individually was 89% and 78% respectively and the specificity was 77% and 74% respectively. Combining the information derived using the two techniques increased both sensitivity and specificity to 94% demonstrating that combining complementary optical information enhances diagnostic accuracy. These data demonstrate that multimodal optical analysis has the potential to achieve accurate non-invasive cancer diagnosis. PMID:24156073
Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo
2007-01-01
A brain–computer interface (BCI) can be defined as any system that can track the person's intent which is embedded in his/her brain activity and, from it alone, translate the intention into commands of a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable and MEG could provide more specific information that could be later exploited also through EEG signals. The first two BCI steps require set up of the appropriate experimental protocol while recording the brain signal and then to extract interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to provide an overview of a new procedure we recently developed, named functional source separation (FSS). As it comes from the blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, remaining blind to the biophysical nature of the signal sources. FSS returns the single trial source activity, estimates the time course of a neuronal pool along different experimental states on the basis of a specific functional requirement in a specific time period, and uses the simulated annealing as the optimization procedure allowing the exploit of functional constraints non-differentiable. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful to develop BCI feedback control systems. This review provides a description of the FSS technique, a promising tool for the BCI community for online electrophysiological feature extraction, and offers interesting information to develop BCI applications to sustain hand control in stroke patients. PMID:17331989
Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C
2016-01-28
Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of organic persistent pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also commented in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent assisted extraction techniques are preferred for leaching of PBDEs, and liquid phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Périno-Issartier, Sandrine; Ginies, Christian; Cravotto, Giancarlo; Chemat, Farid
2013-08-30
A total of eight extraction techniques ranging from conventional methods (hydrodistillation (HD), steam distillation (SD), turbohydrodistillation (THD)), through innovative techniques (ultrasound assisted extraction (US-SD) and finishing with microwave assisted extraction techniques such as In situ microwave-generated hydrodistillation (ISMH), microwave steam distillation (MSD), microwave hydrodiffusion and gravity (MHG), and microwave steam diffusion (MSDf)) were used to extract essential oil from lavandin flowers and their results were compared. Extraction time, yield, essential oil composition and sensorial analysis were considered as the principal terms of comparison. The essential oils extracted using the more innovative processes were quantitatively (yield) and qualitatively (aromatic profile) similar to those obtained from the conventional techniques. The method which gave the best results was the microwave hydrodiffusion and gravity (MHG) method which gave reduced extraction time (30min against 220min for SD) and gave no differences in essential oil yield and sensorial perception. Copyright © 2013 Elsevier B.V. All rights reserved.
Context Oriented Information Integration
NASA Astrophysics Data System (ADS)
Mohania, Mukesh; Bhide, Manish; Roy, Prasan; Chakaravarthy, Venkatesan T.; Gupta, Himanshu
Faced with growing knowledge management needs, enterprises are increasingly realizing the importance of seamlessly integrating critical business information distributed across both structured and unstructured data sources. Academicians have focused on this problem but there still remain a lot of obstacles for its widespread use in practice. One of the key problems is the absence of schema in unstructured text. In this paper we present a new paradigm for integrating information which overcomes this problem - that of Context Oriented Information Integration. The goal is to integrate unstructured data with the structured data present in the enterprise and use the extracted information to generate actionable insights for the enterprise. We present two techniques which enable context oriented information integration and show how they can be used for solving real world problems.
A new method for recognizing hand configurations of Brazilian gesture language.
Costa Filho, C F F; Dos Santos, B L; de Souza, R S; Dos Santos, J R; Costa, M G F
2016-08-01
This paper describes a new method for recognizing hand configurations of the Brazilian Gesture Language - LIBRAS - using depth maps obtained with a Kinect® camera. The proposed method comprised three phases: hand segmentation, feature extraction, and classification. The segmentation phase is independent from the background and depends only on pixel depth information. Using geometric operations and numerical normalization, the feature extraction process was done independent from rotation and translation. The features are extracted employing two techniques: (2D)2LDA and (2D)2PCA. The classification is made with a novelty classifier. A robust database was constructed for classifier evaluation, with 12,200 images of LIBRAS and 200 gestures of each hand configuration. The best accuracy obtained was 95.41%, which was greater than previous values obtained in the literature.
Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov
Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V
2016-01-01
Objective: Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. Methods: We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. Results and Discussion: The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. PMID:27013523
Meta-Generalis: A Novel Method for Structuring Information from Radiology Reports
Barbosa, Flavio; Traina, Agma Jucci
2016-01-01
Summary Background A structured report for imaging exams aims at increasing the precision in information retrieval and communication between physicians. However, it is more concise than free text and may limit specialists’ descriptions of important findings not covered by pre-defined structures. A computational ontological structure derived from free texts designed by specialists may be a solution for this problem. Therefore, the goal of our study was to develop a methodology for structuring information in radiology reports covering specifications required for the Brazilian Portuguese language, including the terminology to be used. Methods We gathered 1,701 radiological reports of magnetic resonance imaging (MRI) studies of the lumbosacral spine from three different institutions. Techniques of text mining and ontological conceptualization of lexical units extracted were used to structure information. Ten radiologists, specialists in lumbosacral MRI, evaluated the textual superstructure and terminology extracted using an electronic questionnaire. Results The established methodology consists of six steps: 1) collection of radiology reports of a specific MRI examination; 2) textual decomposition; 3) normalization of lexical units; 4) identification of textual superstructures; 5) conceptualization of candidate-terms; and 6) evaluation of superstructures and extracted terminology by experts using an electronic questionnaire. Three different textual superstructures were identified, with terminological variations in the names of their textual categories. The number of candidate-terms conceptualized was 4,183, yielding 727 concepts. There were a total of 13,963 relationships between candidate-terms and concepts and 789 relationships among concepts. Conclusions The proposed methodology allowed structuring information in a more intuitive and practical way. Indications of three textual superstructures, extraction of lexicon units and the normalization and ontologically conceptualization were achieved while maintaining references to their respective categories and free text radiology reports. PMID:27580980
Meta-generalis: A novel method for structuring information from radiology reports.
Barbosa, Flavio; Traina, Agma Jucci; Muglia, Valdair Francisco
2016-08-24
A structured report for imaging exams aims at increasing the precision in information retrieval and communication between physicians. However, it is more concise than free text and may limit specialists' descriptions of important findings not covered by pre-defined structures. A computational ontological structure derived from free texts designed by specialists may be a solution for this problem. Therefore, the goal of our study was to develop a methodology for structuring information in radiology reports covering specifications required for the Brazilian Portuguese language, including the terminology to be used. We gathered 1,701 radiological reports of magnetic resonance imaging (MRI) studies of the lumbosacral spine from three different institutions. Techniques of text mining and ontological conceptualization of lexical units extracted were used to structure information. Ten radiologists, specialists in lumbosacral MRI, evaluated the textual superstructure and terminology extracted using an electronic questionnaire. The established methodology consists of six steps: 1) collection of radiology reports of a specific MRI examination; 2) textual decomposition; 3) normalization of lexical units; 4) identification of textual superstructures; 5) conceptualization of candidate-terms; and 6) evaluation of superstructures and extracted terminology by experts using an electronic questionnaire. Three different textual superstructures were identified, with terminological variations in the names of their textual categories. The number of candidate-terms conceptualized was 4,183, yielding 727 concepts. There were a total of 13,963 relationships between candidate-terms and concepts and 789 relationships among concepts. The proposed methodology allowed structuring information in a more intuitive and practical way. Indications of three textual superstructures, extraction of lexicon units and the normalization and ontologically conceptualization were achieved while maintaining references to their respective categories and free text radiology reports.
Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov.
Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V; Xu, Hua
2016-07-01
Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Alastruey, Jordi; Clifton, David A; Beale, Richard; Watkinson, Peter J
2017-05-01
Breathing rate (BR) can be estimated by extracting respiratory signals from the electrocardiogram (ECG) or photoplethysmogram (PPG). The extracted respiratory signals may be influenced by several technical and physiological factors. In this study, our aim was to determine how technical and physiological factors influence the quality of respiratory signals. Using a variety of techniques 15 respiratory signals were extracted from the ECG, and 11 from PPG signals collected from 57 healthy subjects. The quality of each respiratory signal was assessed by calculating its correlation with a reference oral-nasal pressure respiratory signal using Pearson's correlation coefficient. Relevant results informing device design and clinical application were obtained. The results informing device design were: (i) seven out of 11 respiratory signals were of higher quality when extracted from finger PPG compared to ear PPG; (ii) laboratory equipment did not provide higher quality of respiratory signals than a clinical monitor; (iii) the ECG provided higher quality respiratory signals than the PPG; (iv) during downsampling of the ECG and PPG significant reductions in quality were first observed at sampling frequencies of <250 Hz and <16 Hz respectively. The results informing clinical application were: (i) frequency modulation-based respiratory signals were generally of lower quality in elderly subjects compared to young subjects; (ii) the qualities of 23 out of 26 respiratory signals were reduced at elevated BRs; (iii) there were no differences associated with gender. Recommendations based on the results are provided regarding device designs for BR estimation, and clinical applications. The dataset and code used in this study are publicly available.
Higgins, Denice; Rohrlach, Adam B.; Kaidonis, John; Townsend, Grant; Austin, Jeremy J.
2015-01-01
Major advances in genetic analysis of skeletal remains have been made over the last decade, primarily due to improvements in post-DNA-extraction techniques. Despite this, a key challenge for DNA analysis of skeletal remains is the limited yield of DNA recovered from these poorly preserved samples. Enhanced DNA recovery by improved sampling and extraction techniques would allow further advancements. However, little is known about the post-mortem kinetics of DNA degradation and whether the rate of degradation varies between nuclear and mitochondrial DNA or across different skeletal tissues. This knowledge, along with information regarding ante-mortem DNA distribution within skeletal elements, would inform sampling protocols facilitating development of improved extraction processes. Here we present a combined genetic and histological examination of DNA content and rates of DNA degradation in the different tooth tissues of 150 human molars over short-medium post-mortem intervals. DNA was extracted from coronal dentine, root dentine, cementum and pulp of 114 teeth via a silica column method and the remaining 36 teeth were examined histologically. Real time quantification assays based on two nuclear DNA fragments (67 bp and 156 bp) and one mitochondrial DNA fragment (77 bp) showed nuclear and mitochondrial DNA degraded exponentially, but at different rates, depending on post-mortem interval and soil temperature. In contrast to previous studies, we identified differential survival of nuclear and mtDNA in different tooth tissues. Futhermore histological examination showed pulp and dentine were rapidly affected by loss of structural integrity, and pulp was completely destroyed in a relatively short time period. Conversely, cementum showed little structural change over the same time period. Finally, we confirm that targeted sampling of cementum from teeth buried for up to 16 months can provide a reliable source of nuclear DNA for STR-based genotyping using standard extraction methods, without the need for specialised equipment or large-volume demineralisation steps. PMID:25992635
Web image retrieval using an effective topic and content-based technique
NASA Astrophysics Data System (ADS)
Lee, Ching-Cheng; Prabhakara, Rashmi
2005-03-01
There has been an exponential growth in the amount of image data that is available on the World Wide Web since the early development of Internet. With such a large amount of information and image available and its usefulness, an effective image retrieval system is thus greatly needed. In this paper, we present an effective approach with both image matching and indexing techniques that improvise on existing integrated image retrieval methods. This technique follows a two-phase approach, integrating query by topic and query by example specification methods. In the first phase, The topic-based image retrieval is performed by using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. This technique consists of a focused crawler that not only provides for the user to enter the keyword for the topic-based search but also, the scope in which the user wants to find the images. In the second phase, we use query by example specification to perform a low-level content-based image match in order to retrieve smaller and relatively closer results of the example image. From this, information related to the image feature is automatically extracted from the query image. The main objective of our approach is to develop a functional image search and indexing technique and to demonstrate that better retrieval results can be achieved.
Availability of heavy metals in minesoils measured by different methods
NASA Astrophysics Data System (ADS)
Lago, Manoel; Arenas, Daniel; Vega, Flora; Andrade, Luisa
2013-04-01
Most of environmental regulations concerning soil pollution commonly include the total heavy metal content as the reference for determining contamination levels. Nevertheless the total content includes all different chemical forms and it rarely gives information on mobility, availability and toxicity (Pueyo et al., 2004). To be able to determine the concentrations of contaminants that cause toxicity it is important to study the available content, the one that can interact with an organism and be incorporated in its structure (Vangronsveld and Cunningham, 1998). There are many techniques that determine the operationally defined as available content in soils. Most of them use a reagent that causes the displacement of the ions by electrostatic attraction (Pueyo et al., 2004). The aim of this work is to compare the agreement among different extractants (Cl2Ca, EDTA, DTPA, bidistilled water (BDW) and low molecular weight organic acids (LMWOA) when Ni and Zn concentrations are measured in the extractions from five mine soils (Touro, Spain). The sequence of soils according to total contents of Ni and Zn is S4>S5>S1>S3>S2 and S4>S1>S5>S2>S3, respectively. In all cases Zn total contents are higher than Ni varying from two times higher (S5) to four times higher (S2). Zn concentration is also higher than Ni in the Cl2Ca extractions but the opposite happens in DTPA extractions. Both metal concentrations in the EDTA, BDW and LMWOA extractions are quite similar in each soil. This first approximation already shows there is no agreement among the different techniques used for determining heavy metal availability in soils. Nevertheless it was found that soils sequence according to Zn and Ni concentrations in all available extractions techniques (with the exception of BDW) is the same. According to the Ni and Zn contents in Cl2Ca, DTPA, EDTA and LMWOA extractions the sequence is S3> S4> S5> S1> S2. The S3 is the soil with the highest content of available Ni and Zn whilst it is the soil with the lowest total Zn content and one of those with the lowest Ni one. Even the sequence obtained from BDW extractions is different (S4> S3> S2> S1> S5) the S3 soil also possess one of the highest amounts of available Ni and Zn. Therefore the information given by the BDW technique is different than the other ones used for determining available contents of Ni and Zn since DTPA, Cl2Ca, EDTA and LMWOA cause the displacement of both ions from soil matrix towards the soil solution. Acknowledgments This research was supported by Project CGL2010-16765 (MICINN-FEDER). F.A. Vega and D. Arenas-Lago acknowledge the Ministry of Science and Innovation and the University of Vigo for the Ramón y Cajal and FPI-MICINN, respectively. References Pueyo, M., López-Sánchez, J.F., Rauret, G. 2004. Analytica Chimica Acta. 504. 217-226. Vangronsveld, J.; Cunningham, S.D. 1998. Metal-Contaminated Soils: In-Situ Inactivation and Phytoremediation. Springer-Verlag, Berlin, Germany.
Review of online coupling of sample preparation techniques with liquid chromatography.
Pan, Jialiang; Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke
2014-03-07
Sample preparation is still considered as the bottleneck of the whole analytical procedure, and efforts has been conducted towards the automation, improvement of sensitivity and accuracy, and low comsuption of organic solvents. Development of online sample preparation techniques (SP) coupled with liquid chromatography (LC) is a promising way to achieve these goals, which has attracted great attention. This article reviews the recent advances on the online SP-LC techniques. Various online SP techniques have been described and summarized, including solid-phase-based extraction, liquid-phase-based extraction assisted with membrane, microwave assisted extraction, ultrasonic assisted extraction, accelerated solvent extraction and supercritical fluids extraction. Specially, the coupling approaches of online SP-LC systems and the corresponding interfaces have been discussed and reviewed in detail, such as online injector, autosampler combined with transport unit, desorption chamber and column switching. Typical applications of the online SP-LC techniques have been summarized. Then the problems and expected trends in this field are attempted to be discussed and proposed in order to encourage the further development of online SP-LC techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
Data mining and medical world: breast cancers' diagnosis, treatment, prognosis and challenges.
Oskouei, Rozita Jamili; Kor, Nasroallah Moradi; Maleki, Saeid Abbasi
2017-01-01
The amount of data in electronic and real world is constantly on the rise. Therefore, extracting useful knowledge from the total available data is very important and time consuming task. Data mining has various techniques for extracting valuable information or knowledge from data. These techniques are applicable for all data that are collected inall fields of science. Several research investigations are published about applications of data mining in various fields of sciences such as defense, banking, insurances, education, telecommunications, medicine and etc. This investigation attempts to provide a comprehensive survey about applications of data mining techniques in breast cancer diagnosis, treatment & prognosis till now. Further, the main challenges in these area is presented in this investigation. Since several research studies currently are going on in this issues, therefore, it is necessary to have a complete survey about all researches which are completed up to now, along with the results of those studies and important challenges which are currently exist in this area for helping young researchers and presenting to them the main problems that are still exist in this area.
Data mining and medical world: breast cancers’ diagnosis, treatment, prognosis and challenges
Oskouei, Rozita Jamili; Kor, Nasroallah Moradi; Maleki, Saeid Abbasi
2017-01-01
The amount of data in electronic and real world is constantly on the rise. Therefore, extracting useful knowledge from the total available data is very important and time consuming task. Data mining has various techniques for extracting valuable information or knowledge from data. These techniques are applicable for all data that are collected inall fields of science. Several research investigations are published about applications of data mining in various fields of sciences such as defense, banking, insurances, education, telecommunications, medicine and etc. This investigation attempts to provide a comprehensive survey about applications of data mining techniques in breast cancer diagnosis, treatment & prognosis till now. Further, the main challenges in these area is presented in this investigation. Since several research studies currently are going on in this issues, therefore, it is necessary to have a complete survey about all researches which are completed up to now, along with the results of those studies and important challenges which are currently exist in this area for helping young researchers and presenting to them the main problems that are still exist in this area. PMID:28401016
Dhanani, Tushar; Singh, Raghuraj; Reddy, Nagaraja; Trivedi, A; Kumar, Satyanshu
2017-05-01
Senna is an important medicinal plant and is used in many Ayurvedic formulations. Dianthraquinone glucosides are the main bioactive phytochemicals present in leaves and pods of senna. The extraction efficiency in terms of yield and composition of the extract of senna prepared using both conventional (cold percolation at room temperature and refluxing) and non conventional (ultrasound and microwave assisted solvent extraction as well as supercritical fluid extraction) techniques were compared in the present study. Also a rapid reverse phase HPLC-PDA detection method was developed and validated for the simultaneous determination of sennoside A and sennoside B in the different extracts of senna leaves. Ultrasound and microwave assisted solvent extraction techniques were more effective in terms of yield and composition of the extracts compared to cold percolation at room temperature and refluxing methods of extraction.
Lu, Chunxia; Wang, Hongxin; Lv, Wenping; Ma, Chaoyang; Lou, Zaixiang; Xie, Jun; Liu, Bo
2012-01-01
Ionic liquid was used as extraction solvents and applied to the extraction of tannins from Galla chinensis in the simultaneous ultrasonic- and microwave-assisted extraction (UMAE) technique. Several parameters of UMAE were optimised, and the results were compared with of the conventional extraction techniques. Under optimal conditions, the content of tannins was 630.2 ± 12.1 mg g⁻¹. Compared with the conventional heat-reflux extraction, maceration extraction, regular ultrasound- and microwave-assisted extraction, the proposed approach exhibited higher efficiency (11.7-22.0% enhanced) and shorter extraction time (from 6 h to 1 min). The tannins were then identified by ultraperformance liquid chromatography tandem mass spectrometry. This study suggests that ionic liquid-based UMAE is an efficient, rapid, simple and green sample preparation technique.
Protein-based stable isotope probing.
Jehmlich, Nico; Schmidt, Frank; Taubert, Martin; Seifert, Jana; Bastida, Felipe; von Bergen, Martin; Richnow, Hans-Hermann; Vogt, Carsten
2010-12-01
We describe a stable isotope probing (SIP) technique that was developed to link microbe-specific metabolic function to phylogenetic information. Carbon ((13)C)- or nitrogen ((15)N)-labeled substrates (typically with >98% heavy label) were used in cultivation experiments and the heavy isotope incorporation into proteins (protein-SIP) on growth was determined. The amount of incorporation provides a measure for assimilation of a substrate, and the sequence information from peptide analysis obtained by mass spectrometry delivers phylogenetic information about the microorganisms responsible for the metabolism of the particular substrate. In this article, we provide guidelines for incubating microbial cultures with labeled substrates and a protocol for protein-SIP. The protocol guides readers through the proteomics pipeline, including protein extraction, gel-free and gel-based protein separation, the subsequent mass spectrometric analysis of peptides and the calculation of the incorporation of stable isotopes into peptides. Extraction of proteins and the mass fingerprint measurements of unlabeled and labeled fractions can be performed in 2-3 d.
Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Wismüller, Axel
2015-01-01
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns.
NASA Technical Reports Server (NTRS)
Haste, Deepak; Azam, Mohammad; Ghoshal, Sudipto; Monte, James
2012-01-01
Health management (HM) in any engineering systems requires adequate understanding about the system s functioning; a sufficient amount of monitored data; the capability to extract, analyze, and collate information; and the capability to combine understanding and information for HM-related estimation and decision-making. Rotorcraft systems are, in general, highly complex. Obtaining adequate understanding about functioning of such systems is quite difficult, because of the proprietary (restricted access) nature of their designs and dynamic models. Development of an EIM (exact inverse map) solution for rotorcraft requires a process that can overcome the abovementioned difficulties and maximally utilize monitored information for HM facilitation via employing advanced analytic techniques. The goal was to develop a versatile HM solution for rotorcraft for facilitation of the Condition Based Maintenance Plus (CBM+) capabilities. The effort was geared towards developing analytic and reasoning techniques, and proving the ability to embed the required capabilities on a rotorcraft platform, paving the way for implementing the solution on an aircraft-level system for consolidation and reporting. The solution for rotorcraft can he used offboard or embedded directly onto a rotorcraft system. The envisioned solution utilizes available monitored and archived data for real-time fault detection and identification, failure precursor identification, and offline fault detection and diagnostics, health condition forecasting, optimal guided troubleshooting, and maintenance decision support. A variant of the onboard version is a self-contained hardware and software (HW+SW) package that can be embedded on rotorcraft systems. The HM solution comprises components that gather/ingest data and information, perform information/feature extraction, analyze information in conjunction with the dependency/diagnostic model of the target system, facilitate optimal guided troubleshooting, and offer decision support for optimal maintenance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun
2004-05-01
Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decodingmore » in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.« less
Medical Image Tamper Detection Based on Passive Image Authentication.
Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa
2017-12-01
Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable the telediagnosis between medical staffs and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques are proposed to control the integrity of medical images. However, they require embedding of extra information (watermark) into image before transmission. It decreases visual quality of the medical image and can cause false diagnosis. The proposed method uses passive image authentication mechanism to detect the tampered regions on medical images. Structural texture information is obtained from the medical image by using local binary pattern rotation invariant (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with scale invariant feature transform (SIFT). Tampered regions are detected by the method by matching the keypoints. The method improves the keypoint-based passive image authentication mechanism (they do not detect tampering when the smooth region is used for covering an object) by using LBPROT before keypoint extraction because smooth regions also have texture information. Experimental results show that the method detects tampered regions on the medical images even if the forged image has undergone some attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
High Resolution Imaging of the Sun with CORONAS-1
NASA Technical Reports Server (NTRS)
Karovska, Margarita
1998-01-01
We applied several image restoration and enhancement techniques, to CORONAS-I images. We carried out the characterization of the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the real PSF at a given location and time of observation, when limited a priori information is available on its characteristics. We also applied image enhancement technique to extract the small scale structure imbeded in bright large scale structures on the disk and on the limb. The results demonstrate the capability of the image post-processing to substantially increase the yield from the space observations by improving the resolution and reducing noise in the images.
Eichmiller, Jessica J; Miller, Loren M; Sorensen, Peter W
2016-01-01
Few studies have examined capture and extraction methods for environmental DNA (eDNA) to identify techniques optimal for detection and quantification. In this study, precipitation, centrifugation and filtration eDNA capture methods and six commercially available DNA extraction kits were evaluated for their ability to detect and quantify common carp (Cyprinus carpio) mitochondrial DNA using quantitative PCR in a series of laboratory experiments. Filtration methods yielded the most carp eDNA, and a glass fibre (GF) filter performed better than a similar pore size polycarbonate (PC) filter. Smaller pore sized filters had higher regression slopes of biomass to eDNA, indicating that they were potentially more sensitive to changes in biomass. Comparison of DNA extraction kits showed that the MP Biomedicals FastDNA SPIN Kit yielded the most carp eDNA and was the most sensitive for detection purposes, despite minor inhibition. The MoBio PowerSoil DNA Isolation Kit had the lowest coefficient of variation in extraction efficiency between lake and well water and had no detectable inhibition, making it most suitable for comparisons across aquatic environments. Of the methods tested, we recommend using a 1.5 μm GF filter, followed by extraction with the MP Biomedicals FastDNA SPIN Kit for detection. For quantification of eDNA, filtration through a 0.2-0.6 μm pore size PC filter, followed by extraction with MoBio PowerSoil DNA Isolation Kit was optimal. These results are broadly applicable for laboratory studies on carps and potentially other cyprinids. The recommendations can also be used to inform choice of methodology for field studies. © 2015 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, S; Jeraj, R; Galavis, P
Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [{sup 18}F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two Methods: 2D patchesmore » along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (wilcoxon α<0.05), assessed by overall range of variation, Rangevar(%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Rangevar increase 83%). The remaining ten showed higher variation in 3D extraction (average Range{sub var}increase 27%), mainly co-occurence and greylevel run-length features. Correlation of feature value extracted in 2D and feature value extracted in 3D was poor (R<0.5) in 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than 3D, ∣R∣>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters were seen for inter-feature correlation in 2D(σ=6%) than 3D (σ<1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to significantly differ between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.« less
Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo
2014-01-01
Online knowledge resources such as Medline can address most clinicians’ patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results Both precision and recall for identifying comparative studies was 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point of care decision-making. PMID:23920677
Data Assimilation to Extract Soil Moisture Information From SMAP Observations
NASA Technical Reports Server (NTRS)
Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.
2017-01-01
Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, they can be used to reduce the need for localized bias correction techniques typically implemented in data assimilation (DA) systems that tend to remove some of the independent information provided by satellite observations. Here, we use a statistical neural network (NN) algorithm to retrieve SMAP (Soil Moisture Active Passive) surface soil moisture estimates in the climatology of the NASA Catchment land surface model. Assimilating these estimates without additional bias correction is found to significantly reduce the model error and increase the temporal correlation against SMAP CalVal in situ observations over the contiguous United States. A comparison with assimilation experiments using traditional bias correction techniques shows that the NN approach better retains the independent information provided by the SMAP observations and thus leads to larger model skill improvements during the assimilation. A comparison with the SMAP Level 4 product shows that the NN approach is able to provide comparable skill improvements and thus represents a viable assimilation approach.
Memon, Abdul Hakeem; Hamil, Mohammad Shahrul Ridzuan; Laghari, Madeeha; Rithwan, Fahim; Zhari, Salman; Saeed, Mohammed Ali Ahmed; Ismail, Zhari; Majid, Amin Malik Shah Abdul
2016-09-01
Syzygium campanulatum Korth is a plant, which is a rich source of secondary metabolites (especially flavanones, chalcone, and triterpenoids). In our present study, three conventional solvent extraction (CSE) techniques and supercritical fluid extraction (SFE) techniques were performed to achieve a maximum recovery of two flavanones, chalcone, and two triterpenoids from S. campanulatum leaves. Furthermore, a Box-Behnken design was constructed for the SFE technique using pressure, temperature, and particle size as independent variables, and yields of crude extract, individual and total secondary metabolites as the dependent variables. In the CSE procedure, twenty extracts were produced using ten different solvents and three techniques (maceration, soxhletion, and reflux). An enriched extract of five secondary metabolites was collected using n-hexane:methanol (1:1) soxhletion. Using food-grade ethanol as a modifier, the SFE methods produced a higher recovery (25.5%‒84.9%) of selected secondary metabolites as compared to the CSE techniques (0.92%‒66.00%).
Memon, Abdul Hakeem; Hamil, Mohammad Shahrul Ridzuan; Laghari, Madeeha; Rithwan, Fahim; Zhari, Salman; Saeed, Mohammed Ali Ahmed; Ismail, Zhari; Majid, Amin Malik Shah Abdul
2016-01-01
Syzygium campanulatum Korth is a plant, which is a rich source of secondary metabolites (especially flavanones, chalcone, and triterpenoids). In our present study, three conventional solvent extraction (CSE) techniques and supercritical fluid extraction (SFE) techniques were performed to achieve a maximum recovery of two flavanones, chalcone, and two triterpenoids from S. campanulatum leaves. Furthermore, a Box-Behnken design was constructed for the SFE technique using pressure, temperature, and particle size as independent variables, and yields of crude extract, individual and total secondary metabolites as the dependent variables. In the CSE procedure, twenty extracts were produced using ten different solvents and three techniques (maceration, soxhletion, and reflux). An enriched extract of five secondary metabolites was collected using n-hexane:methanol (1:1) soxhletion. Using food-grade ethanol as a modifier, the SFE methods produced a higher recovery (25.5%‒84.9%) of selected secondary metabolites as compared to the CSE techniques (0.92%‒66.00%). PMID:27604860
Extraction of kiwi seed oil: Soxhlet versus four different non-conventional techniques.
Cravotto, Giancarlo; Bicchi, Carlo; Mantegna, Stefano; Binello, Arianna; Tomao, Valerie; Chemat, Farid
2011-06-01
Kiwi seed oil has a nutritionally interesting fatty acid profile, but a rather low oxidative stability, which requires careful extraction procedures and adequate packaging and storage. For these reasons and with the aim to achieve process intensification with shorter extraction time, lower energy consumption and higher yields, four different non-conventional techniques were experimented. Kiwi seeds were extracted in hexane using classic Soxhlet as well as under power ultrasound (US), microwaves (MWs; closed vessel) and MW-integrated Soxhlet. Supercritical CO₂ was also employed and compared to the other techniques in term of yield, extraction time, fatty acid profiles and organoleptic properties. All these non-conventional techniques are fast, effective and safe. A sensory evaluation test showed the presence of off-flavours in oil samples extracted by Soxhlet and US, an indicator of partial degradation.
Hassan, Afifa Afifi
1982-01-01
The gas evolution and the strontium carbonate precipitation techniques to extract dissolved inorganic carbon (DIC) for stable carbon isotope analysis were investigated. Theoretical considerations, involving thermodynamic calculations and computer simulation pointed out several possible sources of error in delta carbon-13 measurements of the DIC and demonstrated the need for experimental evaluation of the magnitude of the error. An alternative analytical technique, equilibration with out-gassed vapor phase, is proposed. The experimental studies revealed that delta carbon-13 of the DIC extracted from a 0.01 molar NaHC03 solution by both techniques agreed within 0.1 per mil with the delta carbon-13 of the DIC extracted by the precipitation technique, and an increase of only 0.27 per mil in that extracted by the gas evolution technique. The efficiency of extraction of DIC decreased with sulfate concentration in the precipitation technique but was independent of sulfate concentration in the gas evolution technique. Both the precipitation and gas evolution technique were found to be satisfactory for extraction of DIC from different kinds of natural water for stable carbon isotope analysis, provided appropriate precautions are observed in handling the samples. For example, it was found that diffusion of atmospheric carbon dioxide does alter the delta carbon-13 of the samples contained in polyethylene bottles; filtration and drying in the air change the delta carbon-13 of the samples contained in polyethylene bottles; filtration and drying in the air change the delta carbon-13 of the precipitation technique; hot manganese dioxide purification changes the delta carbon-13 of carbon dioxide. (USGS)
The 1984 NASA/ASEE summer faculty fellowship program
NASA Technical Reports Server (NTRS)
1984-01-01
The assessment of forest productivity and associated nitrogen flux in a number of conifer ecosystems is described. As a base line study of acid precipitation in the Sierra Nevada, involved is the extraction and integration of a number of data planes describing the terrain, soils, lithology, vegetation cover and structure, and microclimate of the region. The development of automated techniques to extract topographic networks (stream canyons and ridge lines) for use as a landscrape skeleton to organize and integrate data sets into an efficient geographical information system is examined. The software is written in both FORTRAN and C, and is portable to a number of different computer environments with minimal modification.
Population Estimation in Singapore Based on Remote Sensing and Open Data
NASA Astrophysics Data System (ADS)
Guo, H.; Cao, K.; Wang, P.
2017-09-01
Population estimation statistics are widely used in government, commercial and educational sectors for a variety of purposes. With growing emphases on real-time and detailed population information, data users nowadays have switched from traditional census data to more technology-based data source such as LiDAR point cloud and High-Resolution Satellite Imagery. Nevertheless, such data are costly and periodically unavailable. In this paper, the authors use West Coast District, Singapore as a case study to investigate the applicability and effectiveness of using satellite image from Google Earth for extraction of building footprint and population estimation. At the same time, volunteered geographic information (VGI) is also utilized as ancillary data for building footprint extraction. Open data such as Open Street Map OSM could be employed to enhance the extraction process. In view of challenges in building shadow extraction, this paper discusses several methods including buffer, mask and shape index to improve accuracy. It also illustrates population estimation methods based on building height and number of floor estimates. The results show that the accuracy level of housing unit method on population estimation can reach 92.5 %, which is remarkably accurate. This paper thus provides insights into techniques for building extraction and fine-scale population estimation, which will benefit users such as urban planners in terms of policymaking and urban planning of Singapore.
Estimation of option-implied risk-neutral into real-world density by using calibration function
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-04-01
Option prices contain crucial information that can be used as a reflection of future development of an underlying assets' price. The main objective of this study is to extract the risk-neutral density (RND) and the risk-world density (RWD) of option prices. A volatility function technique is applied by using a fourth order polynomial interpolation to obtain the RNDs. Then, a calibration function is used to convert the RNDs into RWDs. There are two types of calibration function which are parametric and non-parametric calibrations. The density is extracted from the Dow Jones Industrial Average (DJIA) index options with a one month constant maturity from January 2009 until December 2015. The performance of RNDs and RWDs extracted are evaluated by using a density forecasting test. This study found out that the RWDs obtain can provide an accurate information regarding the price of the underlying asset in future compared to that of the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration has a better accuracy than other densities.
NASA Technical Reports Server (NTRS)
Lulla, Kamlesh P.; Helfert, Michael R.
1989-01-01
Sambhar Salt Lake is the largest salt lake (230 sq km) in India, situated in the northwest near Jaipur. Analysis of Space Shuttle photographs of this ephemeral lake reveals that water levels and lake basin land-use information can be extracted by both the digital and manual analysis techniques. Seasonal characteristics captured by the two Shuttle photos used in this study show that additional land use/cover categories can be mapped from the dry season photos. This additional information is essential for precise cartographic updates, and provides seasonal hydrologic profiles and inputs for potential mesoscale climate modeling. This paper extends the digitization and mensuration techniques originally developed for space photography and applied to other regions (e.g., Lake Chad, Africa, and Great Salt Lake, USA).
Reconstructing biochemical pathways from time course data.
Srividhya, Jeyaraman; Crampin, Edmund J; McSharry, Patrick E; Schnell, Santiago
2007-03-01
Time series data on biochemical reactions reveal transient behavior, away from chemical equilibrium, and contain information on the dynamic interactions among reacting components. However, this information can be difficult to extract using conventional analysis techniques. We present a new method to infer biochemical pathway mechanisms from time course data using a global nonlinear modeling technique to identify the elementary reaction steps which constitute the pathway. The method involves the generation of a complete dictionary of polynomial basis functions based on the law of mass action. Using these basis functions, there are two approaches to model construction, namely the general to specific and the specific to general approach. We demonstrate that our new methodology reconstructs the chemical reaction steps and connectivity of the glycolytic pathway of Lactococcus lactis from time course experimental data.
Pineda-Vargas, C A; Eisa, M E M; Rodgers, A L
2009-03-01
The micro-PIXE and RBS techniques are used to investigate the matrix as well as the trace elemental composition of calcium-rich human tissues on a microscopic scale. This paper deals with the spatial distribution of trace metals in hard human tissues such as kidney stone concretions, undertaken at the nuclear microprobe (NMP) facility. Relevant information about ion beam techniques used for material characterization will be discussed. Mapping correlation between different trace metals to extract information related to micro-regions composition will be illustrated with an application using proton energies of 1.5 and 3.0 MeV and applied to a comparative study for human kidney stone concretions nucleation region analysis from two different population groups (Sudan and South Africa).
Band Excitation for Scanning Probe Microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jesse, Stephen
2017-01-02
The Band Excitation (BE) technique for scanning probe microscopy uses a precisely determined waveform that contains specific frequencies to excite the cantilever or sample in an atomic force microscope to extract more information, and more reliable information from a sample. There are a myriad of details and complexities associated with implementing the BE technique. There is therefore a need to have a user friendly interface that allows typical microscopists access to this methodology. This software enables users of atomic force microscopes to easily: build complex band-excitation waveforms, set-up the microscope scanning conditions, configure the input and output electronics for generatemore » the waveform as a voltage signal and capture the response of the system, perform analysis on the captured response, and display the results of the measurement.« less
A study of actions in operative notes.
Wang, Yan; Pakhomov, Serguei; Burkart, Nora E; Ryan, James O; Melton, Genevieve B
2012-01-01
Operative notes contain rich information about techniques, instruments, and materials used in procedures. To assist development of effective information extraction (IE) techniques for operative notes, we investigated the sublanguage used to describe actions within the operative report 'procedure description' section. Deep parsing results of 362,310 operative notes with an expanded Stanford parser using the SPECIALIST Lexicon resulted in 200 verbs (92% coverage) including 147 action verbs. Nominal action predicates for each action verb were gathered from WordNet, SPECIALIST Lexicon, New Oxford American Dictionary and Stedman's Medical Dictionary. Coverage gaps were seen in existing lexical, domain, and semantic resources (Unified Medical Language System (UMLS) Metathesaurus, SPECIALIST Lexicon, WordNet and FrameNet). Our findings demonstrate the need to construct surgical domain-specific semantic resources for IE from operative notes.
Interrogating Bronchoalveolar Lavage Samples via Exclusion-Based Analyte Extraction.
Tokar, Jacob J; Warrick, Jay W; Guckenberger, David J; Sperger, Jamie M; Lang, Joshua M; Ferguson, J Scott; Beebe, David J
2017-06-01
Although average survival rates for lung cancer have improved, earlier and better diagnosis remains a priority. One promising approach to assisting earlier and safer diagnosis of lung lesions is bronchoalveolar lavage (BAL), which provides a sample of lung tissue as well as proteins and immune cells from the vicinity of the lesion, yet diagnostic sensitivity remains a challenge. Reproducible isolation of lung epithelia and multianalyte extraction have the potential to improve diagnostic sensitivity and provide new information for developing personalized therapeutic approaches. We present the use of a recently developed exclusion-based, solid-phase-extraction technique called SLIDE (Sliding Lid for Immobilized Droplet Extraction) to facilitate analysis of BAL samples. We developed a SLIDE protocol for lung epithelial cell extraction and biomarker staining of patient BALs, testing both EpCAM and Trop2 as capture antigens. We characterized captured cells using TTF1 and p40 as immunostaining biomarkers of adenocarcinoma and squamous cell carcinoma, respectively. We achieved up to 90% (EpCAM) and 84% (Trop2) extraction efficiency of representative tumor cell lines. We then used the platform to process two patient BAL samples in parallel within the same sample plate to demonstrate feasibility and observed that Trop2-based extraction potentially extracts more target cells than EpCAM-based extraction.
Application of fermentation for isoflavone extraction from soy molasses
NASA Astrophysics Data System (ADS)
Duru, K. C.; Kovaleva, E. G.; Glukhareva, T. V.
2017-09-01
Extraction of isoflavones from soy products remains a major challenge for researchers. Different extraction techniques have been employed but the need to use a cheap green extraction technique remains the main focus. This study applied fermentation of soy molasses using Saccharomyces cerevisiae for extraction of isoflavones and compared this technique to the conventional extraction method. The aluminum chloride colorimetric method was used for the determination of total flavonoid content of extracts. The highest yield was observed from extraction using ethyl acetate after fermentation of soy molasses and the lowest one was given by the extract from conventional extraction method. The DPPH radical scavenging activities of the extracts were also compared. The extract obtained using ethyl acetate after fermentation showed the highest antioxidant activity (0.0269 meq), while extract from conventional extraction had the lowest antioxidant activity (0.0055 meq). The effect of time on daidzein yield was studied using HPLC standard addition method. Daidzein concentration was higher in extract obtained at t = 80 min (3.82 ± 0.11 mg of daidzein /g of extract) as compared to that obtained at t = 60 min (2.89 ± 0.10 mg of daidzein /g of extract).
Enzyme assisted extraction of biomolecules as an approach to novel extraction technology: A review.
Nadar, Shamraja S; Rao, Priyanka; Rathod, Virendra K
2018-06-01
An interest in the development of extraction techniques of biomolecules from various natural sources has increased in recent years due to their potential applications particularly for food and nutraceutical purposes. The presence of polysaccharides such as hemicelluloses, starch, pectin inside the cell wall, reduces the extraction efficiency of conventional extraction techniques. Conventional techniques also suffer from low extraction yields, time inefficiency and inferior extract quality due to traces of organic solvents present in them. Hence, there is a need of the green and novel extraction methods to recover biomolecules. The present review provides a holistic insight to various aspects related to enzyme aided extraction. Applications of enzymes in the recovery of various biomolecules such as polyphenols, oils, polysaccharides, flavours and colorants have been highlighted. Additionally, the employment of hyphenated extraction technologies can overcome some of the major drawbacks of enzyme based extraction such as longer extraction time and immoderate use of solvents. This review also includes hyphenated intensification techniques by coupling conventional methods with ultrasound, microwave, high pressure and supercritical carbon dioxide. The last section gives an insight on application of enzyme immobilization as a strategy for large scale extraction. Immobilization of enzymes on magnetic nanoparticles can be employed to enhance the operational performance of the system by multiple use of expensive enzymes making them industrially and economically feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Lynn L.; Trease, Harold E.; Fowler, John
2007-03-15
One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process.more » The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.« less
Extraction Techniques for Polycyclic Aromatic Hydrocarbons in Soils
Lau, E. V.; Gan, S.; Ng, H. K.
2010-01-01
This paper aims to provide a review of the analytical extraction techniques for polycyclic aromatic hydrocarbons (PAHs) in soils. The extraction technologies described here include Soxhlet extraction, ultrasonic and mechanical agitation, accelerated solvent extraction, supercritical and subcritical fluid extraction, microwave-assisted extraction, solid phase extraction and microextraction, thermal desorption and flash pyrolysis, as well as fluidised-bed extraction. The influencing factors in the extraction of PAHs from soil such as temperature, type of solvent, soil moisture, and other soil characteristics are also discussed. The paper concludes with a review of the models used to describe the kinetics of PAH desorption from soils during solvent extraction. PMID:20396670
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes. Results We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence. Conclusion We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
Chemat, Farid; Rombaut, Natacha; Sicaire, Anne-Gaëlle; Meullemiestre, Alice; Fabiano-Tixier, Anne-Sylvie; Abert-Vian, Maryline
2017-01-01
This review presents a complete picture of current knowledge on ultrasound-assisted extraction (UAE) in food ingredients and products, nutraceutics, cosmetic, pharmaceutical and bioenergy applications. It provides the necessary theoretical background and some details about extraction by ultrasound, the techniques and their combinations, the mechanisms (fragmentation, erosion, capillarity, detexturation, and sonoporation), applications from laboratory to industry, security, and environmental impacts. In addition, the ultrasound extraction procedures and the important parameters influencing its performance are also included, together with the advantages and the drawbacks of each UAE techniques. Ultrasound-assisted extraction is a research topic, which affects several fields of modern plant-based chemistry. All the reported applications have shown that ultrasound-assisted extraction is a green and economically viable alternative to conventional techniques for food and natural products. The main benefits are decrease of extraction and processing time, the amount of energy and solvents used, unit operations, and CO 2 emissions. Copyright © 2016 Elsevier B.V. All rights reserved.
2D DOST based local phase pattern for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new two dimensional (2-D) Discrete Orthogonal Stcokwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses 2-D DOST as preliminary preprocessing and local phase pattern to form robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, is an extension of the ideas of the continuous wavelet transform (CWT), is also known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum which is unique in TFR. After utilizing 2-D Stransform as the preprocessing and build local phase pattern from extracted phase information yield fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet or Gabor based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial database under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.
Magnetic force microscopy method and apparatus to detect and image currents in integrated circuits
Campbell, Ann. N.; Anderson, Richard E.; Cole, Jr., Edward I.
1995-01-01
A magnetic force microscopy method and improved magnetic tip for detecting and quantifying internal magnetic fields resulting from current of integrated circuits. Detection of the current is used for failure analysis, design verification, and model validation. The interaction of the current on the integrated chip with a magnetic field can be detected using a cantilevered magnetic tip. Enhanced sensitivity for both ac and dc current and voltage detection is achieved with voltage by an ac coupling or a heterodyne technique. The techniques can be used to extract information from analog circuits.
Magnetic force microscopy method and apparatus to detect and image currents in integrated circuits
Campbell, A.N.; Anderson, R.E.; Cole, E.I. Jr.
1995-11-07
A magnetic force microscopy method and improved magnetic tip for detecting and quantifying internal magnetic fields resulting from current of integrated circuits are disclosed. Detection of the current is used for failure analysis, design verification, and model validation. The interaction of the current on the integrated chip with a magnetic field can be detected using a cantilevered magnetic tip. Enhanced sensitivity for both ac and dc current and voltage detection is achieved with voltage by an ac coupling or a heterodyne technique. The techniques can be used to extract information from analog circuits. 17 figs.
New techniques for positron emission tomography in the study of human neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1993-01-01
This progress report describes accomplishments of four programs. The four programs are entitled (1) Faster,simpler processing of positron-computing precursors: New physicochemical approaches, (2) Novel solid phase reagents and methods to improve radiosynthesis and isotope production, (3) Quantitative evaluation of the extraction of information from PET images, and (4) Optimization of tracer kinetic methods for radioligand studies in PET.
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron
2017-05-01
This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
Spatio-Temporal Pattern Mining on Trajectory Data Using Arm
NASA Astrophysics Data System (ADS)
Khoshahval, S.; Farnaghi, M.; Taleai, M.
2017-09-01
Preliminary mobile was considered to be a device to make human connections easier. But today the consumption of this device has been evolved to a platform for gaming, web surfing and GPS-enabled application capabilities. Embedding GPS in handheld devices, altered them to significant trajectory data gathering facilities. Raw GPS trajectory data is a series of points which contains hidden information. For revealing hidden information in traces, trajectory data analysis is needed. One of the most beneficial concealed information in trajectory data is user activity patterns. In each pattern, there are multiple stops and moves which identifies users visited places and tasks. This paper proposes an approach to discover user daily activity patterns from GPS trajectories using association rules. Finding user patterns needs extraction of user's visited places from stops and moves of GPS trajectories. In order to locate stops and moves, we have implemented a place recognition algorithm. After extraction of visited points an advanced association rule mining algorithm, called Apriori was used to extract user activity patterns. This study outlined that there are useful patterns in each trajectory that can be emerged from raw GPS data using association rule mining techniques in order to find out about multiple users' behaviour in a system and can be utilized in various location-based applications.
Multiscale morphological filtering for analysis of noisy and complex images
NASA Astrophysics Data System (ADS)
Kher, A.; Mitra, S.
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
Multiscale Morphological Filtering for Analysis of Noisy and Complex Images
NASA Technical Reports Server (NTRS)
Kher, A.; Mitra, S.
1993-01-01
Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded with speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur the edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more details than the simple morphological filters using two-dimensional structuring elements, the limited orientations of one-dimensional elements approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of the illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in extracting information from a wide variety of images obtained with active and passive sensing techniques is discussed. Such techniques are particularly useful in obtaining more information from fusion of complex images by different sensors such as SAR, visible, and infrared.
A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint
Zou, Jiaheng
2018-01-01
With the development of related applications, indoor positioning techniques have been more and more widely developed. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database are in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under the loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps and geomagnetic information and provides a relatively accurate coordinate for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, presenting a good accuracy. The average error of the trajectory under loop closure constraint is controlled below 2.15 m. PMID:29494542
A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint.
Wang, Yan; Li, Xin; Zou, Jiaheng
2018-03-01
With the development of related applications, indoor positioning techniques have been more and more widely developed. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database are in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under the loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps and geomagnetic information and provides a relatively accurate coordinate for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, presenting a good accuracy. The average error of the trajectory under loop closure constraint is controlled below 2.15 m.
Huang, Guiqi; Dong, Sheying; Zhang, Mengfei; Zhang, Haihan; Huang, Tinglin
2016-09-15
Sample pretreatment is the critical section for residue monitoring of hazardous pollutants. In this paper, using the cellulose fabric as host matrix, three extraction sorbents such as poly (tetrahydrofuran) (PTHF), poly (ethylene glycol) (PEG) and poly (dimethyldiphenylsiloxane) (PDMDPS), were prepared on the surface of the cellulose fabric. Two practical extraction techniques including stir bar fabric phase sorptive extraction (stir bar-FPSE) and magnetic stir fabric phase sorptive extraction (magnetic stir-FPSE) have been designed, which allow stirring of fabric phase sorbent during the whole extraction process. In the meantime, three brominated flame retardants (BFRs) [tetrabromobisphenol A (TBBPA), tetrabromobisphenol A bisallylether (TBBPA-BAE), tetrabromobisphenol A bis(2,3-dibromopropyl)ether (TBBPA-BDBPE)] in the water sample were selected as model analytes for the practical evaluation of the proposed two techniques using high-performance liquid chromatography (HPLC). Moreover, various experimental conditions affecting extraction process such as the type of fabric phase, extraction time, the amount of salt and elution conditions were also investigated. Due to the large sorbent loading capacity and unique stirring performance, both techniques possessed high extraction capability and fast extraction equilibrium. Under the optimized conditions, high recoveries (90-99%) and low limits of detection (LODs) (0.01-0.05 μg L(-1)) were achieved. In addition, the reproducibility was obtained by evaluating the intraday and interday precisions with relative standard deviations (RSDs) less than 5.1% and 6.8%, respectively. The results indicated that two pretreatment techniques were promising and practical for monitoring of hazardous pollutants in the water sample. Due to low solvent consumption and high repeated use performance, proposed techniques also could meet green analytical criteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
Geologic and mineral and water resources investigations in western Colorado using ERTS-1 data
NASA Technical Reports Server (NTRS)
Knepper, D. H. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Most of the geologic information in ERTS-1 imagery can be extracted from bulk processed black and white transparencies by a skilled interpreter using standard photogeologic techniques. In central and western Colorado, the detectability of lithologic contacts on ERTS-1 imagery is closely related to the time of year the imagery was acquired. Geologic structures are the most readily extractable type of geologic information contained in ERTS images. Major tectonic features and associated minor structures can be rapidly mapped, allowing the geologic setting of a large region to be quickly accessed. Trends of geologic structures in younger sedimentary appear to strongly parallel linear trends in older metamorphic and igneous basement terrain. Linears and color anomalies mapped from ERTS imagery are closely related to loci of known mineralization in the Colorado mineral belt.
Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping
NASA Astrophysics Data System (ADS)
Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.
2016-12-01
Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied both to super-resolve the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases, and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane.
Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping
Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.
2016-01-01
Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied both to super-resolve the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases, and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane. PMID:27929085
Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification
Pham, Tuan D.
2014-01-01
The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744
Nutritional and Biochemical Profiling of Leucopaxillus candidus (Bres.) Singer Wild Mushroom.
Vieira, Vanessa; Barros, Lillian; Martins, Anabela; Ferreira, Isabel C F R
2016-01-15
The wild mushroom Leucopaxillus candidus (Bres.) Singer was studied for the first time to obtain information about its chemical composition, nutritional value and bioactivity. Free sugars, fatty acids, tocopherols, organic and phenolic acids were analysed by chromatographic techniques coupled to different detectors. L. candidus methanolic extract was tested regarding antioxidant potential (reducing power, radical scavenging activity and lipid peroxidation inhibition). L. candidus was shown to be an interesting species in terms of nutritional value, with high content in proteins and carbohydrates, but low fat levels, with the prevalence of polyunsaturated fatty acids. Mannitol was the most abundant free sugar and β-tocopherol was the main tocopherol isoform. Other compounds detected were oxalic and fumaric acids, p-hydroxybenzoic and cinnamic acids. The methanolic extract revealed antioxidant activity and did not show hepatoxicity in porcine liver primary cells. The present study provides new information about L. candidus.
Mutual information, neural networks and the renormalization group
NASA Astrophysics Data System (ADS)
Koch-Janusz, Maciej; Ringel, Zohar
2018-06-01
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.
Yamamoto, Kazuo; Iriyama, Yasutoshi; Hirayama, Tsukasa
2017-02-08
All-solid-state Li-ion batteries having incombustible solid electrolytes are promising energy storage devices because they have significant advantages in terms of safety, lifetime and energy density. Electrochemical reactions, namely, Li-ion insertion/extraction reactions, commonly occur around the nanometer-scale interfaces between the electrodes and solid electrolytes. Thus, transmission electron microscopy (TEM) is an appropriate technique to directly observe such reactions, providing important information for understanding the fundamental solid-state electrochemistry and improving battery performance. In this review, we introduce two types of TEM techniques for operando observations of battery reactions, spatially resolved electron energy-loss spectroscopy in a TEM mode for direct detection of the Li concentration profiles and electron holography for observing the electric potential changes due to Li-ion insertion/extraction reactions. We visually show how Li-ion insertion/extractions affect the crystal structures, electronic structures, and local electric potential during the charge-discharge processes in these batteries. © The Author 2016. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with the traditional pixel-based methods, the image descripting method based on super-pixel has advantages of less calculation, being easy to perceive, and has been widely used in image processing and computer vision applications. Pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its context spatial structural information. In this paper, a new color super-pixel extraction algorithm based on multi-channel pulse coupled neural network (MPCNN) was proposed. The algorithm adopted the block dividing idea of SLIC algorithm, and the image was divided into blocks with same size first. Then, for each image block, the adjacent pixels of each seed with similar color were classified as a group, named a super-pixel. At last, post-processing was adopted for those pixels or pixel blocks which had not been grouped. Experiments show that the proposed method can adjust the number of superpixel and segmentation precision by setting parameters, and has good potential for super-pixel extraction.
Wang, Ya-Qi; Wu, Zhen-Feng; Ke, Gang; Yang, Ming
2014-12-31
An effective vacuum assisted extraction (VAE) technique was proposed for the first time and applied to extract bioactive components from Andrographis paniculata. The process was carefully optimized by response surface methodology (RSM). Under the optimized experimental conditions, the best results were obtained using a boiling temperature of 65 °C, 50% ethanol concentration, 16 min of extraction time, one extraction cycles and a 12:1 liquid-solid ratio. Compared with conventional ultrasonic assisted extraction and heat reflux extraction, the VAE technique gave shorter extraction times and remarkable higher extraction efficiency, which indicated that a certain degree of vacuum gave the solvent a better penetration of the solvent into the pores and between the matrix particles, and enhanced the process of mass transfer. The present results demonstrated that VAE is an efficient, simple and fast method for extracting bioactive components from A. paniculata, which shows great potential for becoming an alternative technique for industrial scale-up applications.
Shortle, E; O'Grady, M N; Gilroy, D; Furey, A; Quinn, N; Kerry, J P
2014-12-01
Six extracts were prepared from hawthorn (Crataegus monogyna) leaves and flowers (HLF) and berries (HB) using solid-liquid [traditional (T) (HLFT, HBT), sonicated (S) (HLFS, HBS)] and supercritical fluid (C) extraction (HLFC, HBC) techniques. The antioxidant activities of HLF and HB extracts were characterised using in vitro antioxidant assays (TPC, DPPH, FRAP) and in 25% bovine muscle (longissimus lumborum) homogenates (lipid oxidation (TBARS), oxymyoglobin (% of total myoglobin)) after 24h storage at 4°C. Hawthorn extracts exhibited varying degrees of antioxidant potency. In vitro and muscle homogenate (TBARS) antioxidant activity followed the order: HLFS>HLFT and HBT>HBS. In supercritical fluid extracts, HLFC>HBC (in vitro antioxidant activity) and HLFC≈HBC (TBARS). All extracts (except HBS) reduced oxymyoglobin oxidation. The HLFS extract had the highest antioxidant activity in all test systems. Supercritical fluid extraction (SFE) exhibited potential as a technique for the manufacture of functional ingredients (antioxidants) from hawthorn for use in muscle foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
Herzi, Nejia; Bouajila, Jalloul; Camy, Séverine; Romdhane, Mehrez; Condoret, Jean-Stéphane
2013-12-15
In the present study, three techniques of extraction: hydrodistillation (HD), solvent extraction (conventional 'Soxhlet' technique) and an innovative technique, i.e., the supercritical fluid extraction (SFE), were applied to ground Tetraclinis articulata leaves and compared for extraction duration, extraction yield, and chemical composition of the extracts as well as their antioxidant activities. The extracts were analyzed by GC-FID and GC-MS. The antioxidant activity was measured using two methods: ABTS(•+) and DPPH(•). The yield obtained using HD, SFE, hexane and ethanol Soxhlet extractions were found to be 0.6, 1.6, 40.4 and 21.2-27.4 g/kg respectively. An original result of this study is that the best antioxidant activity was obtained with an SFE extract (41 mg/L). The SFE method offers some noteworthy advantages over traditional alternatives, such as shorter extraction times, low environmental impact, and a clean, non-thermally-degraded final product. Also, a good correlation between the phenolic contents and the antioxidant activity was observed with extracts obtained by SFE at 9 MPa. Copyright © 2013 Elsevier Ltd. All rights reserved.
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdf.s such as Gaussian, Exponential, etc. However, constraining a histogram to fit to a function with a fixed shape will increase estimation error, and all the information extracted from such pdf will continue to contain this error. In such techniques, it is highly likely to observe some artificial characteristics in the estimated pdf which is not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit to the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities, and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurement from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by by TUBITAK 115E915 and Joint TUBITAK 114E092 and AS CR14/001 projects.
Quantitative Hyperspectral Reflectance Imaging
Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.
2008-01-01
Hyperspectral imaging is a non-destructive optical analysis technique that can for instance be used to obtain information from cultural heritage objects unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). By using a wavelength tunable narrow-bandwidth light-source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms. PMID:27873831
Li, Yan; Yan, Xiu-ping
2015-09-01
Trace metals may be adopted by biological systems to assist in the syntheses and metabolic functions of genes (DNA and RNA) and proteins in the environment. These metals may be beneficial or may pose a risk to humans and other life forms. Novel hybrid techniques are required for studies on the interaction between different metal species and biomolecules, which is significant for biology, biochemistry, nutrition, agriculture, medicine, pharmacy, and environmental science. In recent years, our group dwells on new hyphenated techniques based on capillary electrophoresis (CE), electrothermal atomic absorption spectrometry (ETAAS), and inductively coupled plasma mass spectroscopy (ICP-MS), and their application for different metal species interaction with biomolecules such as DNA, HSA, and GSH. The CE-ETAAS assay and CE-ICP-MS assay allow sensitively probing the level of biomolecules such as DNA damage by different metal species and extracting the kinetic and thermodynamic information on the interactions of different metal species with biomolecules, provides direct evidences for the formation of different metal species--biomolecule adducts. In addition, the consequent structural information were extracted from circular dichroism (CD) and X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, and Fourier transform infrared (FTIR) spectroscopy. The present works represent the most complete and extensive study to date on the interactions between different metal species with biomolecules, and also provide new evidences for and insights into the interactions of different metal species with biomolecules for further understanding of the toxicological effects of metal species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandes, P. A.; Lynch, K. A.
Here, we define the observational parameter regime necessary for observing low-altitude ionospheric origins of high-latitude ion upflow/outflow. We present measurement challenges and identify a new analysis technique which mitigates these impediments. To probe the initiation of auroral ion upflow, it is necessary to examine the thermal ion population at 200-350 km, where typical thermal energies are tenths of eV. Interpretation of the thermal ion distribution function measurement requires removal of payload sheath and ram effects. We use a 3-D Maxwellian model to quantify how observed ionospheric parameters such as density, temperature, and flows affect in situ measurements of the thermalmore » ion distribution function. We define the viable acceptance window of a typical top-hat electrostatic analyzer in this regime and show that the instrument's energy resolution prohibits it from directly observing the shape of the particle spectra. To extract detailed information about measured particle population, we define two intermediate parameters from the measured distribution function, then use a Maxwellian model to replicate possible measured parameters for comparison to the data. Liouville's theorem and the thin-sheath approximation allow us to couple the measured and modeled intermediate parameters such that measurements inside the sheath provide information about plasma outside the sheath. We apply this technique to sounding rocket data to show that careful windowing of the data and Maxwellian models allows for extraction of the best choice of geophysical parameters. More widespread use of this analysis technique will help our community expand its observational database of the seed regions of ionospheric outflows.« less
Ambient Mass Spectrometry Imaging Using Direct Liquid Extraction Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laskin, Julia; Lanekoff, Ingela
2015-11-13
Mass spectrometry imaging (MSI) is a powerful analytical technique that enables label-free spatial localization and identification of molecules in complex samples.1-4 MSI applications range from forensics5 to clinical research6 and from understanding microbial communication7-8 to imaging biomolecules in tissues.1, 9-10 Recently, MSI protocols have been reviewed.11 Ambient ionization techniques enable direct analysis of complex samples under atmospheric pressure without special sample pretreatment.3, 12-16 In fact, in ambient ionization mass spectrometry, sample processing (e.g., extraction, dilution, preconcentration, or desorption) occurs during the analysis.17 This substantially speeds up analysis and eliminates any possible effects of sample preparation on the localization of moleculesmore » in the sample.3, 8, 12-14, 18-20 Venter and co-workers have classified ambient ionization techniques into three major categories based on the sample processing steps involved: 1) liquid extraction techniques, in which analyte molecules are removed from the sample and extracted into a solvent prior to ionization; 2) desorption techniques capable of generating free ions directly from substrates; and 3) desorption techniques that produce larger particles subsequently captured by an electrospray plume and ionized.17 This review focuses on localized analysis and ambient imaging of complex samples using a subset of ambient ionization methods broadly defined as “liquid extraction techniques” based on the classification introduced by Venter and co-workers.17 Specifically, we include techniques where analyte molecules are desorbed from solid or liquid samples using charged droplet bombardment, liquid extraction, physisorption, chemisorption, mechanical force, laser ablation, or laser capture microdissection. Analyte extraction is followed by soft ionization that generates ions corresponding to intact species. Some of the key advantages of liquid extraction techniques include the ease of operation, ability to analyze samples in their native environments, speed of analysis, and ability to tune the extraction solvent composition to a problem at hand. For example, solvent composition may be optimized for efficient extraction of different classes of analytes from the sample or for quantification or online derivatization through reactive analysis. In this review, we will: 1) introduce individual liquid extraction techniques capable of localized analysis and imaging, 2) describe approaches for quantitative MSI experiments free of matrix effects, 3) discuss advantages of reactive analysis for MSI experiments, and 4) highlight selected applications (published between 2012 and 2015) that focus on imaging and spatial profiling of molecules in complex biological and environmental samples.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
An, Ho-Myoung; Kim, Hee-Dong; Kim, Tae Geun, E-mail: tgkim1@korea.ac.kr
Graphical abstract: The degradation tendency extracted by CP technique was almost the same in both the bulk-type and TFT-type cells. - Highlights: • D{sub it} is directly investigated from bulk-type and TFT-type CTF memory. • Charge pumping technique was employed to analyze the D{sub it} information. • To apply the CP technique to monitor the reliability of the 3D NAND flash. - Abstract: The energy distribution and density of interface traps (D{sub it}) are directly investigated from bulk-type and thin-film transistor (TFT)-type charge trap flash memory cells with tunnel oxide degradation, under program/erase (P/E) cycling using a charge pumping (CP)more » technique, in view of application in a 3-demension stackable NAND flash memory cell. After P/E cycling in bulk-type devices, the interface trap density gradually increased from 1.55 × 10{sup 12} cm{sup −2} eV{sup −1} to 3.66 × 10{sup 13} cm{sup −2} eV{sup −1} due to tunnel oxide damage, which was consistent with the subthreshold swing and transconductance degradation after P/E cycling. Its distribution moved toward shallow energy levels with increasing cycling numbers, which coincided with the decay rate degradation with short-term retention time. The tendency extracted with the CP technique for D{sub it} of the TFT-type cells was similar to those of bulk-type cells.« less
Ghahramanloo, Kourosh Hasanzadeh; Kamalidehghan, Behnam; Akbari Javar, Hamid; Teguh Widodo, Riyanto; Majidzadeh, Keivan; Noordin, Mohamed Ibrahim
2017-01-01
The objective of this study was to compare the oil extraction yield and essential oil composition of Indian and Iranian Nigella sativa L. extracted by using Supercritical Fluid Extraction (SFE) and solvent extraction methods. In this study, a gas chromatography equipped with a mass spectrophotometer detector was employed for qualitative analysis of the essential oil composition of Indian and Iranian N. sativa L. The results indicated that the main fatty acid composition identified in the essential oils extracted by using SFE and solvent extraction were linoleic acid (22.4%–61.85%) and oleic acid (1.64%–18.97%). Thymoquinone (0.72%–21.03%) was found to be the major volatile compound in the extracted N. sativa oil. It was observed that the oil extraction efficiency obtained from SFE was significantly (P<0.05) higher than that achieved by the solvent extraction technique. The present study showed that SFE can be used as a more efficient technique for extraction of N. Sativa L. essential oil, which is composed of higher linoleic acid and thymoquinone contents compared to the essential oil obtained by the solvent extraction technique. PMID:28814830
Ghahramanloo, Kourosh Hasanzadeh; Kamalidehghan, Behnam; Akbari Javar, Hamid; Teguh Widodo, Riyanto; Majidzadeh, Keivan; Noordin, Mohamed Ibrahim
2017-01-01
The objective of this study was to compare the oil extraction yield and essential oil composition of Indian and Iranian Nigella sativa L. extracted by using Supercritical Fluid Extraction (SFE) and solvent extraction methods. In this study, a gas chromatography equipped with a mass spectrophotometer detector was employed for qualitative analysis of the essential oil composition of Indian and Iranian N. sativa L. The results indicated that the main fatty acid composition identified in the essential oils extracted by using SFE and solvent extraction were linoleic acid (22.4%-61.85%) and oleic acid (1.64%-18.97%). Thymoquinone (0.72%-21.03%) was found to be the major volatile compound in the extracted N. sativa oil. It was observed that the oil extraction efficiency obtained from SFE was significantly ( P <0.05) higher than that achieved by the solvent extraction technique. The present study showed that SFE can be used as a more efficient technique for extraction of N. Sativa L. essential oil, which is composed of higher linoleic acid and thymoquinone contents compared to the essential oil obtained by the solvent extraction technique.
Analysis of intracranial pressure: past, present, and future.
Di Ieva, Antonio; Schmitz, Erika M; Cusimano, Michael D
2013-12-01
The monitoring of intracranial pressure (ICP) is an important tool in medicine for its ability to portray the brain's compliance status. The bedside monitor displays the ICP waveform and intermittent mean values to guide physicians in the management of patients, particularly those having sustained a traumatic brain injury. Researchers in the fields of engineering and physics have investigated various mathematical analysis techniques applicable to the waveform in order to extract additional diagnostic and prognostic information, although they largely remain limited to research applications. The purpose of this review is to present the current techniques used to monitor and interpret ICP and explore the potential of using advanced mathematical techniques to provide information about system perturbations from states of homeostasis. We discuss the limits of each proposed technique and we propose that nonlinear analysis could be a reliable approach to describe ICP signals over time, with the fractal dimension as a potential predictive clinically meaningful biomarker. Our goal is to stimulate translational research that can move modern analysis of ICP using these techniques into widespread practical use, and to investigate to the clinical utility of a tool capable of simplifying multiple variables obtained from various sensors.
Assessing clutter reduction in parallel coordinates using image processing techniques
NASA Astrophysics Data System (ADS)
Alhamaydh, Heba; Alzoubi, Hussein; Almasaeid, Hisham
2018-01-01
Information visualization has appeared as an important research field for multidimensional data and correlation analysis in recent years. Parallel coordinates (PCs) are one of the popular techniques to visual high-dimensional data. A problem with the PCs technique is that it suffers from crowding, a clutter which hides important data and obfuscates the information. Earlier research has been conducted to reduce clutter without loss in data content. We introduce the use of image processing techniques as an approach for assessing the performance of clutter reduction techniques in PC. We use histogram analysis as our first measure, where the mean feature of the color histograms of the possible alternative orderings of coordinates for the PC images is calculated and compared. The second measure is the extracted contrast feature from the texture of PC images based on gray-level co-occurrence matrices. The results show that the best PC image is the one that has the minimal mean value of the color histogram feature and the maximal contrast value of the texture feature. In addition to its simplicity, the proposed assessment method has the advantage of objectively assessing alternative ordering of PC visualization.
NASA Astrophysics Data System (ADS)
Martinis, Sandro; Clandillon, Stephen; Twele, André; Huber, Claire; Plank, Simon; Maxant, Jérôme; Cao, Wenxi; Caspard, Mathilde; May, Stéphane
2016-04-01
Optical and radar satellite remote sensing have proven to provide essential crisis information in case of natural disasters, humanitarian relief activities and civil security issues in a growing number of cases through mechanisms such as the Copernicus Emergency Management Service (EMS) of the European Commission or the International Charter 'Space and Major Disasters'. The aforementioned programs and initiatives make use of satellite-based rapid mapping services aimed at delivering reliable and accurate crisis information after natural hazards. Although these services are increasingly operational, they need to be continuously updated and improved through research and development (R&D) activities. The principal objective of ASAPTERRA (Advancing SAR and Optical Methods for Rapid Mapping), the ESA-funded R&D project being described here, is to improve, automate and, hence, speed-up geo-information extraction procedures in the context of natural hazards response. This is performed through the development, implementation, testing and validation of novel image processing methods using optical and Synthetic Aperture Radar (SAR) data. The methods are mainly developed based on data of the German radar satellites TerraSAR-X and TanDEM-X, the French satellite missions Pléiades-1A/1B as well as the ESA missions Sentinel-1/2 with the aim to better characterize the potential and limitations of these sensors and their synergy. The resulting algorithms and techniques are evaluated in real case applications during rapid mapping activities. The project is focussed on three types of natural hazards: floods, landslides and fires. Within this presentation an overview of the main methodological developments in each topic is given and demonstrated in selected test areas. The following developments are presented in the context of flood mapping: a fully automated Sentinel-1 based processing chain for detecting open flood surfaces, a method for the improved detection of flooded vegetation in Sentinel-1data using Entropy/Alpha decomposition, unsupervised Wishart Classification, and object-based post-classification as well as semi-automatic approaches for extracting inundated areas and flood traces in rural and urban areas from VHR and HR optical imagery using machine learning techniques. Methodological developments related to fires are the implementation of fast and robust methods for mapping burnt scars using change detection procedures using SAR (Sentinel-1, TerraSAR-X) and HR optical (e.g. SPOT, Sentinel-2) data as well as the extraction of 3D surface and volume change information from Pléiades stereo-pairs. In the context of landslides, fast and transferable change detection procedures based on SAR (TerraSAR-X) and optical (SPOT) data as well methods for extracting the extent of landslides only based on polarimetric VHR SAR (TerraSAR-X) data are presented.
High Quality Topic Extraction from Business News Explains Abnormal Financial Market Volatility
Hisano, Ryohei; Sornette, Didier; Mizuno, Takayuki; Ohnishi, Takaaki; Watanabe, Tsutomu
2013-01-01
Understanding the mutual relationships between information flows and social activity in society today is one of the cornerstones of the social sciences. In financial economics, the key issue in this regard is understanding and quantifying how news of all possible types (geopolitical, environmental, social, financial, economic, etc.) affects trading and the pricing of firms in organized stock markets. In this article, we seek to address this issue by performing an analysis of more than 24 million news records provided by Thompson Reuters and of their relationship with trading activity for 206 major stocks in the S&P US stock index. We show that the whole landscape of news that affects stock price movements can be automatically summarized via simple regularized regressions between trading activity and news information pieces decomposed, with the help of simple topic modeling techniques, into their “thematic” features. Using these methods, we are able to estimate and quantify the impacts of news on trading. We introduce network-based visualization techniques to represent the whole landscape of news information associated with a basket of stocks. The examination of the words that are representative of the topic distributions confirms that our method is able to extract the significant pieces of information influencing the stock market. Our results show that one of the most puzzling stylized facts in financial economies, namely that at certain times trading volumes appear to be “abnormally large,” can be partially explained by the flow of news. In this sense, our results prove that there is no “excess trading,” when restricting to times when news is genuinely novel and provides relevant financial information. PMID:23762258
v3NLP Framework: Tools to Build Applications for Extracting Concepts from Clinical Text
Divita, Guy; Carter, Marjorie E.; Tran, Le-Thuy; Redd, Doug; Zeng, Qing T; Duvall, Scott; Samore, Matthew H.; Gundlapalli, Adi V.
2016-01-01
Introduction: Substantial amounts of clinically significant information are contained only within the narrative of the clinical notes in electronic medical records. The v3NLP Framework is a set of “best-of-breed” functionalities developed to transform this information into structured data for use in quality improvement, research, population health surveillance, and decision support. Background: MetaMap, cTAKES and similar well-known natural language processing (NLP) tools do not have sufficient scalability out of the box. The v3NLP Framework evolved out of the necessity to scale-up these tools up and provide a framework to customize and tune techniques that fit a variety of tasks, including document classification, tuned concept extraction for specific conditions, patient classification, and information retrieval. Innovation: Beyond scalability, several v3NLP Framework-developed projects have been efficacy tested and benchmarked. While v3NLP Framework includes annotators, pipelines and applications, its functionalities enable developers to create novel annotators and to place annotators into pipelines and scaled applications. Discussion: The v3NLP Framework has been successfully utilized in many projects including general concept extraction, risk factors for homelessness among veterans, and identification of mentions of the presence of an indwelling urinary catheter. Projects as diverse as predicting colonization with methicillin-resistant Staphylococcus aureus and extracting references to military sexual trauma are being built using v3NLP Framework components. Conclusion: The v3NLP Framework is a set of functionalities and components that provide Java developers with the ability to create novel annotators and to place those annotators into pipelines and applications to extract concepts from clinical text. There are scale-up and scale-out functionalities to process large numbers of records. PMID:27683667
Knowledge Discovery and Data Mining in Iran's Climatic Researches
NASA Astrophysics Data System (ADS)
Karimi, Mostafa
2013-04-01
Advances in measurement technology and data collection is the database gets larger. Large databases require powerful tools for analysis data. Iterative process of acquiring knowledge from information obtained from data processing is done in various forms in all scientific fields. However, when the data volume large, and many of the problems the Traditional methods cannot respond. in the recent years, use of databases in various scientific fields, especially atmospheric databases in climatology expanded. in addition, increases in the amount of data generated by the climate models is a challenge for analysis of it for extraction of hidden pattern and knowledge. The approach to this problem has been made in recent years uses the process of knowledge discovery and data mining techniques with the use of the concepts of machine learning, artificial intelligence and expert (professional) systems is overall performance. Data manning is analytically process for manning in massive volume data. The ultimate goal of data mining is access to information and finally knowledge. climatology is a part of science that uses variety and massive volume data. Goal of the climate data manning is Achieve to information from variety and massive atmospheric and non-atmospheric data. in fact, Knowledge Discovery performs these activities in a logical and predetermined and almost automatic process. The goal of this research is study of uses knowledge Discovery and data mining technique in Iranian climate research. For Achieve This goal, study content (descriptive) analysis and classify base method and issue. The result shown that in climatic research of Iran most clustering, k-means and wards applied and in terms of issues precipitation and atmospheric circulation patterns most introduced. Although several studies in geography and climate issues with statistical techniques such as clustering and pattern extraction is done, Due to the nature of statistics and data mining, but cannot say for internal climate studies in data mining and knowledge discovery techniques are used. However, it is necessary to use the KDD Approach and DM techniques in the climatic studies, specific interpreter of climate modeling result.
NASA Astrophysics Data System (ADS)
Han, Jaemaro; Zhao, Xin; Lee, Jong Keun; Kim, Jae Young
2014-05-01
Arsenic compounds are considered carcinogen and easily enter drinking water supplies with their natural abundance. US Environmental Protection Agency is finalizing a regulation to reduce the public health risks from arsenic in drinking water by revising the current drinking water standard for arsenic from 50 ppb to 10 ppb in 2001 (USEPA, 2001). Therefore, soil remediation is also growing field to prevent contamination of groundwater as well as crop cultivation. Soil washing is adjusted as ex-situ soil remediation technique which reduces volume of the contaminated soil. The technique is composed of physical separation and chemical extraction to extract target metal contamination in the soil. Chemical extraction methods have been developed solubilizing contaminants containing reagents such as acids or chelating agents. And acid extraction is proven as the most commonly used technology to treat heavy metals in soil, sediment, and sludge (FRTR, 2007). Due to the unique physical and chemical properties, magnetic iron oxide have been used in diverse areas including information technology and biomedicine. Magnetic iron oxides also can be used as adsorbent to heavy metal enhancing removal efficiency of arsenic concentration. In this study, magnetite is used as the washing agent with acid extraction condition so that the injected oxide can be separated by magnetic field. Soil samples were collected from three separate areas in the Janghang smelter site and energy crops-grown soil to have synergy effect with phytoremediation. Each sample was air-dried and sieved (2mm). Soil washing condition was adjusted on pH in the range of 0-12 with hydrogen chloride and sodium hydroxide. After performing soil washing procedure, arsenic-extracted samples were analyzed for arsenic concentration by inductively coupled plasma optical emission spectrometer (ICP-OES). All the soils have exceeded worrisome level of soil contamination for region 1 (25mg/kg) so the soil remediation techniques are needed to be applied. The objective of this study is to investigate soil washing efficiency using magnetic iron oxide and derive the availability of the washing technique to the arsenic-contaminated field soils. Acknowledgement This study was supported by Korea Ministry of Environment as 'Knowledge-based environmental service (Waste to Energy) Human Resource Development Project'.
D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.
Layout Slam with Model Based Loop Closure for 3d Indoor Corridor Reconstruction
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Jung, J.; Shahbazi, M.; Kang, J.
2018-05-01
In this paper, we extend a recently proposed visual Simultaneous Localization and Mapping (SLAM) techniques, known as Layout SLAM, to make it robust against error accumulations, abrupt changes of camera orientation and miss-association of newly visited parts of the scene to the previously visited landmarks. To do so, we present a novel technique of loop closing based on layout model matching; i.e., both model information (topology and geometry of reconstructed models) and image information (photometric features) are used to address a loop-closure detection. The advantages of using the layout-related information in the proposed loop-closing technique are twofold. First, it imposes a metric constraint on the global map consistency and, thus, adjusts the mapping scale drifts. Second, it can reduce matching ambiguity in the context of indoor corridors, where the scene is homogenously textured and extracting sufficient amount of distinguishable point features is a challenging task. To test the impact of the proposed technique on the performance of Layout SLAM, we have performed the experiments on wide-angle videos captured by a handheld camera. This dataset was collected from the indoor corridors of a building at York University. The obtained results demonstrate that the proposed method successfully detects the instances of loops while producing very limited trajectory errors.
NASA Astrophysics Data System (ADS)
Imms, Ryan; Hu, Sijung; Azorin-Peris, Vicente; Trico, Michaël.; Summers, Ron
2014-03-01
Non-contact imaging photoplethysmography (PPG) is a recent development in the field of physiological data acquisition, currently undergoing a large amount of research to characterize and define the range of its capabilities. Contact-based PPG techniques have been broadly used in clinical scenarios for a number of years to obtain direct information about the degree of oxygen saturation for patients. With the advent of imaging techniques, there is strong potential to enable access to additional information such as multi-dimensional blood perfusion and saturation mapping. The further development of effective opto-physiological monitoring techniques is dependent upon novel modelling techniques coupled with improved sensor design and effective signal processing methodologies. The biometric signal and imaging processing platform (bSIPP) provides a comprehensive set of features for extraction and analysis of recorded iPPG data, enabling direct comparison with other biomedical diagnostic tools such as ECG and EEG. Additionally, utilizing information about the nature of tissue structure has enabled the generation of an engineering model describing the behaviour of light during its travel through the biological tissue. This enables the estimation of the relative oxygen saturation and blood perfusion in different layers of the tissue to be calculated, which has the potential to be a useful diagnostic tool.
The limit of the film extraction technique for annular two-phase flow in a small tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helm, D.E.; Lopez de Bertodano, M.; Beus, S.G.
1999-07-01
The limit of the liquid film extraction technique was identified in air-water and Freon-113 annular two-phase flow loops. The purpose of this research is to find the limit of the entrainment rate correlation obtained by Lopez de Bertodano et. al. (1998). The film extraction technique involves the suction of the liquid film through a porous tube and has been widely used to obtain annular flow entrainment and entrainment rate data. In these experiments there are two extraction probes. After the first extraction the entrained droplets in the gas core deposit on the tube wall. A new liquid film develops entirelymore » from liquid deposition and a second liquid film extraction is performed. While it is assumed that the entire liquid film is removed after the first extraction unit, this is not true for high liquid flow. At high liquid film flows the interfacial structure of the film becomes frothy. Then the entire liquid film cannot be removed at the first extraction unit, but continues on and is extracted at the second extraction unit. A simple model to characterize the limit of the extraction technique was obtained based on the hypothesis that the transition occurs due to a change in the wave structure. The resulting dimensionless correlation agrees with the data.« less
The limit of the film extraction technique for annular two-phase flow in a small tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helm, D.E.; Lopez de Bertodano, M.; Beus, S.G.
1999-07-01
The limit of the liquid film extraction technique was identified in air-water and Freon-113 annular two-phase flow loops. The purpose of this research is to find the limit of the entrainment rate correlation obtained by Lopez de Bertodano et al. (1998). The film extraction technique involves the suction of the liquid film through a porous tube and has been widely used to obtain annular flow entrainment and entrainment rate data. In the experiments there are two extraction probes. After the first extraction the entrained droplets in the gas core deposit on the tube wall. A new liquid film develops entirelymore » from liquid deposition and a second liquid film extraction is performed. While it is assumed that the entire liquid film is removed after the first extraction unit, this is not true for high liquid flow. At high liquid film flows the interfacial structure of the film becomes frothy. Then the entire liquid film cannot be removed at the first extraction unit, but continues on and is extracted at the second extraction unit. A simple model to characterize the limit of the extraction technique was obtained based on the hypothesis that the transition occurs due to a change in the wave structure. The resulting dimensionless correlation agrees with the data.« less
3D Feature Extraction for Unstructured Grids
NASA Technical Reports Server (NTRS)
Silver, Deborah
1996-01-01
Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.
Deep learning with convolutional neural network in radiology.
Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu
2018-04-01
Deep learning with a convolutional neural network (CNN) is gaining attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required. Important features can be automatically learned. Thanks to the development of hardware and software in addition to techniques regarding deep learning, application of this technique to radiological images for predicting clinically useful information, such as the detection and the evaluation of lesions, etc., are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual course (collecting data, implementing CNNs, and training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also described some advanced topics of deep learning, results of recent clinical studies, and the future directions of clinical application of deep learning techniques.
New techniques for test development for tactical auto-pilots using microprocessors
NASA Astrophysics Data System (ADS)
Shemeta, E. H.
1980-07-01
This paper reports on a demonstration of the application of the method to generate system level tests for a typical tactical missile autopilot. The test algorithms are based on the autopilot control law. When loaded on the tester with appropriate control information, the complete autopilot is tested to establish if the specified control law requirements are met. Thus, the test procedure not only checks to see if the hardware is functional, but also checks the operational software. The technique also uses a 'learning' mode to allow minor timing or functional deviations from the expected responses to be incorporated in the test procedures. A potential application of this test development technique is the extraction of production test data for the various subassemblies. The technique will 'learn' the input-output patterns forming the basis for developement and production tests. If successful, these new techniques should allow the test development process to keep pace with semiconductor progress.
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e:g: thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
The role of abusive states of being in interrogation.
Putnam, Frank W
2013-01-01
Interrogation, the questioning of persons detained by police, military, or intelligence organizations, is designed to extract information that a subject may resist disclosing. Interrogation techniques are frequently predicated on inducing mental states of despair, dread, dependency, and debility that weaken an individual's resistance. Descriptions of techniques from 2 Central Intelligence Agency training manuals are illustrated by examples from interviews of and writings by Murat Kurnaz, who was held at Guantánamo Bay Detention Camp for 5 years. Interrogation techniques are designed to create a destabilizing sense of shock; undermine an individual's grasp on reality; and provoke internal psychological division, self-conflict, and confusion. The long-term effects of interrogation often include posttraumatic stress disorder as well as states of anxiety, depression, and depersonalization.
[Composition of chicken and quail eggs].
Closa, S J; Marchesich, C; Cabrera, M; Morales, J C
1999-06-01
Qualified food composition data on lipids composition are needed to evaluate intakes as a risk factor in the development of heart disease. Proximal composition, cholesterol and fatty acid content of chicken and quail eggs, usually consumed or traded, were analysed. Proximal composition were determined using AOAC (1984) specific techniques; lipids were extracted by a Folch's modified technique and cholesterol and fatty acids were determined by gas chromatography. Results corroborate the stability of eggs composition. Cholesterol content of quail eggs is similar to chicken eggs, but it is almost the half content of data registered in Handbook 8. Differences may be attributed to the analytical methodology used to obtain them. This study provides data obtained with up-date analytical techniques and accessory information useful for food composition tables.
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks are publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
Nagarajan, Mahesh B.; Coan, Paola; Huber, Markus B.; Diemoz, Paul C.; Wismüller, Axel
2015-01-01
Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns. PMID:25710875
Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection
NASA Astrophysics Data System (ADS)
Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav
2014-03-01
Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among different available methods, feature-based methods are very dominant. While many feature extraction techniques have been employed, it is still not quite clear which of feature extraction methods should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of using different wavelet transform features extraction methods in brain MRI abnormality detection. Applying T1-weighted brain image, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three various classifiers as Support Vector Machine, K Nearest Neighborhood, and Sparse Representation-Based Classifier are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM, result in the highest classification accuracy, proving of capability of wavelet transform features to be informative in this application.
Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews
Lynn, Khin Thidar
2013-01-01
Due to the development of e-commerce and web technology, most of online Merchant sites are able to write comments about purchasing products for customer. Customer reviews expressed opinion about products or services which are collectively referred to as customer feedback data. Opinion extraction about products from customer reviews is becoming an interesting area of research and it is motivated to develop an automatic opinion mining application for users. Therefore, efficient method and techniques are needed to extract opinions from reviews. In this paper, we proposed a novel idea to find opinion words or phrases for each feature from customer reviews in an efficient way. Our focus in this paper is to get the patterns of opinion words/phrases about the feature of product from the review text through adjective, adverb, verb, and noun. The extracted features and opinions are useful for generating a meaningful summary that can provide significant informative resource to help the user as well as merchants to track the most suitable choice of product. PMID:24459430
LexValueSets: An Approach for Context-Driven Value Sets Extraction
Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.
2008-01-01
The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955
Hyperpolarized xenon NMR and MRI signal amplification by gas extraction
Zhou, Xin; Graziani, Dominic; Pines, Alexander
2009-01-01
A method is reported for enhancing the sensitivity of NMR of dissolved xenon by detecting the signal after extraction to the gas phase. We demonstrate hyperpolarized xenon signal amplification by gas extraction (Hyper-SAGE) in both NMR spectra and magnetic resonance images with time-of-flight information. Hyper-SAGE takes advantage of a change in physical phase to increase the density of polarized gas in the detection coil. At equilibrium, the concentration of gas-phase xenon is ≈10 times higher than that of the dissolved-phase gas. After extraction the xenon density can be further increased by several orders of magnitude by compression and/or liquefaction. Additionally, being a remote detection technique, the Hyper-SAGE effect is further enhanced in situations where the sample of interest would occupy only a small proportion of the traditional NMR receiver. Coupled with targeted xenon biosensors, Hyper-SAGE offers another path to highly sensitive molecular imaging of specific cell markers by detection of exhaled xenon gas. PMID:19805177
Extracting product features and opinion words using pattern knowledge in customer reviews.
Htay, Su Su; Lynn, Khin Thidar
2013-01-01
Due to the development of e-commerce and web technology, most of online Merchant sites are able to write comments about purchasing products for customer. Customer reviews expressed opinion about products or services which are collectively referred to as customer feedback data. Opinion extraction about products from customer reviews is becoming an interesting area of research and it is motivated to develop an automatic opinion mining application for users. Therefore, efficient method and techniques are needed to extract opinions from reviews. In this paper, we proposed a novel idea to find opinion words or phrases for each feature from customer reviews in an efficient way. Our focus in this paper is to get the patterns of opinion words/phrases about the feature of product from the review text through adjective, adverb, verb, and noun. The extracted features and opinions are useful for generating a meaningful summary that can provide significant informative resource to help the user as well as merchants to track the most suitable choice of product.
NASA Technical Reports Server (NTRS)
Swayze, Gregg A.; Clark, Roger N.
1995-01-01
The rapid development of sophisticated imaging spectrometers and resulting flood of imaging spectrometry data has prompted a rapid parallel development of spectral-information extraction technology. Even though these extraction techniques have evolved along different lines (band-shape fitting, endmember unmixing, near-infrared analysis, neural-network fitting, and expert systems to name a few), all are limited by the spectrometer's signal to noise (S/N) and spectral resolution in producing useful information. This study grew from a need to quantitatively determine what effects these parameters have on our ability to differentiate between mineral absorption features using a band-shape fitting algorithm. We chose to evaluate the AVIRIS, HYDICE, MIVIS, GERIS, VIMS, NIMS, and ASTER instruments because they collect data over wide S/N and spectral-resolution ranges. The study evaluates the performance of the Tricorder algorithm, in differentiating between mineral spectra in the 0.4-2.5 micrometer spectral region. The strength of the Tricorder algorithm is in its ability to produce an easily understood comparison of band shape that can concentrate on small relevant portions of the spectra, giving it an advantage over most unmixing schemes, and in that it need not spend large amounts of time reoptimizing each time a new mineral component is added to its reference library, as is the case with neural-network schemes. We believe the flexibility of the Tricorder algorithm is unparalleled among spectral-extraction techniques and that the results from this study, although dealing with minerals, will have direct applications to spectral identification in other disciplines.
Mapping Urban Ecosystem Services Using High Resolution Aerial Photography
NASA Astrophysics Data System (ADS)
Pilant, A. N.; Neale, A.; Wilhelm, D.
2010-12-01
Ecosystem services (ES) are the many life-sustaining benefits we receive from nature: e.g., clean air and water, food and fiber, cultural-aesthetic-recreational benefits, pollination and flood control. The ES concept is emerging as a means of integrating complex environmental and economic information to support informed environmental decision making. The US EPA is developing a web-based National Atlas of Ecosystem Services, with a component for urban ecosystems. Currently, the only wall-to-wall, national scale land cover data suitable for this analysis is the National Land Cover Data (NLCD) at 30 m spatial resolution with 5 and 10 year updates. However, aerial photography is acquired at higher spatial resolution (0.5-3 m) and more frequently (1-5 years, typically) for most urban areas. Land cover was mapped in Raleigh, NC using freely available USDA National Agricultural Imagery Program (NAIP) with 1 m ground sample distance to test the suitability of aerial photography for urban ES analysis. Automated feature extraction techniques were used to extract five land cover classes, and an accuracy assessment was performed using standard techniques. Results will be presented that demonstrate applications to mapping ES in urban environments: greenways, corridors, fragmentation, habitat, impervious surfaces, dark and light pavement (urban heat island). Automated feature extraction results mapped over NAIP color aerial photograph. At this scale, we can look at land cover and related ecosystem services at the 2-10 m scale. Small features such as individual trees and sidewalks are visible and mappable. Classified aerial photo of Downtown Raleigh NC Red: impervious surface Dark Green: trees Light Green: grass Tan: soil
Liu, X; Abd El-Aty, A M; Shim, J-H
2011-10-01
Nigella sativa L. (black cumin), commonly known as black seed, is a member of the Ranunculaceae family. This seed is used as a natural remedy in many Middle Eastern and Far Eastern countries. Extracts prepared from N. sativa have, for centuries, been used for medical purposes. Thus far, the organic compounds in N. sativa, including alkaloids, steroids, carbohydrates, flavonoids, fatty acids, etc. have been fairly well characterized. Herein, we summarize some new extraction techniques, including microwave assisted extraction (MAE) and supercritical extraction techniques (SFE), in addition to the classical method of hydrodistillation (HD), which have been employed for isolation and various analytical techniques used for the identification of secondary metabolites in black seed. We believe that some compounds contained in N. sativa remain to be identified, and that high-throughput screening could help to identify new compounds. A study addressing environmentally-friendly techniques that have minimal or no environmental effects is currently underway in our laboratory.
NASA Astrophysics Data System (ADS)
Yuan, V. W.
2002-12-01
In previous attempts to determine the internal temperature in systems subjected to dynamic loading, experimenters have usually relied on surface-based optical techniques that are often hampered by insufficient information regarding the emissivity of the surfaces under study. Neutron Resonance Spectroscopy (NRS) is a technique that uses Doppler-broadened neutron resonances to measure internal temperatures in dynamically-loaded samples. NRS has developed its own target-moderator assembly to provide single pulses with an order of magnitude higher brightness than the Lujan production target. The resonance line shapes from which temperature information is extracted are also influenced by non-temperature-dependent broadening from the moderator and detector phosphorescence. Dynamic NRS experiments have been performed to measure the temperature in a silver sheet jet and behind the passage of a shock wave in molybdenum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farfour, Mohammed; Yoon, Wang Jung; Yoon-Geun
Defining and understanding hydrocarbon expressions in seismic expression is main concern of geoscientists in oil and gas exploration and production. Over the last decades several mathematical approaches have been developed in this regard. Most of approaches have addressed information in amplitude of seismic data. Recently, more attention has been drawn towards frequency related information in order to extract frequency behaviors of hydrocarbons bearing sediments. Spectrally decomposing seismic data into individual frequencies found to be an excellent tool for investigating geological formations and their pore fluids. To accomplish this, several mathematical approaches have been invoked. Continuous wavelet transform and Short Timemore » Window Fourier transform are widely used techniques for this purpose. This paper gives an overview of some widely used mathematical technique in hydrocarbon reservoir detection and mapping. This is followed by an application on real data from Boonsville field.« less
Clipping the cosmos: the bias and bispectrum of large scale structure.
Simpson, Fergus; James, J Berian; Heavens, Alan F; Heymans, Catherine
2011-12-30
A large fraction of the information collected by cosmological surveys is simply discarded to avoid length scales which are difficult to model theoretically. We introduce a new technique which enables the extraction of useful information from the bispectrum of galaxies well beyond the conventional limits of perturbation theory. Our results strongly suggest that this method increases the range of scales where the relation between the bispectrum and power spectrum in tree-level perturbation theory may be applied, from k(max) ∼ 0.1 to ∼0.7 hMpc(-1). This leads to correspondingly large improvements in the determination of galaxy bias. Since the clipped matter power spectrum closely follows the linear power spectrum, there is the potential to use this technique to probe the growth rate of linear perturbations and confront theories of modified gravity with observation.
Ratiu, Ileana-Andreea; Al-Suod, Hossam; Ligor, Magdalena; Ligor, Tomasz; Railean-Plugaru, Viorica; Buszewski, Bogusław
2018-03-15
Cyclitols are phytochemicals naturally occurring in plant material, which attracted an increasing interest due to multiple medicinal attributes, among which the most important are the antidiabetic, antioxidant, and anticancer properties. Due to their valuable properties, sugars are used in the food industry as sweeteners, preservatives, texture modifiers, fermentation substrates, and flavoring and coloring agents. In this study, we report for the first time the quantitative analysis of sugars and cyclitols isolated from Solidago virgaurea L., which was used for the selection of the optimal solvent and extraction technique that can provide the best possible yield. Moreover, the quantities of sugars and cyclitols extracted from two other species, Solidago canadensis and Solidago gigantea, were investigated using the best extraction method and the most appropriate solvent. Comparative analysis of natural plant extracts obtained using five different techniques-maceration, Soxhlet extraction, pressurized liquid extraction, ultrasound-assisted extraction, and supercritical fluid extraction-was performed in order to decide the most suitable, efficient, and economically convenient extraction method. Three different solvents were used. Analysis of samples has been performed by solid-phase extraction for purification and pre-concentration, followed by derivation and GC-MS analysis. Highest efficiency for the total amount of obtained compounds has been reached by PLE, when water was used as a solvent. d-pinitol amount was almost similar for every solvent and for all the extraction techniques involved. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fujimori, Kiyoshi; Lee, Hans; Sloey, Christopher; Ricci, Margaret S; Wen, Zai-Qing; Phillips, Joseph; Nashed-Samuel, Yasser
2016-01-01
Certain types of glass vials used as primary containers for liquid formulations of biopharmaceutical drug products have been observed with delamination that produced small glass like flakes termed lamellae under certain conditions during storage. The cause of this delamination is in part related to the glass surface defects, which renders the vials susceptible to flaking, and lamellae are formed during the high-temperature melting and annealing used for vial fabrication and shaping. The current European Pharmacopoeia method to assess glass vial quality utilizes acid titration of vial extract pools to determine hydrolytic resistance or alkalinity. Four alternative techniques with improved throughput, convenience, and/or comprehension were examined by subjecting seven lots of vials to analysis by all techniques. The first three new techniques of conductivity, flame photometry, and inductively coupled plasma mass spectrometry measured the same sample pools as acid titration. All three showed good correlation with alkalinity: conductivity (R(2) = 0.9951), flame photometry sodium (R(2) = 0.9895), and several elements by inductively coupled plasma mass spectrometry [(sodium (R(2) = 0.9869), boron (R(2) = 0.9796), silicon (R(2) = 0.9426), total (R(2) = 0.9639)]. The fourth technique processed the vials under conditions that promote delamination, termed accelerated lamellae formation, and then inspected those vials visually for lamellae. The visual inspection results without the lot with different processing condition correlated well with alkalinity (R(2) = 0.9474). Due to vial processing differences affecting alkalinity measurements and delamination propensity differently, the ratio of silicon and sodium measurements from inductively coupled plasma mass spectrometry was the most informative technique to assess overall vial quality and vial propensity for lamellae formation. The other techniques of conductivity, flame photometry, and accelerated lamellae formation condition may still be suitable for routine screening of vial lots produced under consistent processes. Recently, delamination that produced small glass like flakes termed lamellae has been observed in glass vials that are commonly used as primary containers for pharmaceutical drug products under certain conditions during storage. The main cause of these lamellae was the quality of the glass itself related to the manufacturing process. Current European Pharmacopoeia method to assess glass vial quality utilizes acid titration of vial extract pools to determine hydrolytic resistance or alkalinity. As alternative to the European Pharmacopoeia method, four other techniques were assessed. Three new techniques of conductivity, flame photometry, and inductively coupled plasma mass spectrometry measured the vial extract pool as acid titration to quantify quality, and they demonstrated good correlation with original alkalinity. The fourth technique processed the vials under conditions that promote delamination, termed accelerated lamellae formation, and the vials were then inspected visually for lamellae. The accelerated lamellae formation technique also showed good correlation with alkalinity. Of the new four techniques, inductively coupled plasma mass spectrometry was the most informative technique to assess overall vial quality even with differences in processing between vial lots. Other three techniques were still suitable for routine screening of vial lots produced under consistent processes. © PDA, Inc. 2016.
Biometric feature embedding using robust steganography technique
NASA Astrophysics Data System (ADS)
Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.
2013-05-01
This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images, over open networks. More specifically, the aim is to embed binarised features extracted using discrete wavelet transforms and local binary patterns of face images as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane but instead of changing the cover image LSB values, the second LSB plane will be changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it is eliminating the weakness of traditional LSB schemes that is exploited by steganalysis techniques for LSB, such as PoV and RS steganalysis, to detect the existence of secrete message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discussed variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintain accuracy of face recognition.
Query-oriented evidence extraction to support evidence-based medicine practice.
Sarker, Abeed; Mollá, Diego; Paris, Cecile
2016-02-01
Evidence-based medicine practice requires medical practitioners to rely on the best available evidence, in addition to their expertise, when making clinical decisions. The medical domain boasts a large amount of published medical research data, indexed in various medical databases such as MEDLINE. As the size of this data grows, practitioners increasingly face the problem of information overload, and past research has established the time-associated obstacles faced by evidence-based medicine practitioners. In this paper, we focus on the problem of automatic text summarisation to help practitioners quickly find query-focused information from relevant documents. We utilise an annotated corpus that is specialised for the task of evidence-based summarisation of text. In contrast to past summarisation approaches, which mostly rely on surface level features to identify salient pieces of texts that form the summaries, our approach focuses on the use of corpus-based statistics, and domain-specific lexical knowledge for the identification of summary contents. We also apply a target-sentence-specific summarisation technique that reduces the problem of underfitting that persists in generic summarisation models. In automatic evaluations run over a large number of annotated summaries, our extractive summarisation technique statistically outperforms various baseline and benchmark summarisation models with a percentile rank of 96.8%. A manual evaluation shows that our extractive summarisation approach is capable of selecting content with high recall and precision, and may thus be used to generate bottom-line answers to practitioners' queries. Our research shows that the incorporation of specialised data and domain-specific knowledge can significantly improve text summarisation performance in the medical domain. Due to the vast amounts of medical text available, and the high growth of this form of data, we suspect that such summarisation techniques will address the time-related obstacles associated with evidence-based medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1993-06-01
This progress report describes accomplishments of four programs. The four programs are entitled (1) Faster,simpler processing of positron-computing precursors: New physicochemical approaches, (2) Novel solid phase reagents and methods to improve radiosynthesis and isotope production, (3) Quantitative evaluation of the extraction of information from PET images, and (4) Optimization of tracer kinetic methods for radioligand studies in PET.
NASA Technical Reports Server (NTRS)
Alvarado, U. R. (Editor)
1980-01-01
The adequacy of current technology in terms of stage of maturity, of sensing, support systems, and information extraction was assessed relative to oil spills, waste pollution, and inputs to pollution trajectory models. Needs for advanced techniques are defined and the characteristics of a future satellite system are determined based on the requirements of U.S. agencies involved in pollution monitoring.
Alternative and Efficient Extraction Methods for Marine-Derived Compounds
Grosso, Clara; Valentão, Patrícia; Ferreres, Federico; Andrade, Paula B.
2015-01-01
Marine ecosystems cover more than 70% of the globe’s surface. These habitats are occupied by a great diversity of marine organisms that produce highly structural diverse metabolites as a defense mechanism. In the last decades, these metabolites have been extracted and isolated in order to test them in different bioassays and assess their potential to fight human diseases. Since traditional extraction techniques are both solvent- and time-consuming, this review emphasizes alternative extraction techniques, such as supercritical fluid extraction, pressurized solvent extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed electric field-assisted extraction, enzyme-assisted extraction, and extraction with switchable solvents and ionic liquids, applied in the search for marine compounds. Only studies published in the 21st century are considered. PMID:26006714
SPECTRAL LINE DE-CONFUSION IN AN INTENSITY MAPPING SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Yun-Ting; Bock, James; Bradford, C. Matt
2016-12-01
Spectral line intensity mapping (LIM) has been proposed as a promising tool to efficiently probe the cosmic reionization and the large-scale structure. Without detecting individual sources, LIM makes use of all available photons and measures the integrated light in the source confusion limit to efficiently map the three-dimensional matter distribution on large scales as traced by a given emission line. One particular challenge is the separation of desired signals from astrophysical continuum foregrounds and line interlopers. Here we present a technique to extract large-scale structure information traced by emission lines from different redshifts, embedded in a three-dimensional intensity mapping data cube.more » The line redshifts are distinguished by the anisotropic shape of the power spectra when projected onto a common coordinate frame. We consider the case where high-redshift [C ii] lines are confused with multiple low-redshift CO rotational lines. We present a semi-analytic model for [C ii] and CO line estimates based on the cosmic infrared background measurements, and show that with a modest instrumental noise level and survey geometry, the large-scale [C ii] and CO power spectrum amplitudes can be successfully extracted from a confusion-limited data set, without external information. We discuss the implications and limits of this technique for possible LIM experiments.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boing, L.E.; Miller, R.L.
1983-10-01
This document presents, in summary form, generic conceptual information relevant to the decommissioning of a reference test reactor (RTR). All of the data presented were extracted from NUREG/CR-1756 and arranged in a form that will provide a basis for future comparison studies for the Evaluation of Nuclear Facility Decommissioning Projects (ENFDP) program. During the data extraction process no attempt was made to challenge any of the assumptions used in the original studies nor was any attempt made to update assumed methods or processes to state-of-the-art decommissioning techniques. In a few instances obvious errors were corrected after consultation with the studymore » author.« less
Geographical Text Analysis: A new approach to understanding nineteenth-century mortality.
Porter, Catherine; Atkinson, Paul; Gregory, Ian
2015-11-01
This paper uses a combination of Geographic Information Systems (GIS) and corpus linguistic analysis to extract and analyse disease related keywords from the Registrar-General's Decennial Supplements. Combined with known mortality figures, this provides, for the first time, a spatial picture of the relationship between the Registrar-General's discussion of disease and deaths in England and Wales in the nineteenth and early twentieth centuries. Techniques such as collocation, density analysis, the Hierarchical Regional Settlement matrix and regression analysis are employed to extract and analyse the data resulting in new insight into the relationship between the Registrar-General's published texts and the changing mortality patterns during this time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis including tempos analysis including tempos of audio and visual information in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sport or news, we can explore models with the specific application domain knowledge. For movie contents, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
Extraction of lead and ridge characteristics from SAR images of sea ice
NASA Technical Reports Server (NTRS)
Vesecky, John F.; Smith, Martha P.; Samadani, Ramin
1990-01-01
Image-processing techniques for extracting the characteristics of lead and pressure ridge features in SAR images of sea ice are reported. The methods are applied to a SAR image of the Beaufort Sea collected from the Seasat satellite on October 3, 1978. Estimates of lead and ridge statistics are made, e.g., lead and ridge density (number of lead or ridge pixels per unit area of image) and the distribution of lead area and orientation as well as ridge length and orientation. The information derived is useful in both ice science and polar operations for such applications as albedo and heat and momentum transfer estimates, as well as ship routing and offshore engineering.
Extraction and purification methods in downstream processing of plant-based recombinant proteins.
Łojewska, Ewelina; Kowalczyk, Tomasz; Olejniczak, Szymon; Sakowicz, Tomasz
2016-04-01
During the last two decades, the production of recombinant proteins in plant systems has been receiving increased attention. Currently, proteins are considered as the most important biopharmaceuticals. However, high costs and problems with scaling up the purification and isolation processes make the production of plant-based recombinant proteins a challenging task. This paper presents a summary of the information regarding the downstream processing in plant systems and provides a comprehensible overview of its key steps, such as extraction and purification. To highlight the recent progress, mainly new developments in the downstream technology have been chosen. Furthermore, besides most popular techniques, alternative methods have been described. Copyright © 2015 Elsevier Inc. All rights reserved.
A new shock wave assisted sandalwood oil extraction technique
NASA Astrophysics Data System (ADS)
Arunkumar, A. N.; Srinivasa, Y. B.; Ravikumar, G.; Shankaranarayana, K. H.; Rao, K. S.; Jagadeesh, G.
A new shock wave assisted oil extraction technique from sandalwood has been developed in the Shock Waves Lab, IISc, Bangalore. The fragrant oil extracted from sandalwood finds variety of applications in medicine and perfumery industries. In the present method sandal wood specimens (2.5mm diameter and 25mm in length)are subjected to shock wave loading (over pressure 15 bar)in a constant area shock tube, before extracting the sandal oil using non-destructive oil extraction technique. The results from the study indicates that both the rate of extraction as well as the quantity of oil obtained from sandal wood samples exposed to shock waves are higher (15-40 percent) compared to non-destructive oil extraction technique. The compressive squeezing of the interior oil pockets in the sandalwood specimen due to shock wave loading appears to be the main reason for enhancement in the oil extraction rate. This is confirmed by the presence of warty structures in the cross-section and micro-fissures in the radial direction of the wood samples exposed to shock waves in the scanning electron microscopic investigation. In addition the gas chromatographic studies do not show any change in the q uality of sandal oil extracted from samples exposed to shock waves.
Extracting hidden messages in steganographic images
Quach, Tu-Thach
2014-07-17
The eventual goal of steganalytic forensic is to extract the hidden messages embedded in steganographic images. A promising technique that addresses this problem partially is steganographic payload location, an approach to reveal the message bits, but not their logical order. It works by finding modified pixels, or residuals, as an artifact of the embedding process. This technique is successful against simple least-significant bit steganography and group-parity steganography. The actual messages, however, remain hidden as no logical order can be inferred from the located payload. This paper establishes an important result addressing this shortcoming: we show that the expected mean residualsmore » contain enough information to logically order the located payload provided that the size of the payload in each stego image is not fixed. The located payload can be ordered as prescribed by the mean residuals to obtain the hidden messages without knowledge of the embedding key, exposing the vulnerability of these embedding algorithms. We provide experimental results to support our analysis.« less
Analysis and automatic identification of sleep stages using higher order spectra.
Acharya, U Rajendra; Chua, Eric Chern-Pin; Chua, Kuang Chua; Min, Lim Choo; Tamura, Toshiyo
2010-12-01
Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature. It is difficult to perform sleep staging by visual interpretation and linear techniques. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for various sleep stages were proposed. These can be used as visual aid for various diagnostics application. A number of HOS based features were extracted from these plots during the various sleep stages (Wakefulness, Rapid Eye Movement (REM), Stage 1-4 Non-REM) and they were found to be statistically significant with p-value lower than 0.001 using ANOVA test. These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that nongaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of {approx}0.09 {mu}m. This represents a reasonablemore » trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.« less
Fringe pattern demodulation with a two-frame digital phase-locked loop algorithm.
Gdeisat, Munther A; Burton, David R; Lalor, Michael J
2002-09-10
A novel technique called a two-frame digital phase-locked loop for fringe pattern demodulation is presented. In this scheme, two fringe patterns with different spatial carrier frequencies are grabbed for an object. A digital phase-locked loop algorithm tracks and demodulates the phase difference between both fringe patterns by employing the wrapped phase components of one of the fringe patterns as a reference to demodulate the second fringe pattern. The desired phase information can be extracted from the demodulated phase difference. We tested the algorithm experimentally using real fringe patterns. The technique is shown to be suitable for noncontact measurement of objects with rapid surface variations, and it outperforms the Fourier fringe analysis technique in this aspect. Phase maps produced withthis algorithm are noisy in comparison with phase maps generated with the Fourier fringe analysis technique.
Timber Resources Inventory and Monitoring Joint Research Project
NASA Technical Reports Server (NTRS)
Hill, C. L.
1985-01-01
Primary objectives were to develop remote sensing analysis techniques for extracting forest related information from LANDSAT Multispectral Scanner (MMS) and Thematic Mapper data and to determine the extent to which International Paper Company information needs can be addressed with remote sensing information. The company actively manages 8.4 million acres of forest land. Traditionally, their forest inventories, updated on a three year cycle, are conducted through field surveys and aerial photography. The results reside in a digital forest data base containing 240 descriptive parameteres for individual forest stands. The information in the data base is used to develop seasonal and long range management strategies. Forest stand condition assessements (species composition, age, and density stratification) and identification of silvicultural activities (site preparation, planting, thinning, and harvest) are addressed.
NASA Astrophysics Data System (ADS)
Bolton, J. S.; Gold, E.
1986-10-01
In a companion paper the cepstral technique for the measurement of reflection coefficients was described. In particular the concepts of extraction noise and extraction delay were introduced. They are considered further here, and, in addition, a means of extending the cepstral technique to accommodate surfaces having lengthy impulse responses is described. The character of extraction noise, a cepstral component which interferes with reflection measurements, is largely determined by the spectrum of the signal radiated from the source loudspeaker. Here the origin and effects of extraction noise are discussed and it is shown that inverse filtering techniques may be used to reduce extraction noise without making impractical demands of the electrical test signal or the source loudspeaker. The extraction delay, a factor which is introduced when removing the reflector impulse response from the power cepstrum, has previously been estimated by a cross-correlation technique. Here the importance of estimating the extraction delay accurately is emphasized by showing the effect of small spurious delays on the calculation of the normal impedance of a reflecting surface. The effects are shown to accord with theory, and it was found that the real part of the estimated surface normal impedance is very nearly maximized when the spurious delay is eliminated; this has suggested a new way of determining the extraction delay itself. Finally, the basic cepstral technique is suited only to the measurement of surfaces whose impulse responses are shorter than τ, the delay between the arrival of the direct and specularly reflected components at the measurement position. Here it is shown that this restriction can be eliminated, by using a process known as cepstral inversion, when the direct cepstrum has a duration less than τ and cepstral aliasing is insignificant. It is also possible to use this technique to deconvolve a signal from an echo sequence in the time domain, an operation previously associated with the complex cepstrum rather than with the power cepstrum as used here.