Science.gov

Sample records for automated relation extraction

  1. Hybrid curation of gene–mutation relations combining automated extraction and crowdsourcing

    PubMed Central

    Burger, John D.; Doughty, Emily; Khare, Ritu; Wei, Chih-Hsuan; Mishra, Rajashree; Aberdeen, John; Tresner-Kirsch, David; Wellner, Ben; Kann, Maricel G.; Lu, Zhiyong; Hirschman, Lynette

    2014-01-01

    Background: This article describes capture of biological information using a hybrid approach that combines natural language processing to extract biological entities and crowdsourcing with annotators recruited via Amazon Mechanical Turk to judge the correctness of candidate biological relations. These techniques were applied to extract gene–mutation relations from biomedical abstracts with the goal of supporting production-scale capture of gene–mutation–disease findings as an open source resource for personalized medicine. Results: The hybrid system could be configured to provide good performance for gene–mutation extraction (precision ∼82%; recall ∼70% against an expert-generated gold standard) at a cost of $0.76 per abstract. This demonstrates that crowd labor platforms such as Amazon Mechanical Turk can be used to recruit quality annotators, even in an application requiring subject matter expertise; aggregated Turker judgments for gene–mutation relations exceeded 90% accuracy. Over half of the precision errors were due to mismatches against the gold standard hidden from annotator view (e.g. an incorrect EntrezGene identifier or an incorrect extracted mutation position), or incomplete task instructions (e.g. the need to exclude nonhuman mutations). Conclusions: The hybrid curation model provides a readily scalable, cost-effective approach to curation, particularly if coupled with expert human review to filter precision errors. We plan to generalize the framework and make it available as open source software. Database URL: http://www.mitre.org/publications/technical-papers/hybrid-curation-of-gene-mutation-relations-combining-automated PMID:25246425
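
    A minimal sketch of the judgment-aggregation step this record describes (majority voting over crowd judgments); the relation IDs, label values, and function names are illustrative assumptions, not from the paper.

```python
from collections import Counter

def aggregate_judgments(judgments):
    """Majority-vote aggregation of crowd judgments.

    `judgments` maps a candidate relation ID to the list of yes/no
    votes collected from individual workers.  Returns the accepted
    relation IDs plus the vote margin for optional expert review.
    """
    accepted = {}
    for relation_id, votes in judgments.items():
        counts = Counter(votes)
        top_label, top_count = counts.most_common(1)[0]
        if top_label == "yes":
            accepted[relation_id] = top_count / len(votes)
    return accepted

# Example: three Turkers judge two candidate gene-mutation relations.
votes = {
    "BRCA1:c.68_69delAG": ["yes", "yes", "no"],
    "TP53:p.R175H":       ["no", "no", "yes"],
}
print(aggregate_judgments(votes))  # {'BRCA1:c.68_69delAG': 0.666...}
```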

  2. Automated Neuroanatomical Relation Extraction: A Linguistically Motivated Approach with a PVT Connectivity Graph Case Study

    PubMed Central

    Gökdeniz, Erinç; Özgür, Arzucan; Canbeyli, Reşit

    2016-01-01

    Identifying the relations among different regions of the brain is vital for a better understanding of how the brain functions. While a large number of studies have investigated the neuroanatomical and neurochemical connections among brain structures, their specific findings are scattered across many years and many types of publications. Text mining techniques have provided the means to extract specific types of information from a large number of publications with the aim of presenting a larger, if not necessarily exhaustive, picture. By using natural language processing techniques, the present paper aims to identify connectivity relations among brain regions in general and relations relevant to the paraventricular nucleus of the thalamus (PVT) in particular. We introduce a linguistically motivated approach based on patterns defined over the constituency and dependency parse trees of sentences. Besides the presence of a relation between a pair of brain regions, the proposed method also identifies the directionality of the relation, which enables the creation and analysis of a directional brain region connectivity graph. The approach is evaluated over the manually annotated data sets of the WhiteText Project. In addition, as a case study, the method is applied to extract and analyze the connectivity graph of PVT, which is an important brain region that is considered to influence many functions ranging from arousal, motivation, and drug-seeking behavior to attention. The results of the PVT connectivity graph suggest that PVT may be a new target for research in mood assessment. PMID:27708573
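
    A toy sketch of pattern matching over a dependency parse to recover a directed connectivity relation, in the spirit of the approach this record describes; the triples, relation labels, and region names are illustrative assumptions and no parser library is invoked.

```python
# A toy pattern matcher over dependency triples (head, relation, dependent).
# Assumes sentences are already parsed; region names are placeholders.

def extract_connections(triples, regions):
    """Find directed REGION-projects-to-REGION relations.

    Looks for a verb such as 'projects' linking a brain-region subject
    (nsubj) to a brain-region object of 'to' (obl).  Direction runs
    from subject to object, mirroring the directionality noted above.
    """
    connections = []
    for head, rel, dep in triples:
        if rel == "nsubj" and head == "projects" and dep in regions:
            source = dep
            for h2, r2, d2 in triples:
                if h2 == "projects" and r2 == "obl" and d2 in regions:
                    connections.append((source, d2))
    return connections

# "The PVT projects to the nucleus accumbens."
parsed = [("projects", "nsubj", "PVT"), ("projects", "obl", "nucleus accumbens")]
print(extract_connections(parsed, {"PVT", "nucleus accumbens"}))
# [('PVT', 'nucleus accumbens')]
```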

  3. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, recirculation zones, and vortices (which highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  4. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is the detection of features such as shocks, recirculation zones, and vortices (which highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  5. Automated DNA extraction from pollen in honey.

    PubMed

    Guertler, Patrick; Eicheldinger, Adelina; Muschler, Paul; Goerlich, Ottmar; Busch, Ulrich

    2014-04-15

    In recent years, honey has become a subject of DNA analysis due to potential risks evoked by microorganisms, allergens or genetically modified organisms. However, so far, only a few DNA extraction procedures are available, and these are mostly time-consuming and laborious. Therefore, we developed an automated method for extracting DNA from pollen in honey, based on a CTAB buffer-based DNA extraction using the Maxwell 16 instrument and the Maxwell 16 FFS Nucleic Acid Extraction System, Custom-Kit. We altered several components and extraction parameters and compared the optimised method with a manual CTAB buffer-based DNA isolation method. The automated DNA extraction was faster and resulted in higher DNA yield and sufficient DNA purity. Real-time PCR results obtained after automated DNA extraction are comparable to results after manual DNA extraction. No PCR inhibition was observed. The applicability of this method was further successfully confirmed by analysis of different routine honey samples. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper presents a definition for secondary flow and one approach for automatically detecting and visualizing it.

  7. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    PubMed

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to simultaneously provide accurate information about the wine content of 20 relevant aroma compounds and about their relative transfer rates to the headspace, and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced into a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of the volatile not transferred to the trap in one extraction cycle, and which seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long chain fatty acid ethyl esters. It is expected that these differences, likely linked to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, may be responsible for relevant sensory changes, and may even explain why the same aroma composition can produce different aroma perceptions in two different wines. Copyright © 2012 Elsevier B.V. All rights reserved.
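
    A small sketch of the classical multiple headspace extraction calculation referenced above: consecutive peak areas decay geometrically, so the total analyte amount follows from a geometric-series sum. The function name and example numbers are illustrative; the paper's exact fitting procedure may differ.

```python
import numpy as np

def mhe_estimate(areas):
    """Classical multiple headspace extraction (MHE) estimate.

    In MHE the peak areas of consecutive extraction cycles decay
    geometrically, A_i = A_1 * beta**(i - 1), so the total analyte
    amount is the geometric-series sum A_1 / (1 - beta).  beta is
    obtained from a log-linear fit of area versus cycle number.
    """
    cycles = np.arange(1, len(areas) + 1)
    slope, intercept = np.polyfit(cycles, np.log(areas), 1)
    beta = np.exp(slope)              # fraction left behind per cycle
    total = areas[0] / (1.0 - beta)   # extrapolated total amount
    return total, beta

# Four cycles with ~30% of the analyte retained per cycle:
print(mhe_estimate([1000.0, 300.0, 90.0, 27.0]))  # (~1428.6, ~0.3)
```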

  8. Special Relations in Automated Deduction,

    DTIC Science & Technology

    1985-05-01

    Two deduction rules are introduced to give streamlined treatment to relations of special importance in an automated theorem-proving system...

  9. Acceleration of Automated HI Source Extraction

    NASA Astrophysics Data System (ADS)

    Badenhorst, S. J.; Blyth, S.; Kuttel, M. M.

    2013-10-01

    We aim to enable fast automated extraction of neutral hydrogen (HI) sources from large survey data sets. This requires both handling the large files (>5 TB) to be produced by next-generation interferometers and acceleration of the source extraction algorithm. We develop an efficient multithreaded implementation of the A'Trous wavelet reconstruction algorithm, which we evaluate against the serial implementation in the DUCHAMP package. We also evaluate three memory management libraries (Mmap, Boost and Stxxl) that enable processing of data files too large to fit into main memory, to establish which provides the best performance.
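
    A 1-D sketch of the "à trous" (with holes) wavelet decomposition named in this record, using the standard B3-spline kernel; DUCHAMP applies the same idea in up to three dimensions, and the thresholding value here is an illustrative assumption.

```python
import numpy as np

# B3-spline smoothing kernel commonly used in the a trous transform.
B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous_decompose(signal, scales):
    """Return wavelet planes w_1..w_J and the final smooth array c_J.

    At each scale the kernel is dilated by inserting 2**j - 1 zeros
    ("holes") between taps, so no decimation is ever performed.
    The signal is recovered exactly as c_J + sum(w_j).
    """
    c = signal.astype(float)
    planes = []
    for j in range(scales):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[::2**j] = B3                 # dilated kernel with holes
        smooth = np.convolve(c, kernel, mode="same")
        planes.append(c - smooth)           # wavelet plane at scale j
        c = smooth
    return planes, c

# Reconstruction keeps only significant wavelet coefficients:
x = np.sin(np.linspace(0, 6, 256)) + 0.1 * np.random.randn(256)
planes, residual = atrous_decompose(x, scales=4)
recon = residual + sum(np.where(np.abs(p) > 0.05, p, 0) for p in planes)
```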

  10. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) Shocks, (2) Vortex cores, (3) Regions of recirculation, (4) Boundary layers, (5) Wakes. Three papers and an initial specification for the FX (Fluid eXtraction tool kit) Programmer's Guide are included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  11. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  12. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: Shocks; Vortex cores; Regions of Recirculation; Boundary Layers; Wakes.

  13. ACME, a GIS tool for Automated Cirque Metric Extraction

    NASA Astrophysics Data System (ADS)

    Spagnolo, Matteo; Pellitero, Ramon; Barr, Iestyn D.; Ely, Jeremy C.; Pellicer, Xavier M.; Rea, Brice R.

    2017-02-01

    Regional scale studies of glacial cirque metrics provide key insights on the (palaeo) environment related to the formation of these erosional landforms. The growing availability of high resolution terrain models means that more glacial cirques can be identified and mapped in the future. However, the extraction of their metrics still largely relies on time-consuming manual techniques or on combinations of more-or-less obsolete GIS tools. In this paper, a newly coded toolbox is provided for the automated, and comparatively quick, extraction of 16 key glacial cirque metrics, including length, width, circularity, planar and 3D area, elevation, slope, aspect, plan closure and hypsometry. The set of tools, named ACME (Automated Cirque Metric Extraction), is coded in Python, runs in one of the most commonly used GIS packages (ArcGIS) and has a user-friendly interface. A polygon layer of mapped cirques is required for all metrics, while a Digital Terrain Model and a point layer of cirque threshold midpoints are needed to run some of the tools. Results from ACME are comparable to those from other techniques and can be obtained rapidly, allowing large cirque datasets to be analysed and potentially important regional trends highlighted.
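
    A minimal sketch of the planar part of such a metric calculation, assuming a mapped cirque polygon as input; the circularity index shown (4πA/P², equal to 1 for a circle) is one common planimetric definition and may differ from ACME's own formulas.

```python
import numpy as np

def polygon_metrics(xy):
    """Planar metrics for a mapped cirque polygon (closed ring of x, y).

    Area via the shoelace formula, perimeter via segment lengths, and a
    common circularity index 4*pi*A / P**2 (1.0 for a perfect circle).
    ACME computes a larger metric set; this only sketches the planar part.
    """
    x, y = xy[:, 0], xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])))
    return {"area": area, "perimeter": perim,
            "circularity": 4 * np.pi * area / perim**2}

# A 500 m x 400 m rectangular "cirque" (coordinates in metres):
ring = np.array([[0, 0], [500, 0], [500, 400], [0, 400]], dtype=float)
print(polygon_metrics(ring))  # circularity ~0.78
```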

  14. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.
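
    A hedged sketch of the cluster-then-describe step mentioned above: non-terrain points are rasterized onto a grid and grouped into connected regions, one region per candidate feature. The grid size and inputs are assumptions, and TEXAS itself is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def cluster_features(points, cell=1.0):
    """Group non-terrain returns into candidate feature regions.

    points: (N, 2) array of x, y for non-terrain lidar returns.
    Occupied grid cells are labeled into connected regions
    (4-connected by default); returns a per-point region id.
    """
    ij = np.floor((points - points.min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    labels, n = ndimage.label(grid)       # connected-component labeling
    return labels[ij[:, 0], ij[:, 1]], n

pts = np.array([[0.2, 0.3], [0.8, 0.4], [10.1, 10.2], [10.6, 10.2]])
ids, n = cluster_features(pts)
print(n, ids)  # 2 regions: two points each
```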

  15. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  16. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation of neighbor pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are not robust to rotation, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features using 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output clearly high (white) values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristics (ROC) analysis. The AUC of ANN2 in our study was 0.960. The result can be used for the quantitative analysis of the blood vessels.
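
    A small sketch of how HLAC features are computed: each mask is a set of pixel displacements, and the feature is the sum over all positions of the product of image values at those displacements. The paper uses 105 mask patterns; only three illustrative low-order masks are shown here.

```python
import numpy as np

def hlac_features(img, masks):
    """High-order local autocorrelation (HLAC) features.

    For each mask (a list of (dy, dx) displacements), accumulate the
    per-pixel product of the shifted images, then sum over the image.
    """
    feats = []
    for mask in masks:
        prod = np.ones_like(img, dtype=float)
        for dy, dx in mask:
            prod *= np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        feats.append(prod.sum())
    return np.array(feats)

# 0th-, 1st- and 2nd-order example masks (displacements within 3x3):
masks = [
    [(0, 0)],                    # order 0: sum of intensities
    [(0, 0), (0, 1)],            # order 1: horizontal correlation
    [(0, 0), (-1, 0), (1, 0)],   # order 2: vertical triple
]
img = np.random.rand(64, 64)
print(hlac_features(img, masks))  # one value per mask -> ANN input
```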

  17. Automated DNA extraction for large numbers of plant samples.

    PubMed

    Mehle, Nataša; Nikolić, Petra; Rupar, Matevž; Boben, Jana; Ravnikar, Maja; Dermastia, Marina

    2013-01-01

    The method described here is a rapid, total DNA extraction procedure applicable to a large number of plant samples requiring pathogen detection. The procedure combines a simple and quick homogenization step of crude extracts with DNA extraction based upon the binding of DNA to magnetic beads. DNA is purified in an automated process in which the magnetic beads are transferred through a series of washing buffers. The eluted DNA is suitable for efficient amplification in PCR reactions.

  18. Automated sea floor extraction from underwater video

    NASA Astrophysics Data System (ADS)

    Kelly, Lauren; Rahmes, Mark; Stiver, James; McCluskey, Mike

    2016-05-01

    Ocean floor mapping using video is a method to simply and cost-effectively record large areas of the seafloor. Obtaining visual and elevation models has noteworthy applications in search and recovery missions. Hazards to navigation are abundant and pose a significant threat to the safety, effectiveness, and speed of naval operations and commercial vessels. This project's objective was to develop a workflow to automatically extract metadata from marine video and create image optical and elevation surface mosaics. Three developments made this possible. First, optical character recognition (OCR) by means of two-dimensional correlation, using a known character set, allowed for the capture of metadata from image files. Second, exploiting the image metadata (i.e., latitude, longitude, heading, camera angle, and depth readings) allowed for the determination of location and orientation of the image frame in mosaic. Image registration improved the accuracy of mosaicking. Finally, overlapping data allowed us to determine height information. A disparity map was created using the parallax from overlapping viewpoints of a given area and the relative height data was utilized to create a three-dimensional, textured elevation map.

  19. [DNA extraction from bones and teeth using AutoMate Express forensic DNA extraction system].

    PubMed

    Gao, Lin-Lin; Xu, Nian-Lai; Xie, Wei; Ding, Shao-Cheng; Wang, Dong-Jing; Ma, Li-Qin; Li, You-Ying

    2013-04-01

    To explore a new method for extracting DNA from bones and teeth automatically. Samples of 33 bones and 15 teeth were prepared using the freeze-mill and manual methods. DNA was extracted from the triturated samples and quantified using the AutoMate Express forensic DNA extraction system. DNA extraction from bones and teeth was completed in 3 hours using the AutoMate Express forensic DNA extraction system. There was no statistical difference between the two methods in the DNA concentration of bones. Both bone and tooth samples prepared by the freeze-mill method yielded good STR typing, and the DNA concentration from teeth was higher than that obtained by the manual method. The AutoMate Express forensic DNA extraction system is a new method for extracting DNA from bones and teeth, which can be applied in forensic practice.

  20. Automating data extraction in systematic reviews: a systematic review.

    PubMed

    Jonnalagadda, Siddhartha R; Goyal, Pawan; Huffman, Mark D

    2015-06-15

    Automation of parts of the systematic review process, specifically the data extraction step, may be an important strategy to reduce the time necessary to complete a systematic review. However, the state of the science of automatically extracting data elements from full texts has not been well described. This paper presents a systematic review of published and unpublished methods to automate data extraction for systematic reviews. We systematically searched PubMed, IEEEXplore, and ACM Digital Library to identify potentially relevant articles. We included reports that met the following criteria: 1) the methods or results section described what entities were or needed to be extracted, and 2) at least one entity was automatically extracted, with evaluation results presented for that entity. We also reviewed the citations from included reports. Out of a total of 1190 unique citations that met our search criteria, we found 26 published reports describing automatic extraction of at least one of more than 52 potential data elements used in systematic reviews. For 25 (48 %) of the data elements used in systematic reviews, there were attempts from various researchers to extract information automatically from the publication text. Of these, 14 (27 %) data elements were completely extracted, but the highest number of data elements extracted automatically by a single study was 7. Most of the data elements were extracted with F-scores (the harmonic mean of sensitivity and positive predictive value) of over 70 %. We found no unified information extraction framework tailored to the systematic review process, and published reports focused on a limited (1-7) number of data elements. Biomedical natural language processing techniques have not yet been fully utilized to automate, either fully or partially, the data extraction step of systematic reviews.
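
    For reference, the F-score reported in this record is the harmonic mean of sensitivity (recall) and positive predictive value (precision); a quick worked example with illustrative numbers:

```python
def f_score(sensitivity, ppv):
    """F-score: harmonic mean of sensitivity (recall) and PPV (precision)."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# An extractor recovering 80% of true elements at 65% precision:
print(round(f_score(0.80, 0.65), 3))  # 0.717 -> "over 70 %"
```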

  1. Docking automation related technology, Phase 2 report

    SciTech Connect

    Jatko, W.B.; Goddard, J.S.; Gleason, S.S.; Ferrell, R.K.

    1995-04-01

    This report summarizes progress on Phase II of the Docking Automation Related Technologies task component within the Modular Artillery Ammunition Delivery System (MAADS) technology demonstrator of the Future Armored Resupply Vehicle (FARV) project. This report also covers development activity at Oak Ridge National Laboratory (ORNL) during the period from January to July 1994.

  2. Extrinsic Evaluation of Automated Information Extraction Programs

    DTIC Science & Technology

    2010-05-01

    promising IE tools; the former was developed by the Natural Language Processing (NLP) Group of the University of Sheffield and the latter by the Center...Human Intelligence (STEF HUMINT) message set, and a Google message set. Extracted information can be visualized or formatted and stored as Resource...Since the project involved training Automap on only one of the three corpora, the STEF HUMINT message set, the delete list was aimed at removing

  3. Automated extraction of radiation dose information for CT examinations.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan; Maidment, Andrew D A; Kim, Woojin; Boonn, William W

    2010-11-01

    Exposure to radiation as a result of medical imaging is currently in the spotlight, receiving attention from Congress as well as the lay press. Although scanner manufacturers are moving toward including effective dose information in the Digital Imaging and Communications in Medicine headers of imaging studies, there is a vast repository of retrospective CT data at every imaging center that stores dose information in an image-based dose sheet. As such, it is difficult for imaging centers to participate in the ACR's Dose Index Registry. The authors have designed an automated extraction system to query their PACS archive and parse CT examinations to extract the dose information stored in each dose sheet. First, an open-source optical character recognition program processes each dose sheet and converts the information to American Standard Code for Information Interchange (ASCII) text. Each text file is parsed, and radiation dose information is extracted and stored in a database which can be queried using an existing pathology and radiology enterprise search tool. Using this automated extraction pipeline, it is possible to perform dose analysis on the >800,000 CT examinations in the PACS archive and generate dose reports for all of these patients. It is also possible to more effectively educate technologists, radiologists, and referring physicians about exposure to radiation from CT by generating report cards for interpreted and performed studies. The automated extraction pipeline enables compliance with the ACR's reporting guidelines and greater awareness of radiation dose to patients, thus resulting in improved patient care and management.
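
    An illustrative sketch of the parsing stage described above: extracting per-series dose values from OCR'd dose-sheet text with a regular expression. CTDIvol and DLP are standard dose-sheet quantities, but the line layout assumed here varies by scanner vendor and is not taken from the paper.

```python
import re

# Matches lines such as "1 HELICAL 12.3 456.7": series number,
# free text, then CTDIvol (mGy) and DLP (mGy*cm) at the end of line.
DOSE_LINE = re.compile(
    r"(?P<series>\d+)\s+.*?"
    r"(?P<ctdivol>\d+(?:\.\d+)?)\s+"
    r"(?P<dlp>\d+(?:\.\d+)?)\s*$"
)

def parse_dose_sheet(ascii_text):
    """Extract per-series CTDIvol and DLP values from OCR output."""
    records = []
    for line in ascii_text.splitlines():
        m = DOSE_LINE.search(line)
        if m:
            records.append({"series": int(m.group("series")),
                            "CTDIvol": float(m.group("ctdivol")),
                            "DLP": float(m.group("dlp"))})
    return records

ocr_text = "1 HELICAL 12.3 456.7\n2 HELICAL 8.1 210.0"
print(parse_dose_sheet(ocr_text))
```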

  4. Determination of steroid sex hormones and related synthetic compounds considered as endocrine disrupters in water by fully automated on-line solid-phase extraction-liquid chromatography-diode array detection.

    PubMed

    López de Alda, M J; Barceló, D

    2001-03-16

    In this study, a procedure for the simultaneous determination in water of six estrogens (estradiol, estriol, estrone, ethynyl estradiol, mestranol, and diethylstilbestrol) and three progestogens (progesterone, norethindrone, and levonorgestrel), selected based on their abundance in the human body, their estrogenic potency, and the extent of their use in contraceptive pills, was developed. The procedure, based on the on-line solid-phase extraction (SPE) of the water sample and subsequent analysis by liquid chromatography/diode array detection (LC/DAD), allows for the monitoring of up to 16 samples in a completely automated, unattended way. The SPE experimental conditions were optimized and the polymeric cartridge PLRP-S selected out of four different cartridges evaluated. The chromatographic separation was carried out on a LiChrospher 100 RP-18 and detection was performed at 200, 225, and 240 nm. The applicability of the method to the analysis of various environmental water samples, including drinking water, groundwater, surface water and sewage treatment plant effluents, was evaluated. Method detection limits were in the range 10-20 ng/l. The method precision and accuracy were satisfactory, with recovery percentages ranging from 96 to 111% and relative standard deviations lower than 3%. The technique is also inexpensive, fast, and easy and is, therefore, well suited for routine monitoring. To the authors' knowledge it constitutes the first work describing a fully automated, on-line methodology for the continuous monitoring of these compounds in water.

  5. Automated vasculature extraction from placenta images

    NASA Astrophysics Data System (ADS)

    Almoussa, Nizar; Dutra, Brittany; Lampe, Bryce; Getreuer, Pascal; Wittman, Todd; Salafia, Carolyn; Vese, Luminita

    2011-03-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental blood vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular network pattern is the extraction of the blood vessels, which has only been done manually through a costly and time-consuming process. There is no existing method to automatically detect placental blood vessels; in addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given image by using image processing techniques and neural networks. We evaluate several local features for every pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels. A set of images where blood vessels are manually highlighted is used to train the network. We then apply the neural network to recognize blood vessels in new images. The network is effective in capturing the most prominent vascular structures of the placenta.

  6. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
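
    A short sketch of the spectral information divergence (SID) metric named above, computed between two pixel spectra; the epsilon guard and example spectra are illustrative.

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """Spectral information divergence (SID) between two spectra.

    Each spectrum is normalised to a probability distribution p, q;
    SID is the symmetrised relative entropy
        SID(x, y) = sum(p * log(p/q)) + sum(q * log(q/p)),
    small values meaning spectrally similar pixels.
    """
    p = np.asarray(x, float) / (np.sum(x) + eps) + eps
    q = np.asarray(y, float) / (np.sum(y) + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Two 4-band pixel spectra; the more dissimilar pair scores higher:
a, b, c = [0.2, 0.4, 0.3, 0.1], [0.21, 0.38, 0.3, 0.11], [0.5, 0.1, 0.1, 0.3]
print(spectral_information_divergence(a, b))  # ~0 (similar)
print(spectral_information_divergence(a, c))  # larger (dissimilar)
```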

  7. Automated Road Extraction from High Resolution Multispectral Imagery

    SciTech Connect

    Doucette, Peter J.; Agouris, Peggy; Stefanidis, Anthony

    2004-12-01

    Road networks represent a vital component of geospatial data sets in high demand, and thus contribute significantly to extraction labor costs. Multispectral imagery has only recently become widely available at high spatial resolutions, and modeling spectral content has received limited consideration for road extraction algorithms. This paper presents a methodology that exploits spectral content for fully automated road centerline extraction. Preliminary detection of road centerline pixel candidates is performed with Anti-parallel-edge Centerline Extraction (ACE). This is followed by constructing a road vector topology with a fuzzy grouping model that links nodes from a self-organized mapping of the ACE pixels. Following topology construction, a self-supervised road classification (SSRC) feedback loop is implemented to automate the process of training sample selection and refinement for a road class, as well as deriving practical spectral definitions for non-road classes. SSRC demonstrates a potential to provide dramatic improvement in road extraction results by exploiting spectral content. Road centerline extraction results are presented for three 1m color-infrared suburban scenes, which show significant improvement following SSRC.

  8. Applications of the Automated SMAC Modal Parameter Extraction Package

    SciTech Connect

    MAYES,RANDALL L.; DORRELL,LARRY R.; KLENKE,SCOTT E.

    1999-10-29

    An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. The new capabilities of the automated version are demonstrated on test data from a complex shell/payload system. Examples of extractions from impact and shaker data are shown. The automated algorithm extracts 30 to 50 modes in the bandwidth from each column of the frequency response function matrix. Examples of the synthesized Mode Indicator Functions (MIFs) compared with the actual MIFs show the accuracy of the technique. A data set for one input and 170 accelerometer outputs can typically be reduced in an hour. Application to a test with some complex modes is also demonstrated.

  9. Automated road extraction from aerial imagery by self-organization

    NASA Astrophysics Data System (ADS)

    Doucette, Peter J.

    To date, computer vision methods have largely focused on extraction from panchromatic imagery. Despite significant technological advances, road extraction algorithms have fallen short of satisfying rigorous production requirements. To that end, the objective of this thesis is to present a new approach for automating road detection from high-resolution multispectral imagery. This thesis considers three main research objectives: (1) development of a fully automated road extraction strategy in that interactive human supervision or input initializations are not required; (2) development of a globalized approach to road detection that is motivated by principles of self-organization; (3) meaningful exploitation of high-resolution multispectral imagery. Several new techniques are presented for fully automated road extraction from high-resolution imagery. The core algorithms implemented include (1) Anti-parallel edge Centerline Extractor (ACE), (2) Fuzzy Organization of Elongated Regions (FOrgER), and (3) Self-Organizing Road Finder (SORF). The ACE algorithm extends the idea of anti-parallel edge detection in a new approach that considers multi-layer images. The FOrgER algorithm is motivated by Gestalt grouping principles in perceptual organization. The FOrgER approach combines principles of self-organization with fuzzy inferencing to build road topology. Self-organization represents a learning paradigm that is neurobiologically motivated. Globalized analysis promotes lower sensitivity to fragmented information, and demonstrates robust capacity for handling scene clutter in high-resolution images. Finally, the SORF algorithm bridges concepts from ACE and FOrgER into a comprehensive and cooperative approach for fully automated road finding. By providing an exceptional breadth of input parameters, output metrics, modes of operation, and adaptability to various input, SORF is particularly well suited as an analytical research tool. Extraction results from the SORF

  10. Automated concept and relationship extraction for the semi-automated ontology management (SEAM) system.

    PubMed

    Doing-Harris, Kristina; Livnat, Yarden; Meystre, Stephane

    2015-01-01

    We develop medical-specialty specific ontologies that contain the settled science and common term usage. We leverage current practices in information and relationship extraction to streamline the ontology development process. Our system combines different text types with information and relationship extraction techniques in a low overhead modifiable system. Our SEmi-Automated ontology Maintenance (SEAM) system features a natural language processing pipeline for information extraction. Synonym and hierarchical groups are identified using corpus-based semantics and lexico-syntactic patterns. The semantic vectors we use are term frequency by inverse document frequency and context vectors. Clinical documents contain the terms we want in an ontology. They also contain idiosyncratic usage and are unlikely to contain the linguistic constructs associated with synonym and hierarchy identification. By including both clinical and biomedical texts, SEAM can recommend terms from those appearing in both document types. The set of recommended terms is then used to filter the synonyms and hierarchical relationships extracted from the biomedical corpus. We demonstrate the generality of the system across three use cases: ontologies for acute changes in mental status, Medically Unexplained Syndromes, and echocardiogram summary statements. Across the three use cases, we held the number of recommended terms relatively constant by changing SEAM's parameters. Experts seem to find more than 300 recommended terms to be overwhelming. The approval rate of recommended terms increased as the number and specificity of clinical documents in the corpus increased. It was 60% when there were 199 clinical documents that were not specific to the ontology domain and 90% when there were 2879 documents very specific to the target domain. We found that fewer than 100 recommended synonym groups were also preferred. Approval rates for synonym recommendations remained low varying from 43% to 25% as the
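
    A compact sketch of the tf-idf term vectors mentioned in this record, with cosine similarity as one way to compare terms or documents; the idf weighting shown (log N/df) is the textbook form, and SEAM's exact weighting may differ.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Term frequency * inverse document frequency vectors.

    `docs` is a list of token lists.  Returns one {term: weight} dict
    per document, with idf = log(N / df).
    """
    n_docs = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

docs = [["acute", "mental", "status", "change"],
        ["altered", "mental", "status"],
        ["echocardiogram", "summary"]]
v = tfidf_vectors(docs)
print(cosine(v[0], v[1]), cosine(v[0], v[2]))  # first pair more similar
```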

  11. Improved Automated Seismic Event Extraction Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Mackey, L.; Kleiner, A.; Jordan, M. I.

    2009-12-01

    Like many organizations engaged in seismic monitoring, the Preparatory Commission for the Comprehensive Test Ban Treaty Organization collects and processes seismic data from a large network of sensors. This data is continuously transmitted to a central data center, and bulletins of seismic events are automatically extracted. However, as for many such automated systems at present, the inaccuracy of this extraction necessitates substantial human analyst review effort. A significant opportunity for improvement thus lies in the fact that these systems currently fail to fully utilize the valuable repository of historical data provided by prior analyst reviews. In this work, we present the results of the application of machine learning approaches to several fundamental sub-tasks in seismic event extraction. These methods share as a common theme the use of historical analyst-reviewed bulletins as ground truth from which they extract relevant patterns to accomplish the desired goals. For instance, we demonstrate the effectiveness of classification and ranking methods for the identification of false events -- that is, those which will be invalidated and discarded by analysts -- in automated bulletins. We also show gains in the accuracy of seismic phase identification via the use of classification techniques to automatically assign seismic phase labels to station detections. Furthermore, we examine the potential of historical association data to inform the direct association of new signal detections with their corresponding seismic events. Empirical results are based upon parametric historical seismic detection and event data received from the Preparatory Commission for the Comprehensive Test Ban Treaty Organization.
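
    A hedged sketch of the false-event classification idea described above: train a standard classifier on historical analyst-reviewed bulletins, then score new automated events. The feature names and numbers are purely illustrative, not from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per automated event, e.g. number of associated detections,
# network magnitude, azimuthal gap (all illustrative placeholders).
X_hist = np.array([[12, 4.1, 40], [3, 2.0, 250], [8, 3.5, 90], [2, 1.8, 300]])
y_hist = np.array([1, 0, 1, 0])  # 1 = analyst kept event, 0 = discarded

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_hist, y_hist)
print(clf.predict_proba([[4, 2.2, 220]])[0, 1])  # P(event survives review)
```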

  12. Automated genomic DNA extraction from saliva using the QIAxtractor.

    PubMed

    Keijzer, Henry; Endenburg, Silvia C; Smits, Marcel G; Koopmann, Miriam

    2010-05-01

    Venipuncture is an invasive procedure for obtaining whole blood as a source of high-quality genomic DNA in sufficient amounts. Obtaining DNA from non-invasive sources is preferred by patients, medical doctors and researchers. Saliva collected with cotton swabs (Salivette) is increasingly being used to study chemical compounds, and it can also be a source of DNA. However, extracting DNA from Salivettes is very laborious and time consuming. Therefore, we developed a protocol for automated genomic DNA extraction from saliva collected in Salivettes using the QIAxtractor. Saliva (0.1-2.0 mL) was collected by chewing on a Salivette for 1-2 min. A total of 70 samples, collected from healthy volunteers, were extracted with the QIAxtractor robot and a Qiagen DX reagent pack. Quantity and quality were assessed using UV spectrometry and real-time polymerase chain reaction (PCR) (substitution at position -729 in the CYP1A2 gene). The average DNA concentration from the saliva samples was 6.0 microg/mL (95% CI 5.4-6.6 microg/mL). In 100% of the saliva samples, PCR products were detected with an average cycle threshold of 23.1 (95% CI 22.6-23.6). DNA can be extracted in sufficient amounts from Salivettes with a fully automated system with a short turnaround time. Real-time PCR can be performed with these samples.

  13. Automated RNA Extraction and Purification for Multiplexed Pathogen Detection

    SciTech Connect

    Bruzek, Amy K.; Bruckner-Lea, Cindy J.

    2005-01-01

    Pathogen detection has become an extremely important part of our nation's defense in this post-9/11 world, where the threat of bioterrorist attacks is a grim reality. When a biological attack takes place, response time is critical. The faster the biothreat is assessed, the faster countermeasures can be put in place to protect the health of the general public. Today some of the most widely used methods for detecting pathogens are either time consuming or not reliable [1]. Therefore, a method for detecting multiple pathogens that is inherently reliable, rapid, automated, and field portable is needed. To that end, we are developing automated fluidics systems for the recovery, cleanup, and direct labeling of community RNA from suspect environmental samples. The advantage of using RNA for detection is that there are multiple copies of mRNA in a cell, whereas there are normally only one or two copies of DNA [2]. Because there are multiple copies of mRNA in a cell for highly expressed genes, no amplification of the genetic material may be necessary, and thus rapid and direct detection of only a few cells may be possible [3]. This report outlines the development of both manual and automated methods for the extraction and purification of mRNA. The methods were evaluated using cell lysates from Escherichia coli 25922 (nonpathogenic), Salmonella typhimurium (pathogenic), and Shigella spp (pathogenic). Automated RNA purification was achieved using a custom sequential injection fluidics system consisting of a syringe pump, a multi-port valve and a magnetic capture cell. mRNA was captured using silica coated superparamagnetic beads that were trapped in the tubing by a rare earth magnet. RNA was detected by gel electrophoresis and/or by hybridization of the RNA to microarrays. The versatility of the fluidics systems and the ability to automate these systems allow for quick and easy processing of samples and eliminate the need for an experienced operator.

  14. Arduino-based automation of a DNA extraction system.

    PubMed

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies to detect infectious diseases with the molecular genetic method. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic bead, which is part of a portable molecular genetic test system. This DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing for the gene specimen, magnetic bead, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delay, and input-output manipulation. It drives the stepper motor with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard limit checking to prevent any over-travelling. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile.
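
    A host-side sketch of the G-code streaming described above, sending one compiled command at a time over a serial link and waiting for an acknowledgement; the port name, baud rate, and "ok" handshake are assumptions, since the abstract does not specify the protocol.

```python
import serial  # pyserial

def run_sequence(path, port="/dev/ttyACM0", baud=115200):
    """Stream a compiled G-code sequence file to the controller."""
    with serial.Serial(port, baud, timeout=10) as link, open(path) as f:
        for line in f:
            cmd = line.split(";")[0].strip()   # drop comments/blank lines
            if not cmd:
                continue
            link.write((cmd + "\n").encode())  # e.g. "G1 X12.5 F300"
            reply = link.readline().decode().strip()
            if reply != "ok":
                raise RuntimeError(f"controller rejected {cmd!r}: {reply}")

# run_sequence("dna_extraction.gcode")  # homing, transport, mix, wash...
```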

  15. A simple automated instrument for DNA extraction in forensic casework.

    PubMed

    Montpetit, Shawn A; Fitch, Ian T; O'Donnell, Patrick T

    2005-05-01

    The Qiagen BioRobot EZ1 is a small, rapid, and reliable automated DNA extraction instrument capable of extracting DNA from up to six samples in as few as 20 min using magnetic bead technology. The San Diego Police Department Crime Laboratory has validated the BioRobot EZ1 for the DNA extraction of evidence and reference samples in forensic casework. The BioRobot EZ1 was evaluated for use on a variety of different evidence sample types including blood, saliva, and semen evidence. The performance of the BioRobot EZ1 with regard to DNA recovery and potential cross-contamination was also assessed. DNA yields obtained with the BioRobot EZ1 were comparable to those from organic extraction. The BioRobot EZ1 was effective at removing PCR inhibitors, which often co-purify with DNA in organic extractions. The incorporation of the BioRobot EZ1 into forensic casework has streamlined the DNA analysis process by reducing the need for labor-intensive phenol-chloroform extractions.

  16. Automated labeling of bibliographic data extracted from biomedical online journals

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2003-01-01

    A prototype system has been designed to automate the extraction of bibliographic data (e.g., article title, authors, abstract, affiliation and others) from online biomedical journals to populate the National Library of Medicine's MEDLINE database. This paper describes a key module in this system: the labeling module that employs statistics and fuzzy rule-based algorithms to identify segmented zones in an article's HTML pages as specific bibliographic data. Results from experiments conducted with 1,149 medical articles from forty-seven journal issues are presented.

  17. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting the dune toe, dune crest, and dune heel is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel are spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
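
    A 1-D sketch of a relative-relief calculation averaged over several window sizes, in the spirit of the multi-scale RR approach above; the normalised definition (z minus window minimum over window range) and the window sizes are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def relative_relief(z, window):
    """Relative relief of a 1-D elevation transect at one window size.

    RR = (z - z_min) / (z_max - z_min) over a moving window.
    """
    half = window // 2
    rr = np.zeros_like(z, dtype=float)
    for i in range(len(z)):
        w = z[max(0, i - half): i + half + 1]
        span = w.max() - w.min()
        rr[i] = (z[i] - w.min()) / span if span > 0 else 0.0
    return rr

def multi_scale_rr(z, windows=(5, 11, 21)):
    """Average relative relief across several window sizes."""
    return np.mean([relative_relief(z, w) for w in windows], axis=0)

# Dune crest stands out as a local RR maximum on a toy transect:
transect = np.concatenate([np.linspace(0, 1, 30),   # beach
                           np.linspace(1, 6, 20),   # stoss slope
                           np.linspace(6, 3, 25)])  # crest to heel
print(multi_scale_rr(transect).argmax())  # index near the crest
```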

  18. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.

  19. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
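
    To make the frame-based idea concrete, here is a small illustrative sketch (not the AKG system itself) that turns hypothetical CAD label/connectivity data into per-component frames:

```python
# Toy frame-based model built from a hypothetical CAD netlist export.
from dataclasses import dataclass, field

@dataclass
class Frame:                       # one frame per drawn component
    label: str
    component_type: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

netlist = [("V1", "R1"), ("R1", "GND")]            # (from, to) connections
types = {"V1": "source", "R1": "resistor", "GND": "ground"}

frames = {name: Frame(name, t) for name, t in types.items()}
for src, dst in netlist:                           # fill connectivity slots
    frames[src].outputs.append(dst)
    frames[dst].inputs.append(src)

print(frames["R1"])  # Frame(label='R1', component_type='resistor', inputs=['V1'], outputs=['GND'])
```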

  20. Application and evaluation of automated methods to extract neuroanatomical connectivity statements from free text.

    PubMed

    French, Leon; Lane, Suzanne; Xu, Lydia; Siu, Celia; Kwok, Cathy; Chen, Yiqi; Krebs, Claudia; Pavlidis, Paul

    2012-11-15

    Automated annotation of neuroanatomical connectivity statements from the neuroscience literature would enable accessible and large-scale connectivity resources. Unfortunately, the connectivity findings are not formally encoded and occur as natural language text. This hinders aggregation, indexing, searching and integration of the reports. We annotated a set of 1377 abstracts for connectivity relations to facilitate automated extraction of connectivity relationships from the neuroscience literature. We tested several baseline measures based on co-occurrence and lexical rules. We compared results from seven machine learning methods adapted from the protein interaction extraction domain that employ part-of-speech, dependency and syntax features. Co-occurrence based methods provided high recall with weak precision. The shallow linguistic kernel recalled 70.1% of the sentence-level connectivity statements at 50.3% precision. Owing to its speed and simplicity, we applied the shallow linguistic kernel to a large set of new abstracts. To evaluate the results, we compared 2688 extracted connections with the Brain Architecture Management System (an existing database of rat connectivity). The extracted connections were connected in the Brain Architecture Management System at a rate of 63.5%, compared with 51.1% for co-occurring brain region pairs. We found that precision increases with the recency and frequency of the extracted relationships. The source code, evaluations, documentation and other supplementary materials are available at http://www.chibi.ubc.ca/WhiteText. Contact: paul@chibi.ubc.ca. Supplementary data are available at Bioinformatics Online.
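
    The co-occurrence baseline evaluated here is straightforward to reproduce. A minimal sketch, assuming a pre-built brain-region lexicon (the region names below are toy stand-ins), also illustrates why it yields high recall but weak precision: every co-mention becomes a candidate relation.

```python
# Sentence-level co-occurrence baseline for connectivity candidates.
import itertools
import re

REGIONS = {"thalamus", "amygdala", "hippocampus", "prefrontal cortex"}

def cooccurrence_pairs(abstract: str):
    """Yield unordered brain-region pairs co-mentioned in one sentence."""
    for sentence in re.split(r"(?<=[.!?])\s+", abstract):
        found = {r for r in REGIONS if r in sentence.lower()}
        for a, b in itertools.combinations(sorted(found), 2):
            yield a, b, sentence

text = ("Projections from the thalamus terminate in the amygdala. "
        "The hippocampus was unaffected.")
for a, b, _ in cooccurrence_pairs(text):
    print(a, "<->", b)  # amygdala <-> thalamus: a candidate, correct or not
```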

  1. Extraction of polychlorinated biphenyls from soils by automated focused microwave-assisted Soxhlet extraction.

    PubMed

    Luque-García, J L; Luque de Castro, M D

    2003-05-23

    The application of a new focused microwave-assisted Soxhlet extractor for the extraction of polychlorinated biphenyls from differently aged soils is presented here. The new extractor overcomes the disadvantages of previous devices based on the same principle and enables a fully automated extraction of two samples simultaneously. The variables affecting the extraction step (namely, irradiation power, irradiation time, extractant volume, extractant composition and number of extraction cycles) have been optimized using experimental design methodology. The optimized method has also been applied to a certified reference material (CRM910-050, a "real" contaminated soil) for quality assurance validation. Quantification of the target compounds has been performed by GC with ion-trap MS. The mass spectrometer was operated in the electron-ionization mode, with selected-ion monitoring at m/z 152, 186, 292, 326 and 498. The results demonstrate that this approach is as efficient as conventional Soxhlet extraction but with a drastic reduction of both extraction time (70 min vs. 24 h for the "real" contaminated soil) and organic solvent disposal, as 75-80% of the extractant is recycled.

  2. Automated Dsm Extraction from Uav Images and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2015-08-01

    As technology evolves, unmanned aerial vehicle (UAV) imagery is being used for applications ranging from simple image acquisition to complicated 3D spatial information extraction. Spatial information is usually provided in the form of a DSM or point cloud, and it is important to generate very dense tie points automatically from stereo images. In this paper, we applied a stereo image matching technique developed for satellite/aerial images to UAV images, proposed processing steps for automated DSM generation, and analysed the feasibility of DSM generation. For DSM generation from UAV images, firstly, exterior orientation parameters (EOPs) for each dataset were adjusted. Secondly, optimum matching pairs were determined. Thirdly, stereo image matching was performed for each pair. The matching algorithm is based on grey-level correlation applied along epipolar lines. Finally, the matching results were merged into a single result and the final DSM was produced. The generated DSM was compared with a reference DSM from lidar. Overall accuracy was 1.5 m in NMAD. However, several problems have to be solved in the future, including obtaining precise EOPs and handling occlusion and image blurring. More effective interpolation techniques also need to be developed.
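
    The core matching step, grey-level correlation along epipolar lines, can be sketched compactly. The version below assumes rectified images (so an epipolar line is an image row) and uses normalized cross-correlation; the window size and search range are illustrative.

```python
# Hedged sketch: NCC matching along an epipolar line of rectified images.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_epipolar(left, right, row, col, half=5, search=64):
    """Column in `right` (same row) best matching a window at (row, col) in `left`."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = None, -1.0
    for c in range(max(half, col - search), min(right.shape[1] - half, col + search)):
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_col, best_score = c, score
    return best_col, best_score

left = np.random.rand(100, 200)
right = np.roll(left, 8, axis=1)                  # synthetic 8-pixel disparity
print(match_along_epipolar(left, right, 50, 60))  # -> (68, ~1.0)
```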

  3. Automated Extraction of Substance Use Information from Clinical Texts.

    PubMed

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, PropBank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance on a wide breadth of free-text clinical notes when augmented with linguistic resources and domain knowledge.
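
    The paper's method builds on dependency parses; as a deliberately simplified stand-in, the keyword sketch below flags candidate substance-use statements and a crude status attribute. All lexicons here are toy assumptions, far smaller than real clinical vocabularies.

```python
# Toy detector for substance-use statements with a rough status attribute.
import re

SUBSTANCES = {"alcohol": ["alcohol", "etoh", "beer", "wine"],
              "nicotine": ["smok", "tobacco", "cigarette"],
              "drug": ["cocaine", "marijuana", "heroin"]}
STATUS = {"denies": "negated", "quit": "past", "former": "past",
          "current": "current", "daily": "current"}

def detect(note: str):
    for clause in re.split(r"[;.]", note):     # clause-level matching
        s = clause.lower()
        for substance, cues in SUBSTANCES.items():
            if any(c in s for c in cues):
                status = next((v for k, v in STATUS.items() if k in s), "unknown")
                yield substance, status

print(list(detect("Patient denies alcohol use; former smoker, quit 2005.")))
# [('alcohol', 'negated'), ('nicotine', 'past')]
```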

  4. Multichannel Convolutional Neural Network for Biological Relation Extraction

    PubMed Central

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical records demands researchers' attention. Previous theoretical and practical work focused on traditional machine learning techniques, which are susceptible to the "vocabulary gap" and data sparseness, and whose feature extraction is hard to automate. To address these issues, we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) it obviates manual feature engineering through automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. On the DDI task, our system achieved an overall F-score of 70.2% on the DDIExtraction 2013 challenge dataset, compared with 67.0% for a standard linear SVM based system. On the PPI task, evaluated on the AIMed and BioInfer PPI corpora, our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score. PMID:28053977
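
    A hedged PyTorch sketch of the multichannel idea follows: several embedding "versions" of a sentence stacked as input channels, parallel convolutions of different widths, max-over-time pooling, and a linear relation classifier. Dimensions and the channel count are illustrative, not the paper's exact configuration.

```python
# Multichannel CNN for sentence-level relation classification (sketch).
import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    def __init__(self, n_channels=5, emb_dim=100, n_filters=100,
                 widths=(3, 4, 5), n_classes=5):
        super().__init__()
        # One convolution per filter width over n_channels embedding versions.
        self.convs = nn.ModuleList(
            nn.Conv2d(n_channels, n_filters, (w, emb_dim)) for w in widths)
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, x):             # x: (batch, n_channels, seq_len, emb_dim)
        pooled = []
        for conv in self.convs:
            h = torch.relu(conv(x)).squeeze(3)         # (batch, n_filters, L)
            pooled.append(torch.max(h, dim=2).values)  # max-over-time pooling
        return self.fc(torch.cat(pooled, dim=1))       # relation logits

model = MultiChannelCNN()
logits = model(torch.randn(2, 5, 50, 100))  # two sentences, 50 tokens each
```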

  5. Brain MAPS: an automated, accurate and robust brain extraction technique using a template library

    PubMed Central

    Leung, Kelvin K.; Barnes, Josephine; Modat, Marc; Ridgway, Gerard R.; Bartlett, Jonathan W.; Fox, Nick C.; Ourselin, Sébastien

    2011-01-01

    Whole brain extraction is an important pre-processing step in neuro-image analysis. Manual or semi-automated brain delineations are labour-intensive and thus not desirable in large studies, meaning that automated techniques are preferable. The accuracy and robustness of automated methods are crucial because human expertise may be required to correct any sub-optimal results, which can be very time consuming. We compared the accuracy of four automated brain extraction methods: Brain Extraction Tool (BET), Brain Surface Extractor (BSE), Hybrid Watershed Algorithm (HWA) and a Multi-Atlas Propagation and Segmentation (MAPS) technique we have previously developed for hippocampal segmentation. The four methods were applied to extract whole brains from 682 1.5T and 157 3T T1-weighted MR baseline images from the Alzheimer’s Disease Neuroimaging Initiative database. Semi-automated brain segmentations with manual editing and checking were used as the gold standard for comparison. The median Jaccard index of MAPS was higher than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests), and the 1st-99th centile range of the Jaccard index of MAPS was smaller than those of HWA, BET and BSE in 1.5T and 3T scans (p < 0.05, all tests). HWA and MAPS were found to be best at including all brain tissues (median false negative rate ≤ 0.010% for 1.5T scans and ≤ 0.019% for 3T scans, both methods). The median Jaccard index of MAPS was similar in 1.5T and 3T scans, whereas those of BET, BSE and HWA were higher in 1.5T scans than 3T scans (p < 0.05, all tests). We found that the diagnostic group had a small effect on the median Jaccard index of all four methods. In conclusion, MAPS had relatively high accuracy and low variability compared with HWA, BET and BSE in MR scans with and without atrophy. PMID:21195780
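
    The evaluation metric used throughout, the Jaccard index between an automated mask and the gold standard, is essentially one line of NumPy:

```python
# Jaccard index (intersection over union) of two binary segmentation masks.
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

a = np.zeros((4, 4), bool); a[:2] = True   # toy "automated" mask
b = np.zeros((4, 4), bool); b[:3] = True   # toy "gold standard" mask
print(jaccard(a, b))                       # 8 / 12 ≈ 0.667
```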

  6. Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery

    NASA Astrophysics Data System (ADS)

    Pozdin, Maksym A.; Skrinjar, Oskar

    2005-04-01

    This paper presents an automated algorithm for extraction of a Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids and, given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e. finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.
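
    One plausible first step for such an algorithm, locating dark, electrode-sized voids inside the brain, can be sketched with thresholding and connected-component filtering. The thresholds below are illustrative assumptions; the paper's actual algorithm is not reproduced here.

```python
# Hedged sketch: candidate electrode voids as dark connected components.
import numpy as np
from scipy.ndimage import label

def candidate_electrodes(volume, brain_mask, dark_thresh=100,
                         min_vox=20, max_vox=500):
    """Centroids of dark components of roughly electrode-like size (in voxels)."""
    voids = (volume < dark_thresh) & brain_mask
    labels, n = label(voids)
    centroids = []
    for i in range(1, n + 1):
        idx = np.argwhere(labels == i)
        if min_vox <= len(idx) <= max_vox:
            centroids.append(idx.mean(axis=0))  # (z, y, x) centroid
    return centroids
```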

  7. A COMPARISON OF AUTOMATED AND TRADITIONAL METHODS FOR THE EXTRACTION OF ARSENICALS FROM FISH

    EPA Science Inventory

    An automated extractor employing accelerated solvent extraction (ASE) has been compared with a traditional sonication method of extraction for the extraction of arsenicals from fish tissue. Four different species of fish and a standard reference material, DORM-2, were subjected t...

  8. AUTOMATED SOLID PHASE EXTRACTION GC/MS FOR ANALYSIS OF SEMIVOLATILES IN WATER AND SEDIMENTS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line sampl...

  10. An automated sample preparation for detection of 72 doping-related substances.

    PubMed

    Cuervo, Darío; Díaz-Rodríguez, Pablo; Muñoz-Guerra, Jesús

    2014-06-01

    Automation of sample preparation procedures in a doping control laboratory is of great interest due to the large number of samples that have to be analyzed, especially in large events where a high throughput protocol is required to process samples over 24 h. The automation of such protocols requires specific equipment capable of carrying out the diverse mechanical tasks required for accomplishing these analytical methodologies, which include pipetting, shaking, heating, or crimping. An automated sample preparation procedure for the determination of doping-related substances by gas chromatography-mass spectrometry (GC-MS) and gas chromatography-tandem mass spectrometry (GC-MS/MS) analysis, including enzymatic hydrolysis, liquid-phase extraction and derivatization steps, was developed by using an automated liquid handling system. This paper presents a description of the equipment, together with the validation data for 72 doping-related compounds including extraction efficiency, evaluation of carry-over, interferences, and robustness. Validation was approached as a comparison between the results obtained using the manual protocol and the transferred automated one. The described methodology can be applied for sample preparation in routine anti-doping analysis with high sample throughput and suitable performance.

  11. Use of conventional bioanalytical devices to automate DBS extractions in liquid-handling dispensing tips.

    PubMed

    Johnson, Casey Jl; Christianson, Chad D; Sheaff, Chrystal N; Laine, Derek F; Zimmer, Jennifer Sd; Needham, Shane R

    2011-10-01

    Conventional liquid-handling devices were employed, along with an improved punching device, to semi-automate dried blood spot (DBS) extraction of alprazolam, α-hydroxyalprazolam and midazolam from human whole blood. Liquid-handling devices were used to add internal standard to the DBS cards and to extract the analytes from the DBS, in order to be analyzed by HPLC-MS/MS. The technique was shown to be accurate (±12.0%) and precise (10.3%) across the dynamic range of the assay. The semi-automated extraction reduced sample preparation time by more than 50% when compared with more conventional DBS manual extraction methods.

  12. Application and evaluation of automated methods to extract neuroanatomical connectivity statements from free text

    PubMed Central

    Pavlidis, Paul

    2012-01-01

    Motivation: Automated annotation of neuroanatomical connectivity statements from the neuroscience literature would enable accessible and large-scale connectivity resources. Unfortunately, the connectivity findings are not formally encoded and occur as natural language text. This hinders aggregation, indexing, searching and integration of the reports. We annotated a set of 1377 abstracts for connectivity relations to facilitate automated extraction of connectivity relationships from the neuroscience literature. We tested several baseline measures based on co-occurrence and lexical rules. We compared results from seven machine learning methods adapted from the protein interaction extraction domain that employ part-of-speech, dependency and syntax features. Results: Co-occurrence based methods provided high recall with weak precision. The shallow linguistic kernel recalled 70.1% of the sentence-level connectivity statements at 50.3% precision. Owing to its speed and simplicity, we applied the shallow linguistic kernel to a large set of new abstracts. To evaluate the results, we compared 2688 extracted connections with the Brain Architecture Management System (an existing database of rat connectivity). The extracted connections were connected in the Brain Architecture Management System at a rate of 63.5%, compared with 51.1% for co-occurring brain region pairs. We found that precision increases with the recency and frequency of the extracted relationships. Availability and implementation: The source code, evaluations, documentation and other supplementary materials are available at http://www.chibi.ubc.ca/WhiteText. Contact: paul@chibi.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:22954628

  13. Towards automated support for extraction of reusable components

    NASA Technical Reports Server (NTRS)

    Abd-El-hafiz, S. K.; Basili, Victor R.; Caldiera, Gianluigi

    1992-01-01

    A cost-effective introduction of software reuse techniques requires reusing existing software that, in many cases, was developed without reusability in mind. This paper discusses the problems related to analyzing and reengineering existing software in order to reuse it. We introduce a process model for component extraction and focus on the problem of analyzing and qualifying software components that are candidates for reuse. A prototype tool for supporting the extraction of reusable components is presented. One component of this tool aids in understanding programs and is based on the functional model of correctness. It can assist software engineers in finding correct formal specifications for programs. A detailed description of this component and an example demonstrating a possible operational scenario are given.

  14. Automated extraction and variability analysis of sulcal neuroanatomy.

    PubMed

    Le Goualher, G; Procyk, E; Collins, D L; Venugopal, R; Barillot, C; Evans, A C

    1999-03-01

    Systematic mapping of the variability in cortical sulcal anatomy is an area of increasing interest which presents numerous methodological challenges. To address these issues, we have implemented sulcal extraction and assisted labeling (SEAL) to automatically extract the two-dimensional (2-D) surface ribbons that represent the median axis of cerebral sulci and to neuroanatomically label these entities. To encode the extracted three-dimensional (3-D) cortical sulcal schematic topography (CSST) we define a relational graph structure composed of two main features: vertices (representing sulci) and arcs (representing the relationships between sulci). Vertices contain a parametric representation of the surface ribbon buried within the sulcus. Points on this surface are expressed in stereotaxic coordinates (i.e., with respect to a standardized brain coordinate system). For each of these vertices, we store length, depth, and orientation as well as anatomical attributes (e.g., hemisphere, lobe, sulcus type, etc.). Each arc stores the 3-D location of the junction between sulci as well as a list of its connecting sulci. Sulcal labeling is performed semiautomatically by selecting a sulcal entity in the CSST and selecting from a menu of candidate sulcus names. In order to help the user in the labeling task, the menu is restricted to the most likely candidates by using priors for the expected sulcal spatial distribution. These priors, i.e., sulcal probabilistic maps, were created from the spatial distribution of 34 sulci traced manually on 36 different subjects. Given these spatial probability maps, the user is provided with the likelihood that the selected entity belongs to a particular sulcus. The cortical structure representation obtained by SEAL is suitable to extract statistical information about both the spatial and the structural composition of the cerebral cortical topography. This methodology allows for the iterative construction of a successively more complete
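
    The relational graph structure described here maps naturally onto a property graph. A hedged networkx sketch follows; the attribute values are invented for illustration.

```python
# CSST-style relational graph: sulci as vertices, junctions as arcs.
import networkx as nx

csst = nx.Graph()
csst.add_node("central_sulcus", hemisphere="left", length_mm=92.0,
              depth_mm=18.5, orientation_deg=70.0)
csst.add_node("postcentral_sulcus", hemisphere="left", length_mm=74.0,
              depth_mm=14.1, orientation_deg=65.0)
# Arc attribute: the 3-D stereotaxic location where the two sulci meet.
csst.add_edge("central_sulcus", "postcentral_sulcus",
              junction_xyz=(-38.0, -22.0, 54.0))

print(csst.nodes["central_sulcus"]["depth_mm"])       # vertex attribute
print(csst["central_sulcus"]["postcentral_sulcus"])   # arc attributes
```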

  15. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  17. Comparison of manual and automated nucleic acid extraction from whole-blood samples.

    PubMed

    Riemann, Kathrin; Adamzik, Michael; Frauenrath, Stefan; Egensperger, Rupert; Schmid, Kurt W; Brockmeyer, Norbert H; Siffert, Winfried

    2007-01-01

    Nucleic acid extraction and purification from whole blood is a routine application in many laboratories. Automation of this procedure promises standardized sample treatment, a low error rate, and avoidance of contamination. The performance of the BioRobot M48 (Qiagen) and the manual QIAamp DNA Blood Mini Kit (Qiagen) was compared for the extraction of DNA from whole blood. The concentration and purity of the extracted DNAs were determined by spectrophotometry. Analytical sensitivity was assessed by common PCR and genotyping techniques. The quantity and quality of the generated DNAs were slightly higher using the manual extraction method. The results of downstream applications were comparable to each other. Amplification of high-molecular-weight PCR fragments, genotyping by restriction digest, and pyrosequencing were successful for all samples. No cross-contamination could be detected. While automated DNA extraction requires significantly less hands-on time, it is slightly more expensive than the manual extraction method.

  18. Disposable and removable nucleic acid extraction and purification cartridges for automated flow-through systems

    DOEpatents

    Regan, John Frederick

    2014-09-09

    Removable cartridges are used on automated flow-through systems for the purpose of extracting and purifying genetic material from complex matrices. Different types of cartridges are paired with specific automated protocols to concentrate, extract, and purify pathogenic or human genetic material. Their flow-through nature allows large quantities of sample to be processed. Matrices may be filtered using size exclusion and/or affinity filters to concentrate the pathogen of interest. Lysed material is ultimately passed through a filter to remove the insoluble material before the soluble genetic material is delivered past a silica-like membrane that binds the genetic material, where it is washed, dried, and eluted. Cartridges are inserted into the housing areas of flow-through automated instruments, which are equipped with sensors to ensure proper placement and usage of the cartridges. Properly inserted cartridges create fluid- and air-tight seals with the flow lines of an automated instrument.

  19. Automating Nuclear-Safety-Related SQA Procedures with Custom Applications

    SciTech Connect

    Freels, James D.

    2016-01-01

    Nuclear safety-related procedures are rigorous for good reason. Small design mistakes can quickly turn into unwanted failures. Researchers at Oak Ridge National Laboratory worked with COMSOL to define a simulation app that automates the software quality assurance (SQA) verification process and provides results in less than 24 hours.

  1. Evaluation of four automated protocols for extraction of DNA from FTA cards.

    PubMed

    Stangegaard, Michael; Børsting, Claus; Ferrero-Miliani, Laura; Frank-Hansen, Rune; Poulsen, Lena; Hansen, Anders J; Morling, Niels

    2013-10-01

    Extraction of DNA using magnetic bead-based techniques on automated DNA extraction instruments provides a fast, reliable, and reproducible method for DNA extraction from various matrices. Here, we have compared the yield and quality of DNA extracted from FTA cards using four automated extraction protocols on three different instruments. The extraction processes were repeated up to six times with the same pieces of FTA cards. The sample material on the FTA cards was either blood or buccal cells. With the QIAamp DNA Investigator and QIAsymphony DNA Investigator kits, it was possible to extract DNA from the FTA cards in all six rounds of extractions in sufficient amount and quality to obtain complete short tandem repeat (STR) profiles on a QIAcube and a QIAsymphony SP. With the PrepFiler Express kit, almost all the extractable DNA was extracted in the first two rounds of extractions. Furthermore, we demonstrated that it was possible to successfully extract sufficient DNA for STR profiling from previously processed FTA card pieces that had been stored at 4 °C for up to 1 year. This showed that rare or precious FTA card samples may be saved for future analyses even though some DNA was already extracted from the FTA cards.

  2. Extracting causal relations on HIV drug resistance from literature

    PubMed Central

    2010-01-01

    Background In HIV treatment it is critical to have up-to-date resistance data for applicable drugs, since HIV has a very high rate of mutation. These data are made available through scientific publications and must be extracted manually by experts in order to be used by virologists and medical doctors. There is therefore an urgent need for a tool that partially automates this process and is able to retrieve relations between drugs and virus mutations from the literature. Results In this work we present a novel method to extract and combine relationships between HIV drugs and mutations in viral genomes. Our extraction method is based on natural language processing (NLP), which produces grammatical relations, and applies a set of rules to these relations. We applied our method to a relevant set of PubMed abstracts and obtained 2,434 extracted relations with an estimated F-score of 84%. We then combined the extracted relations using logistic regression to generate resistance values for each pair. The results of this relation combination show more than 85% agreement with the Stanford HIVDB for the ten most frequently occurring mutations. The system is used in five hospitals in the Virolab project (http://www.virolab.org) to preselect the most relevant novel resistance data from the literature and present them to virologists and medical doctors for further evaluation. Conclusions The proposed relation extraction and combination method performs well on extracting HIV drug resistance data and can be used in large-scale relation extraction experiments. The developed methods can also be applied to extract other types of relations, such as gene-protein, gene-disease, and disease-mutation. PMID:20178611
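
    In the spirit of the rule-based step described above (though far simpler than the paper's grammar-based method), the sketch below pairs regex-detected point mutations with drug names co-mentioned in the same sentence. The drug lexicon and the "resistance" cue are toy assumptions.

```python
# Toy drug-mutation pair extractor with a crude resistance cue.
import re

DRUGS = {"lamivudine", "zidovudine", "tenofovir"}
MUTATION = re.compile(r"\b[A-Z]\d{1,3}[A-Z]\b")   # e.g. M184V, K65R

def drug_mutation_pairs(text: str):
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        muts = MUTATION.findall(sentence)
        drugs = [d for d in DRUGS if d in sentence.lower()]
        if muts and drugs and "resistan" in sentence.lower():
            for d in drugs:
                for m in muts:
                    yield d, m

text = "The M184V mutation confers high-level resistance to lamivudine."
print(list(drug_mutation_pairs(text)))  # [('lamivudine', 'M184V')]
```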

  3. Automated serial extraction of DNA and RNA from biobanked tissue specimens.

    PubMed

    Mathot, Lucy; Wallin, Monica; Sjöblom, Tobias

    2013-08-19

    With increasing biobanking of biological samples, methods for large-scale extraction of nucleic acids are in demand. The lack of such techniques designed for extraction from tissues results in a bottleneck in downstream genetic analyses, particularly in the field of cancer research. We have developed an automated procedure for tissue homogenization and extraction of DNA and RNA into separate fractions from the same frozen tissue specimen. A purpose-developed, magnetic-bead-based technology for serially extracting both DNA and RNA from tissues was automated on a Tecan Freedom Evo robotic workstation. 864 fresh-frozen human normal and tumor tissue samples from breast and colon were serially extracted in batches of 96 samples. Yields and quality of DNA and RNA were determined. The DNA was evaluated in several downstream analyses, and the stability of RNA was determined after 9 months of storage. The extracted DNA performed consistently well in processes including PCR-based STR analysis, HaloPlex selection and deep sequencing on an Illumina platform, and gene copy number analysis using microarrays. The RNA has performed well in RT-PCR analyses and maintains integrity upon storage. The technology described here enables the processing of many tissue samples simultaneously with a high quality product and a time and cost reduction for the user. This reduces the sample preparation bottleneck in cancer research. The open automation format also enables integration with upstream and downstream devices for automated sample quantitation or storage.

  4. Automated serial extraction of DNA and RNA from biobanked tissue specimens

    PubMed Central

    2013-01-01

    Background With increasing biobanking of biological samples, methods for large-scale extraction of nucleic acids are in demand. The lack of such techniques designed for extraction from tissues results in a bottleneck in downstream genetic analyses, particularly in the field of cancer research. We have developed an automated procedure for tissue homogenization and extraction of DNA and RNA into separate fractions from the same frozen tissue specimen. A purpose-developed, magnetic-bead-based technology for serially extracting both DNA and RNA from tissues was automated on a Tecan Freedom Evo robotic workstation. Results 864 fresh-frozen human normal and tumor tissue samples from breast and colon were serially extracted in batches of 96 samples. Yields and quality of DNA and RNA were determined. The DNA was evaluated in several downstream analyses, and the stability of RNA was determined after 9 months of storage. The extracted DNA performed consistently well in processes including PCR-based STR analysis, HaloPlex selection and deep sequencing on an Illumina platform, and gene copy number analysis using microarrays. The RNA has performed well in RT-PCR analyses and maintains integrity upon storage. Conclusions The technology described here enables the processing of many tissue samples simultaneously with a high quality product and a time and cost reduction for the user. This reduces the sample preparation bottleneck in cancer research. The open automation format also enables integration with upstream and downstream devices for automated sample quantitation or storage. PMID:23957867

  5. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method.

  6. Automated multisyringe stir bar sorptive extraction using robust montmorillonite/epoxy-coated stir bars.

    PubMed

    Ghani, Milad; Saraji, Mohammad; Maya, Fernando; Cerdà, Víctor

    2016-05-06

    Herein we present a simple, rapid and low cost strategy for the preparation of robust stir bar coatings based on the combination of montmorillonite with epoxy resin. The composite stir bar was implemented in a novel automated multisyringe stir bar sorptive extraction system (MS-SBSE), and applied to the extraction of four chlorophenols (4-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol and pentachlorophenol) as model compounds, followed by high performance liquid chromatography-diode array detection. The different experimental parameters of the MS-SBSE, such as sample volume, selection of the desorption solvent, desorption volume, desorption time, sample solution pH, salt effect and extraction time were studied. Under the optimum conditions, the detection limits were between 0.02 and 0.34 μg L(-1). Relative standard deviations (RSD) of the method for the analytes at the 10 μg L(-1) concentration level ranged from 3.5% to 4.1% (as intra-day RSD) and from 3.9% to 4.3% (as inter-day RSD at the 50 μg L(-1) concentration level). Batch-to-batch reproducibility for three different stir bars was 4.6-5.1%. The enrichment factors were between 30 and 49. In order to investigate the capability of the developed technique for real sample analysis, well water, wastewater and leachates from a solid waste treatment plant were satisfactorily analyzed.

  7. Automated microfluidic DNA/RNA extraction with both disposable and reusable components

    NASA Astrophysics Data System (ADS)

    Kim, Jungkyu; Johnson, Michael; Hill, Parker; Sonkul, Rahul S.; Kim, Jongwon; Gale, Bruce K.

    2012-01-01

    An automated microfluidic nucleic acid extraction system was fabricated with a multilayer polydimethylsiloxane (PDMS) structure that consists of sample wells, microvalves, a micropump and a disposable microfluidic silica cartridge. Both the microvalve and micropump structures were fabricated in a single layer and are operated pneumatically using a 100 µm PDMS membrane. To fabricate the disposable microfluidic silica cartridge, two-cavity structures were made in a PDMS replica to fit the stacked silica membranes. A handheld controller for the microvalves and pumps was developed to enable system automation. With purified ribonucleic acid (RNA), whole blood and E. coli samples, the automated microfluidic nucleic acid extraction system was validated with a guanidine-based solid phase extraction procedure. An extraction efficiency of ~90% for deoxyribonucleic acid (DNA) and ~54% for RNA was obtained in 12 min from whole blood and E. coli samples, respectively. In addition, the quantity and quality of the extracted DNA were confirmed by polymerase chain reaction (PCR) amplification, which showed the appropriate amplification and melting profiles. Automated, programmable fluid control and physical separation of the reusable components and the disposable components significantly decrease the assay time and manufacturing cost and increase the flexibility and compatibility of the system with downstream components.

  8. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
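
    A hedged scikit-learn sketch of the extraction/selection plus classification stages follows: PCA features feeding an SVM. The data here is random and the component count is illustrative.

```python
# PCA feature extraction feeding an SVM classifier (illustrative pipeline).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 32))   # 200 flattened ROI chips (toy data)
y = rng.integers(0, 2, size=200)      # target / clutter labels

clf = make_pipeline(PCA(n_components=25), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```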

  10. Biomedical Relation Extraction: From Binary to Complex

    PubMed Central

    Zhong, Dayou

    2014-01-01

    Biomedical relation extraction aims to uncover high-quality relations from life science literature with high accuracy and efficiency. Early biomedical relation extraction tasks focused on capturing binary relations, such as protein-protein interactions, which are crucial for virtually every process in a living cell. Information about these interactions provides the foundations for new therapeutic approaches. In recent years, interest has shifted to the extraction of complex relations such as biomolecular events. While complex relations go beyond binary relations and involve more than two arguments, they may also take another relation as an argument. In this paper, we conduct a thorough survey of the research in biomedical relation extraction. We first present a general framework for biomedical relation extraction and then discuss the approaches proposed for binary and complex relation extraction, with a focus on the latter since it is a much more difficult task than binary relation extraction. Finally, we discuss the challenges of complex relation extraction and outline possible solutions and future directions. PMID:25214883

  11. Automation and Other Extensions of the SMAC Modal Parameter Extraction Package

    SciTech Connect

    KLENKE,SCOTT E.; MAYES,RANDALL L.

    1999-11-01

    As model validation techniques gain more acceptance and increase in power, the demands on modal parameter extraction increase. Estimation accuracy, the number of modes desired, and data reduction efficiency are all required features. An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. SMAC has now been extended in two main areas. First, it has been automated. Second, it has been extended to fit complex modes as well as real modes. These extensions have enhanced the power of modal extraction so that, typically, the analyst needs to manually fit only 10 percent of the modes in the desired bandwidth, whereas the automated routines fit the remaining 90 percent. SMAC could be successfully automated because it generally does not produce computational roots.

  12. Automated Protein Biomarker Analysis: on-line extraction of clinical samples by Molecularly Imprinted Polymers

    PubMed Central

    Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A.G.; Sellergren, Börje; Reubsaet, Léon

    2017-01-01

    Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented which provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination using ProGastrin Releasing Peptide (ProGRP), a highly sensitive biomarker for Small Cell Lung Cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled analysis of patient serum samples with elevated ProGRP levels. Particularly low sample volumes were permitted using the automated extraction within a method which was time-efficient, thereby demonstrating the potential of such a strategy in a clinical setting. PMID:28303910

  13. Automated Protein Biomarker Analysis: on-line extraction of clinical samples by Molecularly Imprinted Polymers

    NASA Astrophysics Data System (ADS)

    Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A. G.; Sellergren, Börje; Reubsaet, Léon

    2017-03-01

    Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented which provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination using ProGastrin Releasing Peptide (ProGRP), a highly sensitive biomarker for Small Cell Lung Cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled analysis of patient serum samples with elevated ProGRP levels. Particularly low sample volumes were permitted using the automated extraction within a method which was time-efficient, thereby demonstrating the potential of such a strategy in a clinical setting.

  14. Artificial intelligence issues related to automated computing operations

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1989-01-01

    Large data processing installations represent target systems for effective applications of artificial intelligence (AI) constructs. The system organization of a large data processing facility at the NASA Marshall Space Flight Center is presented. The methodology and the issues which are related to AI application to automated operations within a large-scale computing facility are described. Problems to be addressed and initial goals are outlined.

  15. Discovering Indicators of Successful Collaboration Using Tense: Automated Extraction of Patterns in Discourse

    ERIC Educational Resources Information Center

    Thompson, Kate; Kennedy-Clark, Shannon; Wheeler, Penny; Kelly, Nick

    2014-01-01

    This paper describes a technique for locating indicators of success within the data collected from complex learning environments, proposing an application of e-research to access learner processes and measure and track group progress. The technique combines automated extraction of tense and modality via parts-of-speech tagging with a visualisation…
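
    The extraction step named here, tagging parts of speech to recover tense and modality markers, is simple to sketch. The fragment below uses NLTK as a stand-in tagger (the record does not specify one) and toy tag groupings.

```python
# Hedged sketch: tense and modality markers via part-of-speech tagging.
# Requires NLTK's "punkt" and "averaged_perceptron_tagger" data packages.
from nltk import pos_tag, word_tokenize

def tense_modality(utterance: str):
    tags = pos_tag(word_tokenize(utterance))
    return {"past":    [w for w, t in tags if t in ("VBD", "VBN")],
            "present": [w for w, t in tags if t in ("VBP", "VBZ", "VBG")],
            "modal":   [w for w, t in tags if t == "MD"]}

print(tense_modality("We should test what we built before we present it."))
```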

  17. Plasma free metanephrine measurement using automated online solid-phase extraction HPLC tandem mass spectrometry.

    PubMed

    de Jong, Wilhelmina H A; Graham, Kendon S; van der Molen, Jan C; Links, Thera P; Morris, Michael R; Ross, H Alec; de Vries, Elisabeth G E; Kema, Ido P

    2007-09-01

    Quantification of plasma free metanephrine (MN) and normetanephrine (NMN) is considered to be the most accurate test for the clinical chemical diagnosis of pheochromocytoma and follow-up of pheochromocytoma patients. Current methods involve laborious, time-consuming, offline sample preparation, coupled with relatively nonspecific detection. Our aim was to develop a rapid, sensitive, and highly selective automated method for plasma free MNs in the nanomole per liter range. We used online solid-phase extraction coupled with HPLC-tandem mass spectrometric detection (XLC-MS/MS). Fifty microliters of plasma equivalent was prepurified by automated online solid-phase extraction, using weak cation exchange cartridges. Chromatographic separation of the analytes and deuterated analogs was achieved by hydrophilic interaction chromatography. Mass spectrometric detection was performed in the multiple reaction monitoring mode using a quadrupole tandem mass spectrometer in positive electrospray ionization mode. Total run-time including sample cleanup was 8 min. Intra- and interassay analytical variation (CV) varied from 2.0% to 4.7% and 1.6% to 13.5%, respectively, whereas biological intra- and interday variation ranged from 9.4% to 45.0% and 8.4% to 23.2%. Linearity in the 0 to 20 nmol/L calibration range was excellent (R(2) > 0.99). For all compounds, recoveries ranged from 74.5% to 99.6%, and detection limits were <0.10 nmol/L. Reference intervals for 120 healthy adults were 0.07 to 0.33 nmol/L (MN), 0.23 to 1.07 nmol/L (NMN), and <0.17 nmol/L (3-methoxytyramine). This automated high-throughput XLC-MS/MS method for the measurement of plasma free MNs is precise and linear, with short analysis time and low variable costs. The method is attractive for routine diagnosis of pheochromocytoma because of its high analytical sensitivity, the analytical power of MS/MS, and the high diagnostic accuracy of free MNs.

  18. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    Quantitative measurement of the volume of accumulated pleural effusion in three-dimensional thoracic CT images is important for the diagnosis of pulmonary diseases. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level threshold cannot separate pleural effusion from the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to theirs. We have therefore developed an automated extraction method for pleural effusion that extracts the lung area together with the effusion. Our method uses a lung template obtained from a normal lung to segment lungs with pleural effusions. The registration process consists of two steps. The first step is a global matching between the normal and abnormal lungs based on organs such as the bronchi, bones (ribs, sternum and vertebrae) and the upper surface of the liver, which are extracted using a region-growing algorithm. The second step is a local matching between the normal lung and the abnormal lung deformed with the parameters obtained from the global matching. Finally, we segment the lung with pleural effusion using the template deformed with the parameters obtained from the global and local matching steps. We compared our method with a conventional gray-level threshold method and two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those obtained with the other methods. Automated extraction of pleural effusion by extracting the lung area together with the effusion is promising for the diagnosis of pulmonary diseases, as it provides a quantitative volume of the accumulated effusion.

  19. Data Mining: The Art of Automated Knowledge Extraction

    NASA Astrophysics Data System (ADS)

    Karimabadi, H.; Sipes, T.

    2012-12-01

    Data mining algorithms are used routinely in a wide variety of fields and are gaining adoption in the sciences. The realities of real-world data analysis are that (a) data have flaws, and (b) the models and assumptions that we bring to the data are inevitably flawed, biased, or misspecified in some way. Data mining can improve data analysis by detecting anomalies in the data, checking the consistency of the user's model assumptions, and deciphering complex patterns and relationships that would not be accessible otherwise. The common form of data collected from in situ spacecraft measurements is the multivariate time series, which represents one of the most challenging problems in data mining. We have successfully developed algorithms to deal with such data and have extended the algorithms to handle streaming data. In this talk, we illustrate the utility of our algorithms through several examples, including automated detection of reconnection exhausts in the solar wind and flux ropes in the magnetotail. We also show examples from successful applications of our technique to the analysis of 3D kinetic simulations. With an eye to the future, we provide an overview of our upcoming plans, which include collaborative data mining, expert outsourcing data mining, and computer vision for image analysis, among others. Finally, we discuss the integration of data mining algorithms with web-based services such as VxOs and other Heliophysics data centers and the resulting capabilities that it would enable.
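
    As a generic illustration of the anomaly-detection step mentioned above (a simple stand-in, not the authors' algorithms), a rolling z-score flags samples that deviate strongly from a trailing window of a multivariate time series:

```python
# Rolling z-score anomaly flags for a (time, variables) series.
import numpy as np

def anomalies(ts: np.ndarray, window: int = 50, z_thresh: float = 4.0):
    flags = np.zeros(len(ts), dtype=bool)
    for t in range(window, len(ts)):
        w = ts[t - window:t]
        z = np.abs(ts[t] - w.mean(axis=0)) / (w.std(axis=0) + 1e-9)
        flags[t] = bool((z > z_thresh).any())  # any variable far off its recent mean
    return flags

ts = np.random.randn(1000, 3)
ts[700] += 10.0                       # inject a spike in all three variables
print(np.flatnonzero(anomalies(ts)))  # expected: [700]
```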

  20. Multispectral Image Road Extraction Based Upon Automated Map Conflation

    NASA Astrophysics Data System (ADS)

    Chen, Bin

    Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantages of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on the road feature generation of a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called anisotropy-tunable distance (ATD

  1. Automation of solid-phase microextraction-gas chromatography-mass spectrometry extraction of eucalyptus volatiles.

    PubMed

    Zini, Cláudia A; Lord, Heather; Christensen, Eva; de Assis, Teotônio F; Caramão, Elina B; Pawliszyn, Janusz

    2002-03-01

    Solid-phase microextraction (SPME) coupled with gas chromatography (GC)-ion-trap mass spectrometry (ITMS) is employed to analyze fragrance compounds from different species of eucalyptus trees: Eucalyptus dunnii, Eucalyptus saligna, Eucalyptus grandis, and hybrids of other species. The analyses are performed using an automated system for preincubation, extraction, injection, and analysis of samples. The autosampler used is a CombiPAL and has much flexibility for the development of SPME methods and accommodates a variety of vial sizes. For automated fragrance analysis the 10- and 20-mL vials are the most appropriate. The chromatographic separation and identification of the analytes are performed with a Varian Saturn 4D GC-ITMS using an HP-5MS capillary column. Several compounds of eucalyptus volatiles are identified, with good reproducibility for both the peak areas and retention times. Equilibrium extraction provides maximal sensitivity but requires additional consideration for the effect of carryover. Preequilibrium extraction allows good sensitivity with minimal carryover.

  2. Highly efficient automated extraction of DNA from old and contemporary skeletal remains.

    PubMed

    Zupanič Pajnič, Irena; Debska, Magdalena; Gornjak Pogorelc, Barbara; Vodopivec Mohorčič, Katja; Balažic, Jože; Zupanc, Tomaž; Štefanič, Borut; Geršak, Ksenija

    2016-01-01

    We optimised the automated extraction of DNA from old and contemporary skeletal remains using the AutoMate Express system and the PrepFiler BTA kit. 24 contemporary skeletal remains and 25 old skeletal remains from WWII were analysed. For each skeleton, extraction using only 0.05 g of powder was performed according to the manufacturer's recommendations (no demineralisation - the ND method). Since only 32% of full profiles were obtained from aged casework skeletons and 58% from contemporary ones, the extraction protocol was modified to acquire higher-quality DNA, and genomic DNA was obtained after full demineralisation (the FD method). The nuclear DNA of the samples was quantified using the Investigator Quantiplex kit and STR typing was performed using the NGM kit to evaluate the performance of the tested extraction methods. In the aged DNA samples, 64% of full profiles were obtained using the FD method. For the contemporary skeletal remains, the performance of the ND method was closer to that of the FD method than for the old skeletons, giving 58% of full profiles with the ND method and 71% with the FD method. The extraction of DNA from only 0.05 g of bone or tooth powder using the AutoMate Express has proven highly successful in the recovery of DNA from old and contemporary skeletons, especially with the modified FD method. We believe that these results will contribute to the use of automated devices for extracting DNA from skeletal remains, shortening the procedures for obtaining high-quality DNA from skeletons in forensic laboratories.

  3. Automated motif extraction and classification in RNA tertiary structures

    PubMed Central

    Djelloul, Mahassine; Denise, Alain

    2008-01-01

    We used a novel graph-based approach to extract RNA tertiary motifs. We cataloged them all and clustered them using an innovative graph similarity measure. We applied our method to three widely studied structures: Haloarcula marismortui 50S (H.m 50S), Escherichia coli 50S (E. coli 50S), and Thermus thermophilus 16S (T.th 16S) RNAs. We identified 10 known motifs without any prior knowledge of their shapes or positions. We additionally identified four putative new motifs. PMID:18957493
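
    The graph view used here can be illustrated with networkx: an RNA motif becomes a graph of nucleotides and typed interactions, and networkx's generic graph edit distance stands in for the paper's custom similarity measure.

```python
# Toy RNA motif graphs compared by edit distance over interaction types.
import networkx as nx

def motif_graph(pairs):
    """Build a motif graph from (nt_i, nt_j, interaction_type) triples."""
    g = nx.Graph()
    for i, j, kind in pairs:
        g.add_edge(i, j, kind=kind)
    return g

a = motif_graph([(1, 2, "WC"), (2, 3, "stack"), (3, 4, "non-WC")])
b = motif_graph([(1, 2, "WC"), (2, 3, "stack"), (3, 4, "WC")])

same_kind = lambda e1, e2: e1["kind"] == e2["kind"]
print(nx.graph_edit_distance(a, b, edge_match=same_kind))  # nonzero: one interaction differs
```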

  4. Automated extraction and semantic analysis of mutation impacts from the biomedical literature.

    PubMed

    Naderi, Nona; Witte, René

    2012-06-18

    We present the first comprehensive, fully open-source approach to automatically extract impacts and related relevant information from the biomedical literature. We assessed the performance of our work on manually annotated corpora, and the results show the reliability of our approach. The representation of the extracted information in a structured format facilitates knowledge management and aids in database curation and correction. Furthermore, access to the analysis results is provided through multiple interfaces, including web services for automated data integration and desktop-based solutions for end-user interactions.
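
    As an illustration of the extraction step, a minimal sketch that recognizes protein point-mutation mentions with a regular expression and returns them in structured form. The pattern and example sentence are illustrative, not the system's actual grammar:

    ```python
    import re
    from collections import namedtuple

    Mutation = namedtuple("Mutation", "wild_type position mutant")

    # Single-letter amino-acid point mutations such as "V600E". The pattern is
    # a simplification; real systems also handle forms like "p.Val600Glu".
    AA = "ACDEFGHIKLMNPQRSTVWY"
    POINT_MUTATION = re.compile(rf"\b([{AA}])(\d+)([{AA}])\b")

    def extract_mutations(sentence):
        """Return structured point-mutation mentions found in a sentence."""
        return [Mutation(wt, int(pos), mt)
                for wt, pos, mt in POINT_MUTATION.findall(sentence)]

    text = "The V600E substitution increased kinase activity, unlike T790M."
    print(extract_mutations(text))
    # [Mutation(wild_type='V', position=600, mutant='E'),
    #  Mutation(wild_type='T', position=790, mutant='M')]
    ```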

  5. Fully Automated Electro Membrane Extraction Autosampler for LC-MS Systems Allowing Soft Extractions for High-Throughput Applications.

    PubMed

    Fuchs, David; Pedersen-Bjergaard, Stig; Jensen, Henrik; Rand, Kasper D; Honoré Hansen, Steen; Petersen, Nickolaj Jacob

    2016-07-05

    The current work describes the implementation of electro membrane extraction (EME) into an autosampler for high-throughput analysis of samples by EME-LC-MS. The extraction probe was built into a luer lock adapter connected to an HTC PAL autosampler syringe. As the autosampler drew sample solution, analytes were extracted into the lumen of the extraction probe and transferred to an LC-MS system for further analysis. Various parameters affecting extraction efficacy were investigated, including syringe fill strokes, syringe pull-up volume, pull-up delay, and volume in the sample vial. The system was optimized for soft extraction of analytes and high sample throughput. Further, it was demonstrated that by flushing the EME syringe with acidic wash buffer and reversing the applied electric potential, carry-over between samples can be reduced to below 1%. Performance of the system was characterized (RSD, <10%; R², 0.994), and finally the EME autosampler was used to analyze the in vitro conversion of methadone into its main metabolite by rat liver microsomes and to demonstrate the potential of known CYP3A4 inhibitors to prevent metabolism of methadone. By making use of the high extraction speed of EME, a complete analytical workflow of purification, separation, and analysis of a sample could be achieved within only 5.5 min. With the developed system, large sequences of samples could be analyzed in a completely automated manner. This high degree of automation makes the developed EME autosampler a powerful tool for a wide range of applications where high-throughput extractions are required before sample analysis.

  6. Automated Detection and Extraction of Coronal Dimmings from SDO/AIA Data

    NASA Astrophysics Data System (ADS)

    Wills-Davey, Meredith; Attrill, G. D. R.

    2009-05-01

    The sheer volume of data anticipated from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) highlights the necessity for the development of automatic detection methods for various types of solar activity. Initially recognized in the 1970s, it is now well established that coronal dimmings are closely associated with coronal mass ejections (CMEs), and are particularly recognized as a reliable indicator of front-side (halo) CMEs, which can be difficult to detect in white-light coronagraph data. Existing work demonstrates that (i) estimates of the dimming volume can be related to the CME mass, (ii) the spatial extent of coronal dimmings gives information regarding the angular extent of the associated CME, (iii) measurement of the magnetic flux in dimming regions can be compared to that contained in modeled magnetic clouds, (iv) the evolution of coronal dimmings gives information about the development of the CME post-eruption, and (v) the distribution of the dimmings and their order of formation can be used to derive an understanding of the CME's early evolution. An automated coronal dimming region detection and extraction algorithm removes visual observer bias from determination of the physical quantities described above. This allows reproducible, quantifiable results to be mined from very large datasets. The information derived may facilitate more reliable early space weather detection, as well as offering the potential for conducting large-sample studies focused on determining the geoeffectiveness of CMEs, coupled with analysis of their associated coronal dimmings. We present examples of dimming events extracted using our algorithm from existing EUV data, demonstrating the potential for the anticipated application to SDO/AIA data. Metadata returned by our algorithm include: location, area, volume, mass and dynamics of coronal dimmings. As well as running on historic datasets, this algorithm is capable of detecting and extracting coronal dimmings in near real-time.
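
    This record does not detail the detection algorithm itself; a common baseline for dimming detection is thresholding a base-difference image and labelling connected regions, sketched below on synthetic data (images, noise level, and threshold are all hypothetical):

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    shape = (256, 256)

    # Synthetic pre- and post-event EUV images: same quiet corona, independent
    # noise, plus an injected dimming (all values hypothetical).
    quiet = np.full(shape, 100.0)
    pre = quiet + rng.normal(0, 2.0, shape)
    post = quiet + rng.normal(0, 2.0, shape)
    post[60:100, 80:140] -= 40.0          # the dimming region

    # Base-difference image: dimmings show up as strong negative excursions.
    diff = post - pre
    mask = diff < -20.0                   # threshold far below the noise floor

    # Connected-component labelling turns the mask into discrete regions.
    labels, n_regions = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
    print(f"{n_regions} candidate dimming region(s), total area {int(areas.sum())} px")
    ```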

  7. Automated extraction and validation of children's gait parameters with the Kinect.

    PubMed

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. For these reasons, this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton-tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach that couples probabilistic matching of stride template models, learned offline, with estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age analyzes the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.
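
    A much-simplified stand-in for the probabilistic template matching described above: normalized cross-correlation of a stride template against a joint trajectory, with a brute-force search over global temporal scaling. All signals and parameters are synthetic:

    ```python
    import numpy as np

    def zncc(a, b):
        """Zero-normalized cross-correlation of equal-length 1-D signals."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    # Hypothetical stride template: one cycle of an ankle-height signal.
    template = np.sin(np.linspace(0, 2 * np.pi, 50))

    # Synthetic noisy recording containing ~4 strides at a different tempo.
    rng = np.random.default_rng(1)
    signal = np.sin(np.linspace(0, 8 * np.pi, 400)) + rng.normal(0, 0.2, 400)

    best = (-1.0, None, None)
    for scale in (0.8, 1.0, 1.2, 1.5, 2.0):      # global temporal scaling search
        length = int(len(template) * scale)
        # Resample the template to the scaled stride duration.
        scaled = np.interp(np.linspace(0, 1, length),
                           np.linspace(0, 1, len(template)), template)
        for start in range(0, len(signal) - length, 5):  # slide over the signal
            score = zncc(signal[start:start + length], scaled)
            if score > best[0]:
                best = (score, start, length)

    score, start, length = best
    print(f"best stride match: score={score:.2f}, start={start}, len={length}")
    ```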

  8. Active learning: a step towards automating medical concept extraction.

    PubMed

    Kholghi, Mahnoosh; Sitbon, Laurianne; Zuccon, Guido; Nguyen, Anthony

    2016-03-01

    This paper presents an automatic, active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, (1) the contribution of active learning in reducing the annotation effort and (2) the robustness of an incremental active learning framework across different selection criteria and data sets are determined. The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional random fields were used as the supervised method, with least confidence and information density as the 2 selection criteria for the active learning framework. The effect of incremental learning vs standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. The following 2 clinical data sets were used for evaluation: the Informatics for Integrating Biology and the Bedside/Veteran Affairs (i2b2/VA) 2010 natural language processing challenge and the Shared Annotated Resources/Conference and Labs of the Evaluation Forum (ShARe/CLEF) 2013 eHealth Evaluation Lab. The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared with the random sampling baseline, the saving is at least doubled. Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation.
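
    The least-confidence criterion queries the instance whose top predicted label has the lowest probability. A minimal sketch of that loop, with a generic scikit-learn classifier standing in for the CRF sequence model and synthetic data in place of clinical text features:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for token/concept features from clinical reports.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    labeled = list(range(10))                  # small seed set of annotations
    pool = [i for i in range(len(y)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):                        # twenty query rounds
        model.fit(X[labeled], y[labeled])
        # Least confidence: 1 - max class probability; query the most uncertain.
        proba = model.predict_proba(X[pool])
        query = pool[int(np.argmax(1.0 - proba.max(axis=1)))]
        labeled.append(query)                  # simulate annotating that instance
        pool.remove(query)

    model.fit(X[labeled], y[labeled])
    print(f"accuracy with {len(labeled)} labels: {model.score(X[pool], y[pool]):.3f}")
    ```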

  9. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities

    PubMed Central

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high-quality nucleic acids for molecular analysis faces specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill-equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and the Promega Maxwell®16, was compared to a widely used manual extraction kit, the MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products, and the third came from a pipeline transporting seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated by amplification with quantitative PCR (qPCR) and by end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  10. Automated extraction and labelling of the arterial tree from whole-body MRA data.

    PubMed

    Shahzad, Rahil; Dzyubachyk, Oleh; Staring, Marius; Kullberg, Joel; Johansson, Lars; Ahlström, Håkan; Lelieveldt, Boudewijn P F; van der Geest, Rob J

    2015-08-01

    In this work, we present a fully automated algorithm for extraction of the 3D arterial tree and labelling of the tree segments from whole-body magnetic resonance angiography (WB-MRA) sequences. The algorithm consists of two core parts: (i) 3D volume reconstruction from different stations with simultaneous correction of different types of intensity inhomogeneity, and (ii) extraction of the arterial tree and subsequent labelling of the pruned extracted tree. Extraction of the arterial tree is performed using the probability map of the "contrast" class, which is obtained as one of the results of the inhomogeneity correction scheme. We demonstrate that such an approach is more robust than using the difference between the pre- and post-contrast channels traditionally used for this purpose. Labelling of the extracted tree is performed using a combination of graph-based and atlas-based approaches. Validation of the extracted tree was performed on the arterial tree subdivided into 32 segments in a cohort of 35 subjects: 82.4% of segments were completely detected, 11.7% partially detected, and 5.9% missed. With respect to automated labelling accuracy of the 32 segments, various registration strategies were investigated on a training set consisting of 10 scans. Further analysis on the test set consisting of 25 data sets indicates that 69% of the vessel centerline tree in the head and neck region, 80% in the thorax and abdomen region, and 84% in the legs was accurately labelled to the correct vessel segment. These results indicate the clinical potential of our approach in enabling fully automated and accurate analysis of the entire arterial tree. This is the first study that not only automatically extracts the WB-MRA arterial tree, but also labels the vessel tree segments.

  11. Comparisons of Three Automated Systems for Genomic DNA Extraction in a Clinical Diagnostic Laboratory

    PubMed Central

    Lee, Jong-Han; Park, Yongjung; Choi, Jong Rak; Lee, Eun Kyung

    2010-01-01

    Purpose The extraction of nucleic acid is initially a limiting step for a successful molecular-based diagnostic workup. This study aims to compare the effectiveness of three automated DNA extraction systems for clinical laboratory use. Materials and Methods Venous blood samples from 22 healthy volunteers were analyzed using the QIAamp® Blood Mini Kit (Qiagen), the MagNA Pure LC Nucleic Acid Isolation Kit I (Roche), and the Magtration-Magnazorb DNA common kit-200N (PSS). The concentration of extracted DNA was measured by NanoDrop ND-1000 (PeqLab). Extracted DNA was also confirmed by direct agarose gel electrophoresis and was amplified by polymerase chain reaction (PCR) for the human beta-globin gene. Results The corrected concentrations of extracted DNA were 25.42 ± 8.82 ng/µL (13.49-52.85 ng/µL) with the QIAamp® Blood Mini Kit (Qiagen), 22.65 ± 14.49 ng/µL (19.18-93.39 ng/µL) with the MagNA Pure LC Nucleic Acid Isolation Kit I, and 22.35 ± 6.47 ng/µL (12.57-35.08 ng/µL) with the Magtration-Magnazorb DNA common kit-200N (PSS). No statistically significant difference was noticed among the three commercial kits (p > 0.05). Only the mean DNA purity obtained with the PSS kit was slightly lower than the others. All the extracted DNA was successfully identified in direct agarose gel electrophoresis, and all the beta-globin gene PCR products showed a reproducible pattern of bands. Conclusion The effectiveness of the three automated extraction systems is equivalent and sufficient to produce reasonable results. Each laboratory can select an automated system according to its clinical and laboratory conditions. PMID:20046522

  12. Visual Routines for Extracting Magnitude Relations

    ERIC Educational Resources Information Center

    Michal, Audrey L.; Uttal, David; Shah, Priti; Franconeri, Steven L.

    2016-01-01

    Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations in graphs, testing both college students and young children (6- and 8-year-olds). Participants compared relational statements ("Are there more blueberries than oranges?") with simple…

  13. AutoMate Express™ forensic DNA extraction system for the extraction of genomic DNA from biological samples.

    PubMed

    Liu, Jason Y; Zhong, Chang; Holt, Allison; Lagace, Robert; Harrold, Michael; Dixon, Alan B; Brevnov, Maxim G; Shewale, Jaiprakash G; Hennessy, Lori K

    2012-07-01

    The AutoMate Express™ Forensic DNA Extraction System was developed for automatic isolation of DNA from a variety of forensic biological samples. The performance of the system was investigated using a wide range of biological samples. Depending on the sample type, either PrepFiler™ lysis buffer or PrepFiler BTA™ lysis buffer was used to lyse the samples. After lysis and removal of the substrate using the LySep™ column, the lysates in the sample tubes were loaded onto the AutoMate Express™ instrument and DNA was extracted using one of the two instrument extraction protocols. Our study showed that DNA was recovered from as little as 0.025 μL of blood. DNA extracted from casework-type samples was free of detectable PCR inhibitors, and the short tandem repeat profiles were complete, conclusive, and devoid of any PCR artifacts. The system also showed consistent performance in day-to-day operation.

  14. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
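
    The specific filter and watershed variant are the paper's own; as a generic illustration of separating touching objects, below is the standard distance-transform watershed from scikit-image applied to a synthetic pair of overlapping disks (all shapes and parameters hypothetical):

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Synthetic binary image: two overlapping "nuts" (touching disks).
    yy, xx = np.mgrid[0:120, 0:120]
    blob = ((yy - 60) ** 2 + (xx - 45) ** 2 < 30 ** 2) | \
           ((yy - 60) ** 2 + (xx - 80) ** 2 < 30 ** 2)

    # Distance-transform peaks seed the watershed, splitting touching objects.
    distance = ndimage.distance_transform_edt(blob)
    peaks = peak_local_max(distance, labels=blob, min_distance=10)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    labels = watershed(-distance, markers, mask=blob)
    print("objects separated:", labels.max())   # expect 2
    ```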

  15. A fully automated liquid–liquid extraction system utilizing interface detection

    PubMed Central

    Maslana, Eugene; Schmitt, Robert; Pan, Jeffrey

    2000-01-01

    The development of the Abbott Liquid-Liquid Extraction Station was a result of the need for an automated system to perform aqueous extraction on large sets of newly synthesized organic compounds used for drug discovery. The system utilizes a cylindrical laboratory robot to shuttle sample vials between two loading racks, two identical extraction stations, and a centrifuge. Extraction is performed by detecting the phase interface (by difference in refractive index) of the moving column of fluid drawn from the bottom of each vial containing a biphasic mixture. The integration of interface detection with fluid extraction maximizes sample throughput. Abbott-developed electronics process the detector signals. Sample mixing is performed by high-speed solvent injection. Centrifuging of the samples reduces interface emulsions. Operating software permits the user to program wash protocols with any one of six solvents per wash cycle, with as many cycle repeats as necessary. Station capacity is eighty 15-mL vials. This system has proven successful with a broad spectrum of both ethyl acetate and methylene chloride based chemistries. The development and characterization of this automated extraction system will be presented. PMID:18924693
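
    A minimal sketch of the interface-detection idea: the refractive-index signal steps when the phase boundary passes the detector, so the interface can be located at the largest jump in a smoothed trace. The trace and levels below are synthetic, not instrument data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic detector trace: organic-phase level, then a step when the
    # aqueous phase (different refractive index) reaches the detector.
    trace = np.concatenate([np.full(300, 0.20), np.full(200, 0.85)])
    trace += rng.normal(0, 0.01, trace.size)

    # Smooth, then locate the largest absolute jump between adjacent samples.
    kernel = np.ones(9) / 9
    smooth = np.convolve(trace, kernel, mode="same")
    interface = int(np.argmax(np.abs(np.diff(smooth))))

    print(f"interface detected at sample {interface}")  # expect ~300
    ```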

  16. Automated renal histopathology: digital extraction and quantification of renal pathology

    NASA Astrophysics Data System (ADS)

    Sarder, Pinaki; Ginley, Brandon; Tomaszewski, John E.

    2016-03-01

    The branch of pathology concerned with excess blood serum proteins being excreted in the urine pays particular attention to the glomerulus, a small intertwined bunch of capillaries located at the beginning of the nephron. Normal glomeruli allow a moderate amount of blood proteins to be filtered; proteinuric glomeruli allow a large amount of blood proteins to be filtered. Diagnosis of proteinuric diseases requires time-intensive manual examination of the structural compartments of the glomerulus from renal biopsies. Pathological examination includes cellularity of individual compartments, Bowman's and luminal space segmentation, cellular morphology, glomerular volume, capillary morphology, and more. Long examination times may increase diagnosis time and/or reduce the precision of the diagnostic process. Automatic quantification holds strong potential to reduce renal diagnostic time. We have developed a computational pipeline capable of automatically segmenting relevant features from renal biopsies. Our method first segments glomerular compartments from renal biopsies by isolating regions with high nuclear density. Gabor texture segmentation is used to accurately define glomerular boundaries. Bowman's and luminal spaces are segmented using morphological operators. Nuclei structures are segmented using color deconvolution, morphological processing, and bottleneck detection. The average computation time of feature extraction for a typical biopsy, comprising ~12 glomeruli, is ~69 s using an Intel(R) Core(TM) i7-4790 CPU, which is ~65× faster than manual processing. Using images from rat renal tissue samples, automatic glomerular structural feature estimation was reproducibly demonstrated for 15 biopsy images, which contained 148 individual glomeruli images. The proposed method holds immense potential to enhance the information available while making clinical diagnoses.
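
    Nuclei segmentation by color deconvolution, as mentioned above, can be illustrated with scikit-image's built-in stain separation; the bundled immunohistochemistry image stands in for a renal biopsy, and the threshold/morphology choices are illustrative rather than the paper's pipeline:

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage import data
    from skimage.color import rgb2hed
    from skimage.filters import threshold_otsu

    # Bundled IHC image stands in for a renal biopsy section.
    rgb = data.immunohistochemistry()

    # Color deconvolution into Haematoxylin/Eosin/DAB channels.
    hed = rgb2hed(rgb)
    hematoxylin = hed[:, :, 0]          # nuclei-stain intensity

    # Otsu threshold isolates nuclei; morphological opening removes specks.
    nuclei = hematoxylin > threshold_otsu(hematoxylin)
    nuclei = ndimage.binary_opening(nuclei, structure=np.ones((3, 3)))

    labels, n_nuclei = ndimage.label(nuclei)
    print(f"segmented {n_nuclei} candidate nuclei")
    ```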

  17. Automated DNA extraction from genetically modified maize using aminosilane-modified bacterial magnetic particles.

    PubMed

    Ota, Hiroyuki; Lim, Tae-Kyu; Tanaka, Tsuyoshi; Yoshino, Tomoko; Harada, Manabu; Matsunaga, Tadashi

    2006-09-18

    A novel automated system, PNE-1080, equipped with eight automated pestle units and a spectrophotometer, was developed for genomic DNA extraction from maize using aminosilane-modified bacterial magnetic particles (BMPs). The use of aminosilane-modified BMPs allowed highly accurate DNA recovery. The (A260 − A320):(A280 − A320) ratio of the extracted DNA was 1.9 ± 0.1. The DNA quality was sufficiently pure for PCR analysis. The PNE-1080 offered rapid assay completion (30 min) with high accuracy. Furthermore, the results of real-time PCR confirmed that our proposed method permitted the accurate determination of genetically modified DNA composition and correlated well with results obtained by conventional cetyltrimethylammonium bromide (CTAB)-based methods.
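
    The purity check above is the background-corrected absorbance ratio; a one-liner makes the arithmetic explicit (the absorbance readings below are hypothetical):

    ```python
    def dna_purity(a260, a280, a320):
        """Background-corrected A260/A280 ratio; ~1.8-2.0 suggests
        protein-free DNA."""
        return (a260 - a320) / (a280 - a320)

    # Hypothetical readings from the on-board spectrophotometer:
    print(round(dna_purity(a260=0.52, a280=0.28, a320=0.02), 2))  # -> 1.92
    ```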

  18. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 s, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364
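
    The PCA-SVM combination — compress the texture feature vector, then classify — is straightforward to reproduce with scikit-learn; the digits dataset below is only a stand-in for the calculus texture features, and the pipeline settings are illustrative:

    ```python
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in features: digits dataset in place of texture features.
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # PCA compresses the feature vector before the SVM classifier.
    clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
    ```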

  19. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 s, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.

  1. An integrated approach for automating validation of extracted ion chromatographic peaks.

    PubMed

    Nelson, William D; Viele, Kert; Lynn, Bert C

    2008-09-15

    Accurate determination of extracted ion chromatographic peak areas in isotope-labeled quantitative proteomics is difficult to automate. Manual validation of identified peaks is typically required. We have integrated a peak confidence scoring algorithm into existing tools which are compatible with analysis pipelines based on the standards from the Institute for Systems Biology. This algorithm automatically excludes incorrectly identified peaks, improving the accuracy of the final protein expression ratio calculation. http://www.chem.uky.edu/research/lynn/Nelson.pdf.

  2. Evaluation of an automated protocol for efficient and reliable DNA extraction of dietary samples.

    PubMed

    Wallinger, Corinna; Staudacher, Karin; Sint, Daniela; Thalinger, Bettina; Oehm, Johannes; Juen, Anita; Traugott, Michael

    2017-08-01

    Molecular techniques have become an important tool to empirically assess feeding interactions. The increased usage of next-generation sequencing approaches has stressed the need for fast DNA extraction that does not compromise DNA quality. Dietary samples here pose a particular challenge, as these demand high-quality DNA extraction procedures for obtaining the minute quantities of short-fragmented food DNA. Automated high-throughput procedures significantly decrease time and costs and allow for standardization of total DNA extraction. However, these approaches have not yet been evaluated for dietary samples. We tested the efficiency of an automated DNA extraction platform and a traditional CTAB protocol, employing a variety of dietary samples including invertebrate whole-body extracts as well as invertebrate and vertebrate gut content samples and feces. Extraction efficacy was quantified using the proportions of successful PCR amplifications of both total and prey DNA, and cost was estimated in terms of time and material expense. For extraction of total DNA, the automated platform performed better for both invertebrate and vertebrate samples. This was also true for prey detection in vertebrate samples. For dietary analysis in invertebrates, there is still room for improvement when using the high-throughput system for optimal DNA yields. Overall, the automated DNA extraction system proved to be a promising alternative to labor-intensive, low-throughput manual extraction methods such as CTAB, opening up the opportunity for extensive use of this cost-efficient and innovative methodology, at low contamination risk, in trophic ecology.

  3. An advanced distributed automated extraction of drainage network model on high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Ye, A.; Xu, J.; Ma, F.; Deng, X.; Miao, C.; Gong, W.; Di, Z.

    2014-07-01

    A high-resolution, high-accuracy drainage network map is a prerequisite for simulating the water cycle in land surface hydrological models. The objective of this study was to develop a new automated drainage network extraction model that can derive high-precision, continuous drainage networks from high-resolution DEMs (Digital Elevation Models). Extracting a drainage network from a high-resolution DEM demands substantial computational resources, and conventional GIS methods often cannot complete the computation on high-resolution DEMs of large basins because the number of grid cells is too large. In order to decrease the computation time, an advanced distributed automated drainage network extraction model (Adam) was proposed in this study. The Adam model has two features: (1) searching upward from the outlet of the basin instead of sink filling, and (2) dividing sub-basins on a low-resolution DEM, then extracting the drainage network on the high-resolution DEM of each sub-basin. The case study used elevation data from the Shuttle Radar Topography Mission (SRTM) at 3 arc-second resolution in the Zhujiang River basin, China. The results show that the Adam model can dramatically reduce the computation time. The extracted drainage network was continuous and more accurate than HydroSHEDS (Hydrological data and maps based on Shuttle Elevation Derivatives at multiple Scales).
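
    Searching upward from the outlet amounts to a breadth-first traversal of the D8 flow-direction grid: a cell joins the basin when it drains into a cell already in the basin. A toy sketch; the grid, its encoding, and the outlet location are hypothetical, not the Adam model's data structures:

    ```python
    from collections import deque

    # Toy D8 flow directions: each cell maps to the (drow, dcol) step toward
    # the neighbour it drains into; None marks the basin outlet.
    flow = {
        (0, 0): (1, 0), (0, 1): (1, 0), (0, 2): (1, 0),
        (1, 0): (1, 1), (1, 1): (1, 1), (1, 2): (1, 0),
        (2, 0): (0, 1), (2, 1): (0, 1), (2, 2): None,  # (2, 2) is the outlet
    }

    def upstream_cells(outlet):
        """Breadth-first search upward from the outlet: a cell joins the basin
        when its flow direction points at a cell already in the basin."""
        basin = {outlet}
        frontier = deque([outlet])
        while frontier:
            cell = frontier.popleft()
            for src, step in flow.items():   # simplified O(N) neighbour scan
                if src not in basin and step is not None:
                    if (src[0] + step[0], src[1] + step[1]) == cell:
                        basin.add(src)
                        frontier.append(src)
        return basin

    print(f"cells draining to (2, 2): {len(upstream_cells((2, 2)))} of {len(flow)}")
    ```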

  4. Metal-organic framework mixed-matrix disks: Versatile supports for automated solid-phase extraction prior to chromatographic separation.

    PubMed

    Ghani, Milad; Font Picó, Maria Francesca; Salehinia, Shima; Palomino Cabello, Carlos; Maya, Fernando; Berlier, Gloria; Saraji, Mohammad; Cerdà, Víctor; Turnes Palomino, Gemma

    2017-03-10

    We present for the first time the application of metal-organic framework (MOF) mixed-matrix disks (MMD) for the automated flow-through solid-phase extraction (SPE) of environmental pollutants. Zirconium terephthalate UiO-66 and UiO-66-NH2 MOFs of different sizes (90, 200, and 300 nm) have been incorporated into mechanically stable polyvinylidene difluoride (PVDF) disks. The performance of the MOF-MMDs for automated SPE of seven substituted phenols prior to HPLC analysis has been evaluated using the sequential injection analysis technique. MOF-MMDs enabled the simultaneous extraction of phenols with the concomitant size exclusion of molecules of larger size. The best extraction performance was obtained using a MOF-MMD containing 90 nm UiO-66-NH2 crystals. Using the selected MOF-MMD, detection limits ranging from 0.1 to 0.2 μg L⁻¹ were obtained. Relative standard deviations ranged from 3.9 to 5.3% intra-day and 4.7 to 5.7% inter-day. Membrane batch-to-batch reproducibility was from 5.2 to 6.4%. Three different groundwater samples were analyzed with the proposed method using MOF-MMDs, obtaining recoveries ranging from 90 to 98% for all tested analytes.

  5. Automated information extraction of key trial design elements from clinical trial publications.

    PubMed

    de Bruijn, Berry; Carini, Simona; Kiritchenko, Svetlana; Martin, Joel; Sim, Ida

    2008-11-06

    Clinical trials are one of the most valuable sources of scientific evidence for improving the practice of medicine. The Trial Bank project aims to improve structured access to trial findings by including formalized trial information into a knowledge base. Manually extracting trial information from published articles is costly, but automated information extraction techniques can assist. The current study highlights a single architecture to extract a wide array of information elements from full-text publications of randomized clinical trials (RCTs). This architecture combines a text classifier with a weak regular expression matcher. We tested this two-stage architecture on 88 RCT reports from 5 leading medical journals, extracting 23 elements of key trial information such as eligibility rules, sample size, intervention, and outcome names. Results prove this to be a promising avenue to help critical appraisers, systematic reviewers, and curators quickly identify key information elements in published RCT articles.
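
    A compact way to see the two-stage architecture — a classifier proposes candidate sentences, then a deliberately weak regular expression pulls the value — is sketched below for the sample-size element. The keyword rule stands in for the trained text classifier, and all sentences are invented:

    ```python
    import re

    SENTENCES = [
        "Patients were followed for 12 months after discharge.",
        "We randomly assigned 422 patients to treatment or placebo.",
        "The primary outcome was all-cause mortality.",
    ]

    def classify(sentence):
        """Stage 1 stand-in: flag sentences likely to state the sample size.
        (A trained text classifier would replace this keyword rule.)"""
        return bool(re.search(r"\b(assigned|enrolled|recruited)\b", sentence))

    # Stage 2: a weak regex, applied only to flagged sentences, extracts the
    # number attached to an enrolment noun.
    SAMPLE_SIZE = re.compile(r"\b(\d{2,6})\s+(?:patients|participants|subjects)\b")

    for s in SENTENCES:
        if classify(s):
            m = SAMPLE_SIZE.search(s)
            if m:
                print(f"sample size: {m.group(1)}  <- {s!r}")
    ```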

  6. Comparison of QIAGEN automated nucleic acid extraction methods for CMV quantitative PCR testing.

    PubMed

    Miller, Steve; Seet, Henrietta; Khan, Yasmeen; Wright, Carolyn; Nadarajah, Rohan

    2010-04-01

    We examined the effect of nucleic acid extraction methods on the analytic characteristics of a quantitative polymerase chain reaction (PCR) assay for cytomegalovirus (CMV). Human serum samples were extracted with 2 automated instruments (BioRobot EZ1 and QIAsymphony SP, QIAGEN, Valencia, CA) and CMV PCR results compared with those of pp65 antigenemia testing. Both extraction methods yielded results that were comparably linear and precise, whereas the QIAsymphony SP had a slightly lower limit of detection (1.92 log10 copies/mL vs 2.26 log10 copies/mL). In both cases, PCR was more sensitive than CMV antigen detection, detecting CMV viremia in 12% (EZ1) and 21% (QIAsymphony) of antigen-negative specimens. This study demonstrates the feasibility of using 2 different extraction techniques to yield results within 0.5 log10 copies/mL of the mean value, a level that would allow for clinical comparison between different laboratory assays.
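
    Quantitative PCR reports viral load by mapping a cycle threshold (Ct) onto a standard curve, Ct = slope·log10(copies/mL) + intercept, after which the 0.5 log10 comparison above is a simple subtraction. The slope, intercept, and Ct values below are hypothetical:

    ```python
    # Hypothetical standard curve: Ct = slope * log10(copies/mL) + intercept.
    SLOPE, INTERCEPT = -3.32, 38.0   # slope ~ -3.32 implies ~100% PCR efficiency

    def log10_copies_per_ml(ct):
        return (ct - INTERCEPT) / SLOPE

    ez1, symphony = log10_copies_per_ml(24.1), log10_copies_per_ml(22.9)
    print(f"EZ1: {ez1:.2f} log10 copies/mL; QIAsymphony: {symphony:.2f} log10 copies/mL")
    print("within 0.5 log10 of each other:", abs(ez1 - symphony) <= 0.5)
    ```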

  7. Bacterial and fungal DNA extraction from positive blood culture bottles: a manual and an automated protocol.

    PubMed

    Mäki, Minna

    2015-01-01

    When adopting a gene amplification-based method for routine sepsis diagnostics with blood culture samples as the specimen type, an efficient DNA extraction step is a prerequisite for successful and sensitive downstream analysis. In recent years, a number of in-house and commercial DNA extraction solutions have become available. Careful evaluation with respect to cell wall disruption of various microbes and subsequent recovery of microbial DNA, without putative gene amplification inhibitors, should be conducted prior to selecting the most feasible DNA extraction solution for the downstream analysis used. Since gene amplification technologies have been developed to be highly sensitive for a broad range of microbial species, it is also important to confirm that the sample preparation reagents and materials used are bioburden-free, to avoid any risk of false-positive results or interference with the diagnostic process. Here, one manual and one automated DNA extraction system feasible for blood culture samples are described.

  8. Automated Device for Asynchronous Extraction of RNA, DNA, or Protein Biomarkers from Surrogate Patient Samples.

    PubMed

    Bitting, Anna L; Bordelon, Hali; Baglia, Mark L; Davis, Keersten M; Creecy, Amy E; Short, Philip A; Albert, Laura E; Karhade, Aditya V; Wright, David W; Haselton, Frederick R; Adams, Nicholas M

    2016-12-01

    Many biomarker-based diagnostic methods are inhibited by nontarget molecules in patient samples, necessitating biomarker extraction before detection. We have developed a simple device that purifies RNA, DNA, or protein biomarkers from complex biological samples without robotics or fluid pumping. The device design is based on functionalized magnetic beads, which capture biomarkers and remove background biomolecules by magnetically transferring the beads through processing solutions arrayed within small-diameter tubing. The process was automated by wrapping the tubing around a disc-like cassette and rotating it past a magnet using a programmable motor. This device recovered biomarkers at ~80% of the yield of the operator-dependent extraction method published previously. The device was validated by extracting biomarkers from a panel of surrogate patient samples containing clinically relevant concentrations of (1) influenza A RNA in nasal swabs, (2) Escherichia coli DNA in urine, (3) Mycobacterium tuberculosis DNA in sputum, and (4) Plasmodium falciparum protein and DNA in blood. The device successfully extracted each biomarker type from samples representing low levels of clinically relevant infectivity (i.e., 7.3 copies/µL of influenza A RNA, 405 copies/µL of E. coli DNA, 0.22 copies/µL of TB DNA, 167 copies/µL of malaria parasite DNA, and 2.7 pM of malaria parasite protein).

  9. PRECOG: a tool for automated extraction and visualization of fitness components in microbial growth phenomics.

    PubMed

    Fernandez-Ricaud, Luciano; Kourtchenko, Olga; Zackrisson, Martin; Warringer, Jonas; Blomberg, Anders

    2016-06-23

    Phenomics is a field in functional genomics that records variation in organismal phenotypes in the genetic, epigenetic or environmental context at a massive scale. For microbes, the key phenotype is the growth in population size because it contains information that is directly linked to fitness. Due to technical innovations and extensive automation, our capacity to record complex and dynamic microbial growth data is rapidly outpacing our capacity to dissect and visualize this data and extract the fitness components it contains, hampering progress in all fields of microbiology. To automate visualization, analysis and exploration of complex and highly resolved microbial growth data, as well as standardized extraction of the fitness components it contains, we developed the software PRECOG (PREsentation and Characterization Of Growth-data). PRECOG allows the user to quality-control, interact with and evaluate microbial growth data with ease, speed and accuracy, also in cases of non-standard growth dynamics. Quality indices filter high- from low-quality growth experiments, reducing false positives. The pre-processing filters in PRECOG are computationally inexpensive and yet functionally comparable to more complex neural network procedures. We provide examples where data calibration, project design and feature extraction methodologies have a clear impact on the estimated growth traits, emphasising the need for proper standardization in data analysis. PRECOG is a tool that streamlines growth data pre-processing, phenotypic trait extraction, visualization, distribution and the creation of vast and informative phenomics databases.
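
    One standard fitness component such tools extract is the maximum specific growth rate: the steepest slope of log-transformed population size over a sliding window. A sketch on a synthetic logistic curve (all parameters hypothetical, not PRECOG's algorithm):

    ```python
    import numpy as np

    # Synthetic growth curve: logistic population size sampled every 20 min.
    t = np.arange(0, 24, 1 / 3)                       # hours
    pop = 0.05 + 1.0 / (1 + np.exp(-0.6 * (t - 10)))  # hypothetical parameters

    log_pop = np.log(pop)
    window = 6                                        # 2-hour fitting window

    # Maximum specific growth rate: steepest linear slope of log(pop) vs time.
    rates = [np.polyfit(t[i:i + window], log_pop[i:i + window], 1)[0]
             for i in range(len(t) - window)]
    i_max = int(np.argmax(rates))
    print(f"max growth rate ~{rates[i_max]:.3f} /h around t={t[i_max]:.1f} h")
    ```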

  10. [Method for automated extraction and purification of nucleic acids and its implementation in microfluidic system].

    PubMed

    Mamaev, D D; Khodakov, D A; Dement'eva, E I; Filatov, I V; Iurasov, D A; Cherepanov, A I; Vasiliskov, V A; Smoldovskaia, O V; Zimenkov, D V; Griadunov, D A; Mikhaĭlovich, V M; Zasedatelev, A S

    2011-01-01

    A method and a microfluidic device for automated extraction and purification of nucleic acids from biological samples have been developed. The method involves disruption of bacterial cells and/or viral particles by combining enzymatic and chemical lysis procedures, followed by solid-phase sorbent extraction and purification of nucleic acids. The procedure is carried out in an automated mode in a microfluidic module isolated from the outside environment, which minimizes contact of the researcher with potentially infectious samples and, consequently, decreases the risk of laboratory-acquired infections. The module includes reservoirs with lyophilized components for lysis and washing buffers; a microcolumn with a solid-phase sorbent; reservoirs containing water, ethanol, and water-ethanol buffer solutions for dissolving the freeze-dried buffer components, rinsing the microcolumn, and eluting nucleic acids; and the microchannels and valves needed for directing fluids inside the module. The microfluidic module is placed into a control unit that delivers pressure, heats, mixes reagents, and pumps solutions within the module. The microfluidic system performs extraction and purification of nucleic acids with high efficiency in 40 min, and the extracted nucleic acids can be used directly in PCR and microarray assays.

  11. Automated extraction of single H atoms with STM: tip state dependency.

    PubMed

    Møller, Morten; Jarvis, Samuel P; Guérinet, Laurent; Sharp, Peter; Woolley, Richard; Rahe, Philipp; Moriarty, Philip

    2017-02-17

    The atomistic structure of the tip apex plays a crucial role in performing reliable atomic-scale surface and adsorbate manipulation using scanning probe techniques. We have developed an automated extraction routine for controlled removal of single hydrogen atoms from the H:Si(100) surface. The set of atomic extraction protocols detects a variety of desorption events during scanning tunneling microscope (STM)-induced modification of the hydrogen-passivated surface. The influence of the tip state on the probability of hydrogen removal was examined by comparing the desorption efficiency for various classifications of STM topographs (rows, dimers, atoms, etc.). We find that dimer-row-resolving tip apices extract hydrogen atoms most readily and reliably (and with the least spurious desorption), while tip states which provide atomic resolution counter-intuitively have a lower probability of single H atom removal.

  12. Automated extraction of single H atoms with STM: tip state dependency

    NASA Astrophysics Data System (ADS)

    Møller, Morten; Jarvis, Samuel P.; Guérinet, Laurent; Sharp, Peter; Woolley, Richard; Rahe, Philipp; Moriarty, Philip

    2017-02-01

    The atomistic structure of the tip apex plays a crucial role in performing reliable atomic-scale surface and adsorbate manipulation using scanning probe techniques. We have developed an automated extraction routine for controlled removal of single hydrogen atoms from the H:Si(100) surface. The set of atomic extraction protocols detects a variety of desorption events during scanning tunneling microscope (STM)-induced modification of the hydrogen-passivated surface. The influence of the tip state on the probability of hydrogen removal was examined by comparing the desorption efficiency for various classifications of STM topographs (rows, dimers, atoms, etc.). We find that dimer-row-resolving tip apices extract hydrogen atoms most readily and reliably (and with the least spurious desorption), while tip states which provide atomic resolution counter-intuitively have a lower probability of single H atom removal.

  13. Factors controlling the manual and automated extraction of image information using imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.

    2004-07-01

    The factors governing the extraction of useful information from polarimetric images depend upon the image acquisition and analytical methodologies being used, and upon systematic and environmental variations present during the acquisition process. The acquisition process generally occurs with foreknowledge of the analysis to be used. Broadly, interactive image analysis and automated image analysis are two different procedures: in each case, there are technical challenges. Imaging polarimetry is more complex than other imaging methodologies, and produces an increased dimensionality. However, there are several potential broad areas of interactive (manual) and automated remote sensing in which imaging polarimetry can provide useful additional information. A review is presented of the factors controlling feature discrimination, of metrics that are used, and of some proposed directions for future research.

  14. Automated Kinematic Extraction of Wing and Body Motions of Free Flying Diptera

    NASA Astrophysics Data System (ADS)

    Kostreski, Nicholas I.

    In the quest to understand the forces generated by micro aerial systems powered by oscillating appendages, it is necessary to study the kinematics that generate those forces. Automated and manual tracking techniques were developed to extract the complex wing and body motions of dipteran insects, ideal micro aerial systems, in free flight. Video sequences were captured by three high-speed cameras (7500 fps) oriented orthogonally around a clear flight test chamber. Synchronization and image-based triggering were made possible by an automated triggering circuit. A multi-camera calibration was implemented using image-based tracking techniques. Three-dimensional reconstructions of the insect were generated from the 2-D images by shape-from-silhouette (SFS) methods. An intensity-based segmentation of the wings and body was performed using a mixture of Gaussians. In addition to geometric and cost-based filtering, spectral clustering was used to refine the reconstruction, and Principal Component Analysis (PCA) was performed to find the body roll axis and wing-span axes. The unobservable roll state of the cylindrically shaped body was successfully estimated by combining observations of the wing kinematics with a wing symmetry assumption. Wing pitch was determined by a ray-tracing technique to compute and minimize a point-to-line cost function. Linear estimation with assumed motion models was accomplished by discrete Kalman filtering of the measured body states. Generative models were developed for different species of diptera for model-based tracking, simulation, and extraction of inertial properties. Manual and automated tracking results were analyzed, and insect flight simulation videos were developed to quantify ground-truth errors for an assumed model. The results demonstrated the automated tracker to have comparable performance to a human digitizer, though manual techniques displayed superiority during aggressive maneuvers and image blur.
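
    Of the steps above, the Kalman filtering of measured body states is the easiest to make concrete. A minimal constant-velocity filter on a single noisy position coordinate; the motion model, noise levels, and frame interval are illustrative, not the thesis's values:

    ```python
    import numpy as np

    dt = 0.01                               # frame interval (illustrative)
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = np.eye(2) * 1e-6                    # process noise (hypothetical)
    R = np.array([[0.05 ** 2]])             # measurement noise (hypothetical)

    x = np.zeros((2, 1))                    # state: [position, velocity]
    P = np.eye(2)                           # state covariance

    rng = np.random.default_rng(3)
    truth = 2.0 * dt * np.arange(200)       # body moving at 2 units/s
    zs = truth + rng.normal(0, 0.05, 200)   # noisy image-based measurements

    for z in zs:
        x = F @ x                           # predict under the motion model
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x         # measurement residual
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated velocity: {x[1, 0]:.2f} units/s (true 2.0)")
    ```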

  15. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources.

    PubMed

    Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2015-09-01

    Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied this method to develop algorithms to identify patients with rheumatoid arthritis (RA), and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The areas under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared to AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features, and the majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping.
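
    Penalized (here L1-regularized) logistic regression performs the feature selection implicitly: weights of uninformative features are driven to exactly zero. A sketch on synthetic data standing in for NLP concept counts; the penalty strength and dataset are hypothetical:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for concept-count features: 100 features, few informative.
    X, y = make_classification(n_samples=400, n_features=100, n_informative=8,
                               random_state=0)

    # The L1 penalty zeroes out weights of uninformative features.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X, y)

    kept = np.flatnonzero(model.coef_[0])
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{kept.size} of {X.shape[1]} features kept; cross-validated AUC={auc:.3f}")
    ```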

  16. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    USGS Publications Warehouse

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and can further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow: data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher-slope terrain. Concurrent processing through the high-performance computing environment is shown to facilitate and refine this extraction workflow.

  17. Analyzing Automated Instructional Systems: Metaphors from Related Design Professions.

    ERIC Educational Resources Information Center

    Jonassen, David H.; Wilson, Brent G.

    Noting that automation has had an impact on virtually every manufacturing and information operation in the world, including instructional design (ID), this paper suggests three basic metaphors for automating instructional design activities: (1) computer-aided design and manufacturing (CAD/CAM) systems; (2) expert system advisor systems; and (3)…

  18. Automated milk fat extraction for the analyses of persistent organic pollutants.

    PubMed

    Archer, Jeffrey C; Jenkins, Roy G

    2017-01-15

    We have utilized an automated acid hydrolysis technology, followed by an abbreviated Soxhlet extraction technique, to obtain fat from whole milk for the determination of persistent organic pollutants, namely polychlorinated dibenzo-p-dioxins, polychlorinated dibenzofurans, and polychlorinated biphenyls. The process simply involves (1) pouring the liquid milk into the hydrolysis beaker with reagents and standards, (2) drying the obtained fat on a filter paper, and (3) obtaining pure fat via the modified Soxhlet extraction using 100 mL of hexane per sample. This technique contrasts with traditional, manually intensive liquid-liquid extractions and avoids the preparatory step of freeze-drying the samples for pressurized liquid extractions. Along with these extraction improvements, analytical results closely agree between the methods, so no quality has been compromised. The native spike (n=12) and internal standard (n=24) precision and accuracy results are within EPA Methods 1613 and 1668 limits, while the median (n=6) Toxic Equivalency Quotient (TEQ) for polychlorinated dibenzo-p-dioxins/polychlorinated dibenzofurans and the concentration of the marker polychlorinated biphenyls show percent differences of 1% and 12%, respectively, compared to 315 milk samples previously analyzed at the same laboratory using liquid-liquid extraction. During our feasibility studies, both egg and fish tissue showed substantial promise with this technique as well.
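
    The TEQ reported above is a weighted sum: each congener concentration is multiplied by its toxic equivalency factor (TEF) and the products are summed. A sketch with a small subset of TEF values following the WHO 2005 scheme (verify against the published list before use) and entirely hypothetical concentrations:

    ```python
    # Toxic equivalency: TEQ = sum(concentration_i * TEF_i) over congeners.
    TEF = {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "2,3,7,8-TCDF": 0.1}

    measured_pg_per_g_fat = {      # hypothetical results for one milk sample
        "2,3,7,8-TCDD": 0.12,
        "1,2,3,7,8-PeCDD": 0.30,
        "2,3,7,8-TCDF": 0.45,
    }

    teq = sum(conc * TEF[congener]
              for congener, conc in measured_pg_per_g_fat.items())
    print(f"TEQ = {teq:.3f} pg TEQ/g fat")   # 0.12 + 0.30 + 0.045 = 0.465
    ```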

  19. Comparison of manual and automated nucleic acid extraction methods from clinical specimens for microbial diagnosis purposes.

    PubMed

    Wozniak, Aniela; Geoffroy, Enrique; Miranda, Carolina; Castillo, Claudia; Sanhueza, Francia; García, Patricia

    2016-11-01

    The choice of nucleic acid (NA) extraction method for molecular diagnosis in microbiology is of major importance because of the low microbial load and the differing natures of microorganisms and clinical specimens. The NA yield of different extraction methods has mostly been studied using spiked samples, but information from real human clinical specimens is scarce. The purpose of this study was to compare the performance of a manual low-cost extraction method (Qiagen kit or salting-out extraction method) with the automated high-cost MagNA Pure Compact method. According to cycle threshold values for different pathogens, MagNA Pure is as efficient as Qiagen for NA extraction from noncomplex clinical specimens (nasopharyngeal swab, skin swab, plasma, respiratory specimens). In contrast, according to cycle threshold values for RNase P, the MagNA Pure method may not be appropriate for NA extraction from blood. We believe that the MagNA Pure's versatility, reduced risk of cross-contamination, and reduced hands-on time compensate for its high cost.

  1. Automated extraction of DNA from biological stains on fabric from crime cases. A comparison of a manual and three automated methods.

    PubMed

    Stangegaard, Michael; Hjort, Benjamin B; Hansen, Thomas N; Hoflund, Anders; Mogensen, Helle S; Hansen, Anders J; Morling, Niels

    2013-05-01

    The presence of PCR inhibitors in extracted DNA may interfere with the subsequent quantification and short tandem repeat (STR) reactions used in forensic genetic DNA typing. DNA extraction from fabric for forensic genetic purposes may be challenging due to the occasional presence of PCR inhibitors that may be co-extracted with the DNA. Using 120 forensic trace evidence samples consisting of various types of fabric, we compared three automated DNA extraction methods based on magnetic beads (the PrepFiler Express Forensic DNA Extraction Kit on an AutoMate Express, and the QIAsymphony DNA Investigator kit on a QIAsymphony SP either with the sample pre-treatment recommended by Qiagen or with an in-house optimized sample pre-treatment) and one manual method (Chelex), with the aim of reducing the amount of PCR inhibitors in the DNA extracts and increasing the proportion of reportable STR profiles. A total of 480 samples were processed. The highest DNA recovery was obtained with the PrepFiler Express kit on an AutoMate Express, while the lowest DNA recovery was obtained using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen. Extraction using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen resulted in the lowest percentage of PCR inhibition (0%), while extraction using manual Chelex resulted in the highest percentage of PCR inhibition (51%). The largest number of reportable STR profiles was obtained with DNA from samples extracted with the PrepFiler Express kit (75%), while the lowest number was obtained with DNA from samples extracted using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen (41%).

  2. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    PubMed

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation, however, is not easy: the automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO® 150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch® Forensic DNA Purification Kit (Invitrogen), the PrepFiler™ Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™ 96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  3. Extraction platform evaluations: a comparison of AutoMate Express™, EZ1® Advanced XL, and Maxwell® 16 Bench-top DNA extraction systems.

    PubMed

    Davis, Carey P; King, Jonathan L; Budowle, Bruce; Eisenberg, Arthur J; Turnbough, Meredith A

    2012-01-01

    The DNA extraction performance of three low-throughput extraction systems was evaluated. The instruments and respective chemistries all use a similar extraction methodology that involves binding DNA to a coated magnetic resin in the presence of chaotropic salt, washing of the resin to remove undesirable compounds, and elution of DNA from the particles in a low-salt solution. The AutoMate Express™ (Life Technologies Corporation, Carlsbad, CA), EZ1® Advanced XL (Qiagen Inc., Valencia, CA), and Maxwell® 16 (Promega Corporation, Madison, WI) were compared using a variety of samples, including blood on swabs, blood on denim, blood on cotton, blood mixed with inhibitors (a mixture of indigo, hematin, humic acid, and urban dust) on cotton, blood on FTA® paper, saliva residue on cigarette butt paper, epithelial cells on cotton swabs, neat semen on cotton, hair roots, bones, and teeth. Each instrument had a recommended pre-processing protocol for each sample type, and these protocols were followed strictly to reduce user bias. All extractions were performed in triplicate for each sample type. The three instruments were compared on the basis of quantity of DNA recovered (as determined by real-time PCR), relative level of inhibitors present in the extract (shown as shifts in the CT value for the internal PCR control in the real-time PCR assay), STR peak heights, use of consumables not included in the extraction kits, ease of use, and application flexibility. All three systems performed well; however, extraction efficiency varied by sample type and with the pre-processing protocol applied to the various samples. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  4. An integrated approach for automating validation of extracted ion chromatographic peaks

    PubMed Central

    Nelson, William D.; Viele, Kert; Lynn, Bert C.

    2008-01-01

    Summary: Accurate determination of extracted ion chromatographic peak areas in isotope-labeled quantitative proteomics is difficult to automate. Manual validation of identified peaks is typically required. We have integrated a peak confidence scoring algorithm into existing tools which are compatible with analysis pipelines based on the standards from the Institute for Systems Biology. This algorithm automatically excludes incorrectly identified peaks, improving the accuracy of the final protein expression ratio calculation. Contact: wnels2@uky.edu Source and Supplementary Information: http://www.chem.uky.edu/research/lynn/Nelson.pdf PMID:18653519

  5. Automated solid-phase extraction of herbicides from water for gas chromatographic-mass spectrometric analysis

    USGS Publications Warehouse

    Meyer, M.T.; Mills, M.S.; Thurman, E.M.

    1993-01-01

    An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites, from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing flow-rate decreases their retention. Standard curve r² values of 0.998–1.000 for each compound were consistently obtained and a quantitation level of 0.05 µg/l was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.

  6. Automated CO2 extraction from air for clumped isotope analysis in the atmo- and biosphere

    NASA Astrophysics Data System (ADS)

    Hofmann, Magdalena; Ziegler, Martin; Pons, Thijs; Lourens, Lucas; Röckmann, Thomas

    2015-04-01

    The conventional stable isotope ratios 13C/12C and 18O/16O in atmospheric CO2 are a powerful tool for unraveling the global carbon cycle. In recent years, it has been suggested that the abundance of the very rare isotopologue 13C18O16O on m/z 47 might be a promising tracer to complement conventional stable isotope analysis of atmospheric CO2 [Affek and Eiler, 2006; Affek et al. 2007; Eiler and Schauble, 2004; Yeung et al., 2009]. Here we present an automated analytical system that is designed for clumped isotope analysis of atmo- and biospheric CO2. The carbon dioxide gas is quantitatively extracted from about 1.5 L of air (ATP). The automated stainless steel extraction and purification line consists of three main components: (i) a drying unit (a magnesium perchlorate unit and a cryogenic water trap), (ii) two CO2 traps cooled with liquid nitrogen [Werner et al., 2001] and (iii) a GC column packed with Porapak Q that can be cooled with liquid nitrogen to -30°C during purification and heated up to 230°C in-between two extraction runs. After CO2 extraction and purification, the CO2 is automatically transferred to the mass spectrometer. Mass spectrometric analysis of the 13C18O16O abundance is carried out in dual inlet mode on a MAT 253 mass spectrometer. Each analysis generally consists of 80 change-over cycles. Three additional Faraday cups were added to the mass spectrometer for simultaneous analysis of the mass-to-charge ratios 44, 45, 46, 47, 48 and 49. The reproducibility for δ13C, δ18O and Δ47 for repeated CO2 extractions from air is in the range of 0.11‰ (SD), 0.18‰ (SD) and 0.02‰ (SD), respectively. This automated CO2 extraction and purification system will be used to analyse the clumped isotopic signature in atmospheric CO2 (tall tower, Cabauw, Netherlands) and to study the clumped isotopic fractionation during photosynthesis (leaf chamber experiments) and soil respiration. References Affek, H. P., Xu, X. & Eiler, J. M., Geochim. Cosmochim. Acta 71, 5033

  7. Malpractice claims related to tooth extractions.

    PubMed

    Koskela, Sanna; Suomalainen, Anni; Apajalahti, Satu; Ventä, Irja

    2017-03-01

    The aim of this study was to analyze malpractice claims related to tooth extractions in order to identify areas requiring emphasis and eventually to reduce the number of complications. We compiled a file of all malpractice claims related to tooth extractions (EBA code) between 1997 and 2010 from the Finnish Patient Insurance Centre. We then examined the data with respect to date, tooth, surgery, injury diagnosis, and the authority's decision on the case. The material consisted of 852 completed patient cases. Most of the teeth were third molars (66 %), followed by first molars (8 %), and second molars (7 %). The majority of claims were related to operative extraction (71 %), followed by ordinary extraction (17 %) and apicoectomy of a single-rooted tooth (7 %) or multi-rooted tooth (2 %). The most common diagnosis was injury of the lingual or inferior alveolar nerve. According to the authority's decision, the patient received compensation more often in cases involving a third molar than other teeth (56 vs. 46 %, P < 0.05). The removal of a mandibular third molar was the basis for the majority of malpractice claims. To reduce the numbers of lingual and inferior alveolar nerve injuries, the removal of mandibular third molars necessitates a recent, high-quality panoramic radiograph, preoperative assessment of the difficulty of removal, and awareness of the variable anatomical course of the lingual nerve.

  8. Automation of lidar-based hydrologic feature extraction workflows using GIS

    NASA Astrophysics Data System (ADS)

    Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.

    2016-10-01

    With the advent of LiDAR technology, higher-resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is in resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows that can consume substantial researcher time under manual execution and supervision. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands, using LiDAR datasets. These scripts are written in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently used as an aid for researchers in hydrologic feature extraction, simplifying the workflows, eliminating human errors when providing inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 for Streams, Irrigation Network, and Inland Wetlands extraction.
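
    As a rough illustration of what such a toolbox script can look like, the sketch below chains standard ArcGIS Spatial Analyst tools into one stream-extraction step; the workspace, file names, and accumulation threshold are hypothetical, and this is not the PHD Toolkit's actual code.

      # Hypothetical sketch of an automated stream-extraction step; tool names
      # are from the ArcGIS Spatial Analyst module (arcpy.sa).
      import arcpy
      from arcpy.sa import Con, Fill, FlowAccumulation, FlowDirection, StreamToFeature

      arcpy.CheckOutExtension("Spatial")
      arcpy.env.workspace = r"C:\phd_work"  # assumed workspace

      def extract_streams(dem_raster, out_streams, threshold=5000):
          """Derive a stream network from a LiDAR DEM (threshold is illustrative)."""
          filled = Fill(dem_raster)               # remove spurious sinks
          flow_dir = FlowDirection(filled)        # D8 flow directions
          flow_acc = FlowAccumulation(flow_dir)   # upstream contributing cells
          streams = Con(flow_acc > threshold, 1)  # keep high-accumulation cells
          StreamToFeature(streams, flow_dir, out_streams, "SIMPLIFY")

      extract_streams("lidar_dem.tif", "streams.shp")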

  9. Strategies for Medical Data Extraction and Presentation Part 3: Automated Context- and User-Specific Data Extraction.

    PubMed

    Reiner, Bruce

    2015-08-01

    In current medical practice, data extraction is limited by a number of factors including lack of information system integration, manual workflow, excessive workloads, and lack of standardized databases. The combined limitations result in clinically important data often being overlooked, which can adversely affect clinical outcomes through the introduction of medical error, diminished diagnostic confidence, excessive utilization of medical services, and delays in diagnosis and treatment planning. Current technology development is largely inflexible and static in nature, which adversely affects functionality and usage among the diverse and heterogeneous population of end users. In order to address existing limitations in medical data extraction, alternative technology development strategies need to be considered which incorporate the creation of end user profile groups (to account for occupational differences among end users), customization options (accounting for individual end user needs and preferences), and context specificity of data (taking into account both the task being performed and data subject matter). Creation of the proposed context- and user-specific data extraction and presentation templates offers a number of theoretical benefits including automation and improved workflow, completeness in data search, ability to track and verify data sources, creation of computerized decision support and learning tools, and establishment of data-driven best practice guidelines.

  10. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in the ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with rapid cross-section generation, the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels, using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
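
    A minimal sketch of the core sampling step, assuming the DEM is already loaded as a NumPy array and the centerline direction is known at each station point; the names and the nearest-neighbour sampling are illustrative, not the CMT implementation.

      import numpy as np

      def cross_section(dem, origin, direction, half_width=100.0, step=5.0, cell=5.0):
          """Return (station, elevation) pairs along a line perpendicular to the
          channel direction.

          dem      : 2-D array of elevations (row 0 at the grid origin; assumed)
          origin   : (x, y) centerline point in grid units
          direction: unit vector of the channel centerline at origin
          """
          # Rotate the centerline direction 90 degrees to get the cross-section axis.
          normal = np.array([-direction[1], direction[0]])
          stations = np.arange(-half_width, half_width + step, step)
          pts = np.asarray(origin, dtype=float) + stations[:, None] * normal
          # Nearest-neighbour sampling; a production tool would interpolate.
          cols = np.clip((pts[:, 0] / cell).astype(int), 0, dem.shape[1] - 1)
          rows = np.clip((pts[:, 1] / cell).astype(int), 0, dem.shape[0] - 1)
          return stations, dem[rows, cols]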

  11. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  12. Image-based continental shelf habitat mapping using novel automated data extraction techniques

    NASA Astrophysics Data System (ADS)

    Seiler, Jan; Friedman, Ariell; Steinberg, Daniel; Barrett, Neville; Williams, Alan; Holbrook, Neil J.

    2012-08-01

    We automatically mapped the distribution of temperate continental shelf rocky reef habitats with a high degree of confidence using colour, texture, rugosity and patchiness features extracted from images in conjunction with machine-learning algorithms. This demonstrated the potential of novel automation routines to expedite the complex and time-consuming process of seabed mapping. The random forests ensemble classifier outperformed other tree-based algorithms and also offered some valuable built-in model performance assessment tools. Habitat prediction using random forests performed most accurately when all 26 image-derived predictors were included in the model. This produced an overall habitat prediction accuracy of 84% (with a kappa statistic of 0.793) when compared to nine distinct habitat classes assigned by a human annotator. Predictions for three habitat classes were all within the 95% confidence intervals, indicating close agreement between observed and predicted habitat classes. Misclassified images were mostly unevenly, partially or insufficiently illuminated and came mostly from rugged terrains and during the autonomous underwater vehicle's obstacle avoidance manoeuvres. The remaining misclassified images were wrongly or inconsistently labelled by the human annotator. This study demonstrates the suitability of autonomous underwater vehicles to effectively sample benthic habitats and the ability of automated data handling techniques to extract and reliably process large volumes of seabed image data. Our methods for image feature extraction and classification are repeatable, cost-effective and well suited to studies that require non-extractive and/or co-located sampling, e.g. in marine reserves and for monitoring the recovery from physical impacts, e.g. from bottom fishing activities. The methods are transferable to other continental shelf areas and to other disciplines such as seabed geology.
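
    The classification stage described above maps naturally onto off-the-shelf tooling; the sketch below (our illustration, not the authors' code) trains a random forest on per-image feature vectors and reports the same accuracy and kappa metrics, with the input files assumed.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score, cohen_kappa_score
      from sklearn.model_selection import train_test_split

      # X: one row per seabed image, 26 colour/texture/rugosity/patchiness features
      # y: habitat class labels assigned by a human annotator (hypothetical files)
      X = np.load("image_features.npy")
      y = np.load("habitat_labels.npy")

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
      clf.fit(X_tr, y_tr)

      pred = clf.predict(X_te)
      print("accuracy:", accuracy_score(y_te, pred))
      print("kappa:", cohen_kappa_score(y_te, pred))  # the paper reports 0.793
      print("OOB estimate:", clf.oob_score_)          # a built-in RF self-check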

  13. Automated solid-phase extraction approaches for large scale biomonitoring studies.

    PubMed

    Kuklenyik, Zsuzsanna; Ye, Xiaoyun; Needham, Larry L; Calafat, Antonia M

    2009-01-01

    The main value in measuring environmental chemicals in biological specimens (i.e., biomonitoring) is the ability to minimize risk assessment uncertainties. The collection of biomonitoring data for risk assessment requires the analysis of a statistically significant number of samples from subjects with a significant prevalence of detectable internal dose levels. This paper addresses the practical laboratory challenges that arise from these statistical requirements: development of high throughput techniques that can handle, with high accuracy and precision, a large number of samples and can do a trace level analysis of multiple and diverse environmental chemicals (i.e., analytes). We review here examples of high throughput, automated solid-phase extraction methods developed in our laboratory for biomonitoring of analytes with representative hydrophobic properties and for typical biomonitoring matrices. We discuss key aspects of sample preparation, column, and solvent selection for off- and online extractions, and the so-called nuts and bolts of online column-switching systems necessary for developing, with minimal sample handling, rugged, automated methods.

  14. Automated sample preparation by pressurized liquid extraction-solid-phase extraction for the liquid chromatographic-mass spectrometric investigation of polyphenols in the brewing process.

    PubMed

    Papagiannopoulos, Menelaos; Mellenthin, Annett

    2002-11-08

    The analysis of polyphenols from solid plant or food samples usually requires laborious sample preparation. The liquid extraction of these compounds from the sample is compromised by apolar matrix interferences, an excess of which has to be eliminated prior to subsequent purification and separation. When pressurized liquid extraction is applied to polyphenols from hops, the sequential use of different solvents can partly overcome these problems. Initial extraction with pentane eliminates hydrophobic compounds such as hop resins and oils and enables straightforward automated on-line solid-phase extraction as part of an optimized LC-MS analysis.

  15. ChemDataExtractor: A Toolkit for Automated Extraction of Chemical Information from the Scientific Literature.

    PubMed

    Swain, Matthew C; Cole, Jacqueline M

    2016-10-24

    The emergence of "big data" initiatives has led to the need for tools that can automatically extract valuable chemical information from large volumes of unstructured data, such as the scientific literature. Since chemical information can be present in figures, tables, and textual paragraphs, successful information extraction often depends on the ability to interpret all of these domains simultaneously. We present a complete toolkit for the automated extraction of chemical entities and their associated properties, measurements, and relationships from scientific documents that can be used to populate structured chemical databases. Our system provides an extensible, chemistry-aware, natural language processing pipeline for tokenization, part-of-speech tagging, named entity recognition, and phrase parsing. Within this scope, we report improved performance for chemical named entity recognition through the use of unsupervised word clustering based on a massive corpus of chemistry articles. For phrase parsing and information extraction, we present the novel use of multiple rule-based grammars that are tailored for interpreting specific document domains such as textual paragraphs, captions, and tables. We also describe document-level processing to resolve data interdependencies and show that this is particularly necessary for the autogeneration of chemical databases since captions and tables commonly contain chemical identifiers and references that are defined elsewhere in the text. The performance of the toolkit to correctly extract various types of data was evaluated, affording an F-score of 93.4%, 86.8%, and 91.5% for extracting chemical identifiers, spectroscopic attributes, and chemical property attributes, respectively; set against the CHEMDNER chemical name extraction challenge, ChemDataExtractor yields a competitive F-score of 87.8%. All tools have been released under the MIT license and are available to download from http://www.chemdataextractor.org .
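
    For orientation, a typical use of the toolkit's documented Python interface looks roughly like the following; the input file name is hypothetical.

      from chemdataextractor import Document

      with open("paper.html", "rb") as f:
          doc = Document.from_file(f)

      print(doc.cems)  # chemical entity mentions recognised in the text

      # Structured records (identifiers, properties, spectra), resolved across
      # paragraphs, captions and tables
      for record in doc.records:
          print(record.serialize())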

  16. Extracting Dependence Relations from Unstructured Medical Text.

    PubMed

    Jochim, Charles; Lassoued, Yassine; Sacaleanu, Bogdan; Deleris, Léa A

    2015-01-01

    Dependence relations among disease and risk factors are a key ingredient in risk modeling and decision support models. Currently such information is either provided by experts (costly and time consuming) or extracted from data (if available). The published medical literature represents a promising source of such knowledge; however its manual processing is practically infeasible. While a number of solutions have been introduced to add structure to biomedical literature, none adequately recover dependence relations. The objective of our research is to build such an automatic dependence extraction solution, based on a sequence of natural language processing steps, which take as input a set of MEDLINE abstracts and provide as output a list of structured dependence statements. This paper presents a hybrid pipeline approach, a combination of rule-based and machine learning algorithms. We found that this approach outperforms a strictly rule-based approach.
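
    The rule-based half of such a hybrid pipeline can be pictured as surface patterns over sentences; the toy patterns below are our own illustration, not the authors' rules, and a real system would operate on parses rather than raw strings.

      import re

      PATTERNS = [
          (re.compile(r"(?P<a>[\w\s-]+?) is associated with (?P<b>[\w\s-]+)", re.I),
           "associated_with"),
          (re.compile(r"(?P<a>[\w\s-]+?) increases the risk of (?P<b>[\w\s-]+)", re.I),
           "risk_factor_for"),
      ]

      def extract_dependences(sentence):
          """Return (factor, relation, target) triples matched in one sentence."""
          triples = []
          for pattern, relation in PATTERNS:
              for m in pattern.finditer(sentence):
                  triples.append((m.group("a").strip(), relation, m.group("b").strip()))
          return triples

      print(extract_dependences("Smoking increases the risk of lung cancer."))
      # [('Smoking', 'risk_factor_for', 'lung cancer')]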

  17. Extraction of gravitational waves in numerical relativity.

    PubMed

    Bishop, Nigel T; Rezzolla, Luciano

    2016-01-01

    A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background as well as its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.
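
    As a concrete anchor for the first of the listed methods, the leading-order quadrupole formula relates the far-field transverse-traceless strain to the source's trace-free mass quadrupole moment Q_jk (standard textbook form, not specific to this review):

      % Leading-order quadrupole estimate of the strain at distance r
      h^{TT}_{jk}(t, r) = \frac{2G}{c^{4} r}\,\ddot{Q}^{TT}_{jk}\!\left(t - \frac{r}{c}\right)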

  18. Extraction of gravitational waves in numerical relativity

    NASA Astrophysics Data System (ADS)

    Bishop, Nigel T.; Rezzolla, Luciano

    2016-12-01

    A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background as well as its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.

  19. Semi-automated solid-phase extraction method for studying the biodegradation of ochratoxin A by human intestinal microbiota.

    PubMed

    Camel, Valérie; Ouethrani, Minale; Coudray, Cindy; Philippe, Catherine; Rabot, Sylvie

    2012-04-15

    A simple and rapid semi-automated solid-phase extraction (SPE) method has been developed for the analysis of ochratoxin A in aqueous matrices related to biodegradation experiments (namely digestive contents and faecal excreta), with a view to using this method to follow OTA biodegradation by human intestinal microbiota. The influence of extraction parameters that could affect semi-automated SPE efficiency was studied, using C18-silica as the sorbent and water as the simplest matrix, before further application to the matrices of interest. The conditions finally retained were as follows: 5-mL aqueous samples (pH 3) containing an organic modifier (20% ACN) were applied on 100-mg cartridges. After drying (9 mL of air), the cartridge was rinsed with 5 mL of H2O/ACN (80:20, v/v), before eluting the compounds with 3 × 1 mL of MeOH/THF (10:90, v/v). Acceptable recoveries and limits of quantification could be obtained considering the complexity of the investigated matrices and the low volumes sampled; this method was also suitable for the analysis of ochratoxin B in faecal extracts. The applicability of the method is illustrated by preliminary results of ochratoxin A biodegradation studies by human intestinal microbiota under simple in vitro conditions. Interestingly, partial degradation of ochratoxin A was observed, with efficiencies ranging from 14% to 47% after 72 h incubation. In addition, three phase I metabolites could be identified using high resolution mass spectrometry, namely ochratoxin α, open ochratoxin A and ochratoxin B.

  20. Automated hand thermal image segmentation and feature extraction in the evaluation of rheumatoid arthritis.

    PubMed

    Snekhalatha, U; Anburajan, M; Sowmiya, V; Venkatraman, B; Menaka, M

    2015-04-01

    The aims of the study were (1) to perform automated segmentation of hot spot regions of the hand from thermographs using the k-means algorithm and (2) to test the potential of features extracted from hand thermographs and measured skin temperature indices in the evaluation of rheumatoid arthritis. Thermal image analysis based on skin temperature measurement, the heat distribution index and the thermographic index was performed in rheumatoid arthritis patients and controls. The k-means algorithm was used for image segmentation, and features were extracted from the segmented output image using the gray-level co-occurrence matrix method. In the metacarpo-phalangeal, proximal inter-phalangeal and distal inter-phalangeal regions, the calculated percentage difference in mean skin temperature was higher in rheumatoid arthritis patients (5.3%, 4.9% and 4.8% in the MCP3, PIP3 and DIP3 joints, respectively) than in the normal group. The k-means algorithm applied to the thermal images provided better segmentation results for evaluating the disease. In the total population studied, the measured mean skin temperature of the MCP3 joint was highly correlated with most of the extracted features of the hand, and the extracted statistical feature parameters correlated significantly with skin surface temperature measurements and the measured temperature indices. Hence, the developed computer-aided diagnostic tool, implemented in MATLAB, could be used as a reliable method for diagnosing and analyzing arthritis in hand thermal images.
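
    Both stages have common open-source equivalents; the sketch below is an approximation in Python rather than the authors' MATLAB tool, segmenting the hottest cluster with k-means and computing gray-level co-occurrence features with scikit-image.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.cluster import KMeans

      def hot_spot_mask(thermal, k=3):
          """Cluster pixel intensities into k groups; the hottest cluster
          approximates the 'hot spot' regions of the hand."""
          km = KMeans(n_clusters=k, n_init=10, random_state=0)
          labels = km.fit_predict(thermal.reshape(-1, 1)).reshape(thermal.shape)
          hottest = np.argmax(km.cluster_centers_.ravel())
          return labels == hottest

      def texture_features(region_u8):
          """Gray-level co-occurrence features of a segmented 8-bit region."""
          glcm = graycomatrix(region_u8, distances=[1], angles=[0], levels=256,
                              symmetric=True, normed=True)
          return {p: graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")}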

  1. Automated Detection and Extraction of Coronal Dimmings from SDO/AIA Data

    NASA Astrophysics Data System (ADS)

    Davey, Alisdair R.; Attrill, G. D. R.; Wills-Davey, M. J.

    2010-05-01

    The sheer volume of data anticipated from the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) highlights the necessity for the development of automatic detection methods for various types of solar activity. Initially recognised in the 1970s, it is now well established that coronal dimmings are closely associated with coronal mass ejections (CMEs), and are particularly recognised as an indicator of front-side (halo) CMEs, which can be difficult to detect in white-light coronagraph data. An automated coronal dimming region detection and extraction algorithm removes visual observer bias from determination of physical quantities such as spatial location, area and volume. This allows reproducible, quantifiable results to be mined from very large datasets. The information derived may facilitate more reliable early space weather detection, as well as offering the potential for conducting large-sample studies focused on determining the geoeffectiveness of CMEs, coupled with analysis of their associated coronal dimmings. We present examples of dimming events extracted using our algorithm from existing EUV data, demonstrating the potential for the anticipated application to SDO/AIA data. Metadata returned by our algorithm include: location, area, volume, mass and dynamics of coronal dimmings. As well as running on historic datasets, this algorithm is capable of detecting and extracting coronal dimmings in near real-time. The coronal dimming detection and extraction algorithm described in this poster is part of the SDO/Computer Vision Center effort hosted at SAO (Martens et al., 2009). We acknowledge NASA grant NNH07AB97C.

  2. An automated system for liquid-liquid extraction in monosegmented flow analysis

    PubMed Central

    Facchin, Ileana; Pasquini, Celio

    1997-01-01

    An automated system to perform liquid-liquid extraction in monosegmented flow analysis is described. The system is controlled by a microcomputer that can track the localization of the aqueous monosegmented sample in the manifold. Optical switches are employed to sense the gas-liquid interface of the air bubbles that define the monosegment. The logical level changes, generated by the switches, are flagged by the computer through a home-made interface that also contains the analogue-to-digital converter for signal acquisition. The sequence of operations, necessary for a single extraction or for concentration of the analyte in the organic phase, is triggered by these logical transitions. The system was evaluated for extraction of Cd(II), Cu(II) and Zn(II) and concentration of Cd(II) from aqueous solutions at pH 9.9 (NH3/NH4Cl buffer) into chloroform containing PAN (1-(2-pyridylazo)-2-naphthol). The results show a mean repeatability of 3% (rsd) for a 2.0 mg l⁻¹ Cd(II) solution and a linear increase of the concentration factor for a 0.5 mg l⁻¹ Cd(II) solution observed for up to nine extraction cycles. PMID:18924792

  3. A Novel Validation Algorithm Allows for Automated Cell Tracking and the Extraction of Biologically Meaningful Parameters

    PubMed Central

    Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli

    2011-01-01

    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature has prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high

  4. Semi-automated extraction of landslides in Taiwan based on SPOT imagery and DEMs

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Hölbling, Daniel; Friedl, Barbara; Chen, Yi-Chin; Chang, Kang-Tsung

    2014-05-01

    The vast availability and improved quality of optical satellite data and digital elevation models (DEMs), as well as the need for complete and up-to-date landslide inventories at various spatial scales, have fostered the development of semi-automated landslide recognition systems. Among the tested approaches for designing such systems, object-based image analysis (OBIA) stood out as a highly promising methodology. OBIA offers a flexible, spatially enabled framework for effective landslide mapping. Most object-based landslide mapping systems, however, have been tailored to specific, mainly small-scale study areas or even to single landslides only. Even though reported mapping accuracies tend to be higher than for pixel-based approaches, accuracy values are still relatively low and depend on the particular study. There is still room to improve the applicability and objectivity of object-based landslide mapping systems. The presented study aims at developing a knowledge-based landslide mapping system implemented in an OBIA environment, i.e. Trimble eCognition. In comparison to previous knowledge-based approaches, the classification of segmentation-derived multi-scale image objects relies on digital landslide signatures. These signatures hold the common operational knowledge on digital landslide mapping, as reported by 25 Taiwanese landslide experts during personal semi-structured interviews. Specifically, the signatures include information on commonly used data layers, spectral and spatial features, and feature thresholds. The signatures guide the selection and implementation of mapping rules that were finally encoded in Cognition Network Language (CNL). Multi-scale image segmentation is optimized by using the improved Estimation of Scale Parameter (ESP) tool. The approach described above is developed and tested for mapping landslides in a sub-region of the Baichi catchment in Northern Taiwan based on SPOT imagery and a high-resolution DEM. An object

  5. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive-relief longitudinal subglacial bedform, LSB) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for semi-automated LSB mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on contouring a normalized local relief model to address LSBs on slopes, and the landform elements mask method relies on classifying landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method, and it performed best on a hydrology-based relief model derived from a multiple-direction flow-routing algorithm. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  6. Automated endmember extraction for subpixel classification of multispectral and hyperspectral data

    NASA Astrophysics Data System (ADS)

    Shrivastava, Deepali; Kumar, Vinay; Sharma, Richa U.

    2016-04-01

    Most multispectral sensors acquire data in several broad wavelength bands and are capable of extracting different land cover features, while hyperspectral sensors provide ample spectral data in narrow bandwidths (10–20 nm). The spectrally rich data enable the extraction of useful quantitative information from earth surface features. Endmembers are the pure spectral components extracted from remote sensing datasets. Most approaches for endmember extraction (EME) are manual and have been designed from a spectroscopic viewpoint, thus neglecting the spatial arrangement of the pixels. Therefore, EME techniques that consider both spectral and spatial aspects are required to find more accurate endmembers for subpixel classification. Multispectral (EO-1 ALI and Landsat 8 OLI) and hyperspectral (EO-1 Hyperion) datasets of the Udaipur region, Rajasthan, are used in this study. All of the above-mentioned datasets were preprocessed and converted to surface reflectance using Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH). Automated endmember extraction and subpixel classification were then carried out using Multiple Endmember Spectral Mixture Analysis (MESMA). Endmembers are selected from spectral libraries to be given as input to MESMA. To optimize these spectral libraries, three techniques were deployed for endmember selection: count-based endmember selection (CoB), endmember average RMSE (EAR) and minimum average spectral angle (MASA). The identified endmembers were then used to classify the multispectral and hyperspectral data using MESMA and SAM. The classified results show that spectrally similar features spread over a pixel are well classified by MESMA, whereas SAM was unable to do so.
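
    At its core, spectral mixture analysis solves a per-pixel linear system; the simplified sketch below performs plain least-squares unmixing against one fixed endmember set, whereas MESMA additionally searches over multiple candidate endmember combinations per pixel.

      import numpy as np

      def unmix(pixels, endmembers):
          """Estimate per-pixel endmember fractions.

          pixels     : (n_pixels, n_bands) surface reflectance
          endmembers : (n_endmembers, n_bands) pure spectra from a spectral library
          Returns    : (n_pixels, n_endmembers) abundance estimates
          """
          # Solve E^T a = p in the least-squares sense for each pixel p.
          fractions, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
          return fractions.T

      # Synthetic example: 2 endmembers, 6 bands, one mixed pixel
      E = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
                    [0.6, 0.5, 0.4, 0.3, 0.2, 0.1]])
      p = 0.7 * E[0] + 0.3 * E[1]
      print(unmix(p[None, :], E))  # approximately [[0.7, 0.3]]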

  7. Knowledge-based automated feature extraction to categorize secondary digitized radiographs

    NASA Astrophysics Data System (ADS)

    Kohnen, Michael; Vogelsang, Frank; Wein, Berthold B.; Kilbinger, Markus W.; Guenther, Rolf W.; Weiler, Frank; Bredno, Joerg; Dahmen, Joerg

    2000-06-01

    An essential part of the IRMA project (Image Retrieval in Medical Applications) is the categorization of digitized images into predefined classes using a combination of different independent features. To obtain an automated and content-based categorization, the following features are extracted from the image data: Fourier coefficients of normalized projections are computed to supply a scale- and translation-invariant description. Furthermore, histogram information and co-occurrence matrices are calculated to supply information about the gray value distribution and texture. The key part of the feature extraction, however, is the shape information of the objects, represented by an Active Shape Model. The Active Shape Model supports various form variations given by a representative training set; we use one particular Active Shape Model for each image class. These different Active Shape Models are matched on preprocessed image data with a simulated annealing optimization. The different extracted features were chosen with regard to the different characteristics of the image content, and together they give a comprehensive description of image content using only a few different features. Using this combination of different features for categorization results in a robust classification of image data, which is a basic step towards medical archives that allow retrieval results for queries of diagnostic relevance.

  8. Evaluation of automated urban surface water extraction from Sentinel-2A imagery using different water indices

    NASA Astrophysics Data System (ADS)

    Yang, Xiucheng; Chen, Li

    2017-04-01

    Urban surface water is characterized by complex surface conditions and the small size of water bodies, and the mapping of urban surface water is currently a challenging task. Moderate-resolution remote sensing satellites provide effective ways of monitoring surface water. This study conducts an exploratory evaluation of the performance of the newly available Sentinel-2A multispectral instrument (MSI) imagery for detecting urban surface water. An automatic framework that integrates pixel-level threshold adjustment and object-oriented segmentation is proposed. Based on the automated workflow, different combinations of visible, near infrared, and short-wave infrared bands in the Sentinel-2 image via different water indices are first compared. Results show that the object-level modified normalized difference water index (MNDWI with band 11) and the automated water extraction index are feasible for urban surface water mapping from Sentinel-2 MSI imagery. Moreover, comparative results are obtained utilizing the optimal MNDWI from Sentinel-2 and Landsat 8 images, respectively. Consequently, Sentinel-2 MSI achieves a kappa coefficient of 0.92, compared with 0.83 from the Landsat 8 Operational Land Imager.
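
    The index itself is simple; the sketch below computes MNDWI and a thresholded water mask, assuming Sentinel-2 band 3 (green) and band 11 (SWIR) resampled to a common grid, with the fixed threshold standing in for the study's automated adjustment.

      import numpy as np

      def mndwi(green, swir):
          """Modified Normalized Difference Water Index (Xu, 2006)."""
          green = green.astype(np.float64)
          swir = swir.astype(np.float64)
          return (green - swir) / (green + swir + 1e-12)  # avoid division by zero

      def water_mask(green, swir, threshold=0.0):
          """Pixels with MNDWI above the (tunable) threshold are labelled water."""
          return mndwi(green, swir) > threshold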

  9. Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Levenson, Richard M.; Rimm, David L.

    2003-05-01

    Recent developments in imaging technology mean that it is now possible to obtain high-resolution histological image data at multiple wavelengths. This allows pathologists to image specimens over a full spectrum, thereby revealing (often subtle) distinctions between different types of tissue. With this type of data, the spectral content of the specimens, combined with quantitative spatial feature characterization, may make it possible not only to identify the presence of an abnormality, but also to classify it accurately. However, such are the quantities and complexities of these data that, without new automated techniques to assist in the analysis, the information they contain will remain inaccessible to those who need it. We investigate the application of a recently developed system for the automated analysis of multi-/hyperspectral satellite image data to the problem of cancer detection from multispectral histopathology image data. The system provides a means for a human expert to provide training data simply by highlighting regions in an image using a computer mouse. Application of these feature extraction techniques to examples of both training and out-of-training-sample data demonstrates that these, as yet unoptimized, techniques already show promise in the discrimination between benign and malignant cells from a variety of samples.

  10. Online coupling of pressurized liquid extraction, solid-phase extraction and high-performance liquid chromatography for automated analysis of proanthocyanidins in malt.

    PubMed

    Papagiannopoulos, Menelaos; Zimmermann, Benno; Mellenthin, Annett; Krappe, Martin; Maio, Giovanni; Galensa, Rudolf

    2002-06-07

    A new instrumental setup for the automated extraction of solid samples by online coupling of pressurized liquid extraction, automated solid-phase extraction (SPE) and HPLC is presented. From the extraction to the chromatogram, no manual sample handling is required. Its application to the determination of proanthocyanidins in malt reduces time and manual work to a minimum compared with former manual methods: twenty samples can be processed within 24 h, compared with eight samples with the manual method. Using the features of the instrumental coupling, an optimized strategy for the SPE of proanthocyanidins from natural samples was developed, requiring no evaporation step, using commercial cartridges and delivering concentrated eluates. The recovery of five main malt proanthocyanidins was 97%, with a reproducibility of 5%. This new instrumental coupling should reduce time and costs, along with improving results, for a broad range of solid sample materials.

  11. Thoughts in flight: automation use and pilots' task-related and task-unrelated thought.

    PubMed

    Casner, Stephen M; Schooler, Jonathan W

    2014-05-01

    The objective was to examine the relationship between cockpit automation use and task-related and task-unrelated thought among airline pilots. Studies find that cockpit automation can sometimes relieve pilots of tedious control tasks and afford them more time to think ahead. Paradoxically, automation has also been shown to lead to lesser awareness. These results prompt the question of what pilots think about while using automation. A total of 18 airline pilots flew a Boeing 747-400 simulator while we recorded which of two levels of automation they used. As they worked, pilots were verbally probed about what they were thinking. Pilots were asked to categorize their thoughts as pertaining to (a) a specific task at hand, (b) higher-level flight-related thoughts (e.g., planning ahead), or (c) thoughts unrelated to the flight. Pilots' performance was also measured. Pilots reported a smaller percentage of task-at-hand thoughts (27% vs. 50%) and a greater percentage of higher-level flight-related thoughts (56% vs. 29%) when using the higher level of automation. However, when all was going according to plan, using either level of automation, pilots also reported a higher percentage of task-unrelated thoughts (21%) than they did when in the midst of an unsuccessful performance (7%). Task-unrelated thoughts peaked at 25% when pilots were not interacting with the automation. Although cockpit automation may provide pilots with more time to think, it may encourage pilots to reinvest only some of this mental free time in thinking flight-related thoughts. This research informs the design of human-automation systems that more meaningfully engage the human operator.

  12. Automated extraction of the cortical sulci based on a supervised learning approach.

    PubMed

    Tu, Zhuowen; Zheng, Songfeng; Yuille, Alan L; Reiss, Allan L; Dutton, Rebecca A; Lee, Agatha D; Galaburda, Albert M; Dinov, Ivo; Thompson, Paul M; Toga, Arthur W

    2007-04-01

    It is important to detect and extract the major cortical sulci from brain images, but manually annotating these sulci is a time-consuming task and requires the labeler to follow complex protocols. This paper proposes a learning-based algorithm for automated extraction of the major cortical sulci from magnetic resonance imaging (MRI) volumes and cortical surfaces. Unlike alternative methods for detecting the major cortical sulci, which use a small number of predefined rules based on properties of the cortical surface such as the mean curvature, our approach learns a discriminative model using the probabilistic boosting tree algorithm (PBT). PBT is a supervised learning approach which selects and combines hundreds of features at different scales, such as curvatures, gradients and shape index. Our method can be applied to either MRI volumes or cortical surfaces. It first outputs a probability map which indicates how likely each voxel lies on a major sulcal curve. Next, it applies dynamic programming to extract the best curve based on the probability map and a shape prior. The algorithm has almost no parameters to tune for extracting different major sulci. It is very fast (it runs in under 1 min per sulcus including the time to compute the discriminative models) due to efficient implementation of the features (e.g., using the integral volume to rapidly compute the responses of 3-D Haar filters). Because the algorithm can be applied to MRI volumes directly, there is no need to perform preprocessing such as tissue segmentation or mapping to a canonical space. The learning aspect of our approach makes the system very flexible and general. For illustration, we use volumes of the right hemisphere with several major cortical sulci manually labeled. The algorithm is tested on two groups of data, including some brains from patients with Williams Syndrome, and the results are very encouraging.
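
    The final dynamic-programming step can be pictured as a best-path search over the probability map; the toy sketch below extracts a best left-to-right curve under a crude smoothness constraint and is only an illustration of the idea, not the paper's algorithm.

      import numpy as np

      def best_curve(prob):
          """Best row per column, allowing the row to change by at most one
          between adjacent columns (a crude smoothness/shape prior)."""
          n_rows, n_cols = prob.shape
          score = np.full((n_rows, n_cols), -np.inf)
          back = np.zeros((n_rows, n_cols), dtype=int)
          score[:, 0] = np.log(prob[:, 0] + 1e-12)
          for c in range(1, n_cols):
              for r in range(n_rows):
                  lo, hi = max(0, r - 1), min(n_rows, r + 2)
                  prev = score[lo:hi, c - 1]
                  j = int(np.argmax(prev))
                  score[r, c] = prev[j] + np.log(prob[r, c] + 1e-12)
                  back[r, c] = lo + j
          # Trace back from the best final row
          rows = [int(np.argmax(score[:, -1]))]
          for c in range(n_cols - 1, 0, -1):
              rows.append(back[rows[-1], c])
          return rows[::-1]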

  13. Validation of the Total Visual Acuity Extraction Algorithm (TOVA) for Automated Extraction of Visual Acuity Data From Free Text, Unstructured Clinical Records

    PubMed Central

    Baughman, Douglas M.; Su, Grace L.; Tsui, Irena; Lee, Cecilia S.; Lee, Aaron Y.

    2017-01-01

    Purpose With increasing volumes of electronic health record data, algorithm-driven extraction may aid manual extraction. Visual acuity often is extracted manually in vision research. The total visual acuity extraction algorithm (TOVA) is presented and validated for automated extraction of visual acuity from free text, unstructured clinical notes. Methods Consecutive inpatient ophthalmology notes over an 8-year period from the University of Washington healthcare system in Seattle, WA were used for validation of TOVA. The total visual acuity extraction algorithm applied natural language processing to recognize Snellen visual acuity in free text notes and assign laterality. The best corrected measurement was determined for each eye and converted to logMAR. The algorithm was validated against manual extraction of a subset of notes. Results A total of 6266 clinical records were obtained giving 12,452 data points. In a subset of 644 validated notes, comparison of manually extracted data versus TOVA output showed 95% concordance. Interrater reliability testing gave κ statistics of 0.94 (95% confidence interval [CI], 0.89–0.99), 0.96 (95% CI, 0.94–0.98), 0.95 (95% CI, 0.92–0.98), and 0.94 (95% CI, 0.90–0.98) for acuity numerators, denominators, adjustments, and signs, respectively. Pearson correlation coefficient was 0.983. Linear regression showed an R2 of 0.966 (P < 0.0001). Conclusions The total visual acuity extraction algorithm is a novel tool for extraction of visual acuity from free text, unstructured clinical notes and provides an open source method of data extraction. Translational Relevance Automated visual acuity extraction through natural language processing can be a valuable tool for data extraction from free text ophthalmology notes. PMID:28299240
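
    The heart of such an algorithm is recognizing Snellen fractions and normalizing them; the toy sketch below (not TOVA itself, and ignoring laterality and non-numeric acuities) converts matches to logMAR via log10(denominator/numerator).

      import math
      import re

      SNELLEN = re.compile(r"\b(20)\s*/\s*(\d{2,3})\b")

      def best_logmar(note_text):
          """Return the best (lowest) logMAR among Snellen acuities in the text."""
          values = [math.log10(int(d) / int(n))
                    for n, d in SNELLEN.findall(note_text)]
          return min(values) if values else None

      print(best_logmar("VA OD 20/40, OS 20/20 with correction"))  # 0.0 (20/20)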

  14. BLINKER: Automated Extraction of Ocular Indices from EEG Enabling Large-Scale Analysis.

    PubMed

    Kleifges, Kelly; Bigdely-Shamlo, Nima; Kerick, Scott E; Robbins, Kay A

    2017-01-01

    Electroencephalography (EEG) offers a platform for studying the relationships between behavioral measures, such as blink rate and duration, and neural correlates of fatigue and attention, such as theta and alpha band power. Further, the existence of EEG studies covering a variety of subjects and tasks provides opportunities for the community to better characterize variability of these measures across tasks and subjects. We have implemented an automated pipeline (BLINKER) for extracting ocular indices such as blink rate, blink duration, and blink velocity-amplitude ratios from EEG channels, EOG channels, and/or independent components (ICs). To illustrate the use of our approach, we have applied the pipeline to a large corpus of EEG data (comprising more than 2000 datasets acquired at eight different laboratories) in order to characterize variability of certain ocular indicators across subjects. We also investigate the dependence of ocular indices on task in a shooter study. We have implemented our algorithms in a freely available MATLAB toolbox called BLINKER. The toolbox, which is easy to use and can be applied to collections of data without user intervention, can automatically discover which channels or ICs capture blinks. The tools extract blinks, calculate common ocular indices, generate a report for each dataset, dump labeled images of the individual blinks, and provide summary statistics across collections. Users can run BLINKER as a script or as a plugin for EEGLAB. The toolbox is available at https://github.com/VisLab/EEG-Blinks. User documentation and examples appear at http://vislab.github.io/EEG-Blinks/.

  15. Automated data extraction from in situ protein stable isotope probing studies

    SciTech Connect

    Slysz, Gordon W.; Steinke, Laurey A.; Ward, David M.; Klatt, Christian G.; Clauss, Therese RW; Purvine, Samuel O.; Payne, Samuel H.; Anderson, Gordon A.; Smith, Richard D.; Lipton, Mary S.

    2014-01-27

    Protein stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism, a key application will be in situ studies of microbial communities under conditions that result in small degrees of partial labeling. One hurdle restricting large scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large scale extraction and visualization of data from short term (3 h) protein-SIP experiments performed in situ on Yellowstone phototrophic bacterial mats. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification.

  16. BLINKER: Automated Extraction of Ocular Indices from EEG Enabling Large-Scale Analysis

    PubMed Central

    Kleifges, Kelly; Bigdely-Shamlo, Nima; Kerick, Scott E.; Robbins, Kay A.

    2017-01-01

    Electroencephalography (EEG) offers a platform for studying the relationships between behavioral measures, such as blink rate and duration, and neural correlates of fatigue and attention, such as theta and alpha band power. Further, the existence of EEG studies covering a variety of subjects and tasks provides opportunities for the community to better characterize variability of these measures across tasks and subjects. We have implemented an automated pipeline (BLINKER) for extracting ocular indices such as blink rate, blink duration, and blink velocity-amplitude ratios from EEG channels, EOG channels, and/or independent components (ICs). To illustrate the use of our approach, we have applied the pipeline to a large corpus of EEG data (comprising more than 2000 datasets acquired at eight different laboratories) in order to characterize variability of certain ocular indicators across subjects. We also investigate the dependence of ocular indices on task in a shooter study. We have implemented our algorithms in a freely available MATLAB toolbox called BLINKER. The toolbox, which is easy to use and can be applied to collections of data without user intervention, can automatically discover which channels or ICs capture blinks. The tools extract blinks, calculate common ocular indices, generate a report for each dataset, dump labeled images of the individual blinks, and provide summary statistics across collections. Users can run BLINKER as a script or as a plugin for EEGLAB. The toolbox is available at https://github.com/VisLab/EEG-Blinks. User documentation and examples appear at http://vislab.github.io/EEG-Blinks/. PMID:28217081

  17. Automated data extraction from in situ protein-stable isotope probing studies.

    PubMed

    Slysz, Gordon W; Steinke, Laurey; Ward, David M; Klatt, Christian G; Clauss, Therese R W; Purvine, Samuel O; Payne, Samuel H; Anderson, Gordon A; Smith, Richard D; Lipton, Mary S

    2014-03-07

    Protein-stable isotope probing (protein-SIP) has strong potential for revealing key metabolizing taxa in complex microbial communities. While most protein-SIP work to date has been performed under controlled laboratory conditions to allow extensive isotope labeling of the target organism(s), a key application will be in situ studies of microbial communities for short periods of time under natural conditions that result in small degrees of partial labeling. One hurdle restricting large-scale in situ protein-SIP studies is the lack of algorithms and software for automated data processing of the massive data sets resulting from such studies. In response, we developed Stable Isotope Probing Protein Extraction Resources software (SIPPER) and applied it for large-scale extraction and visualization of data from short-term (3 h) protein-SIP experiments performed in situ on phototrophic bacterial mats isolated from Yellowstone National Park. Several metrics incorporated into the software allow it to support exhaustive analysis of the complex composite isotopic envelope observed as a result of low amounts of partial label incorporation. SIPPER also enables the detection of labeled molecular species without the need for any prior identification.
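
    To make the envelope analysis concrete, here is a minimal Python sketch of one underlying idea: compare an observed isotopic envelope against a theoretical natural-abundance envelope and score the excess intensity in heavier isotopologues as evidence of partial label incorporation. The binomial envelope model and the scoring function are illustrative assumptions, not SIPPER's actual metrics.

        import numpy as np
        from scipy.stats import binom

        def natural_envelope(n_carbons, n_peaks=8, p_13c=0.0107):
            """Approximate natural-abundance envelope from the carbon count alone."""
            env = binom.pmf(np.arange(n_peaks), n_carbons, p_13c)
            return env / env.sum()

        def excess_heavy_fraction(observed, theoretical):
            """Fraction of observed intensity not explained by the natural envelope."""
            observed = np.asarray(observed, dtype=float)
            observed = observed / observed.sum()
            resid = np.clip(observed - theoretical[: observed.size], 0, None)
            return resid.sum()

        theo = natural_envelope(n_carbons=50)
        unlabeled = theo + np.random.rand(8) * 1e-3   # ~natural abundance
        labeled = theo.copy()
        labeled[2:5] += [0.03, 0.05, 0.02]            # partial 13C incorporation

        print("unlabeled score:", round(excess_heavy_fraction(unlabeled, theo), 3))
        print("labeled score:  ", round(excess_heavy_fraction(labeled, theo), 3))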

  18. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports.

    PubMed

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D

    2017-05-03

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. We observed that the deep learning models consistently outperformed the conventional term frequency vector space approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
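
    For readers unfamiliar with the model class, the following Python (PyTorch) sketch shows a minimal 1-D convolutional text classifier of the kind evaluated in the study. The vocabulary size, embedding width, filter widths, and 12-class output layer are placeholders; the paper's actual architecture may differ.

        import torch
        import torch.nn as nn

        class TextCNN(nn.Module):
            def __init__(self, vocab=5000, emb=128, n_classes=12):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb)
                # Parallel convolutions over 3-, 4-, and 5-token windows.
                self.convs = nn.ModuleList(nn.Conv1d(emb, 100, k) for k in (3, 4, 5))
                self.fc = nn.Linear(300, n_classes)

            def forward(self, tokens):                    # tokens: (batch, seq_len)
                x = self.embed(tokens).transpose(1, 2)    # -> (batch, emb, seq_len)
                pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
                return self.fc(torch.cat(pooled, dim=1))  # class logits

        model = TextCNN()
        dummy = torch.randint(0, 5000, (4, 400))          # 4 reports, 400 tokens each
        print(model(dummy).shape)                         # torch.Size([4, 12])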

  19. Rapid and automated sample preparation for nucleic acid extraction on a microfluidic CD (compact disk)

    NASA Astrophysics Data System (ADS)

    Kim, Jitae; Kido, Horacio; Zoval, Jim V.; Gagné, Dominic; Peytavi, Régis; Picard, François J.; Bastien, Martine; Boissinot, Maurice; Bergeron, Michel G.; Madou, Marc J.

    2006-01-01

    Rapid and automated preparation of PCR (polymerase chain reaction)-ready genomic DNA was demonstrated on a multiplexed CD (compact disk) platform by using hard-to-lyse bacterial spores. Cell disruption is carried out while bead-cell suspensions are pushed back and forth in center-tapered lysing chambers by angular oscillation of the disk (the keystone effect). During this lysis period, the cell suspensions are securely held within the lysing chambers by heat-activated wax valves. Upon application of remote heat to the disk in motion, the wax valves release lysate solutions into centrifuge chambers, where cell debris is separated by an elevated rotation of the disk. Only debris-free DNA extract is then transferred to collection chambers by capillary-assisted siphon and collected for heating that inactivates PCR inhibitors. Lysing capacity was evaluated using a real-time PCR assay to monitor the efficiency of Bacillus globigii spore lysis. PCR analysis showed that a 5-min CD lysis run gave spore lysis efficiency similar to that obtained with a popular commercial DNA extraction kit (the IDI-lysis kit from GeneOhm Sciences Inc.), which is highly efficient for microbial cell and spore lysis. This work will contribute to the development of an integrated CD-based assay for rapid diagnosis of infectious diseases.

  20. FBI DRUGFIRE program: the development and deployment of an automated firearms identification system to support serial, gang, and drug-related shooting investigations

    NASA Astrophysics Data System (ADS)

    Sibert, Robert W.

    1994-03-01

    The FBI DRUGFIRE Program entails the continuing phased development and deployment of a scalable automated firearms identification system. The first phase of this system, a networked, database-driven firearms evidence imaging system, has been operational for approximately one year and has demonstrated its effectiveness in facilitating the sharing and linking of firearms evidence collected in serial, gang, and drug-related shooting investigations. However, there is a pressing need for development of enhancements which will more fully automate the system so that it is capable of processing very large volumes of firearms evidence. These enhancements would provide automated image analysis and pattern matching functionalities. Existing `spin off' technologies need to be integrated into the present DRUGFIRE system to automate the 3-D mensuration, registration, feature extraction, and matching of the microtopographical surface features imprinted on the primers of fired casings during firing.

  1. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents.

    PubMed

    Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna

    2015-12-01

    First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever-increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited for this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by using the latter approach are now available, but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. When looking at the percentage of chemical structures successfully extracted from a set of patents, using SciFinder as our reference, 59 and 51% were also found in our comparison in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as the starting point, i.e. establishing if for a list of compounds the databases provide the links between chemical structures and patents they appear in, we obtained similar results. SureChEMBL and IBM SIIP found 62 and 59%, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60% of links between chemical structure and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type and provide a useful framework for more studies like ours as well as guide future developments of the workflows used for the automated extraction of chemical structures from patents. The challenges we have encountered …

  2. Streamlining DNA Barcoding Protocols: Automated DNA Extraction and a New cox1 Primer in Arachnid Systematics

    PubMed Central

    Vidergar, Nina; Toplak, Nataša; Kuntner, Matjaž

    2014-01-01

    Background DNA barcoding is a popular tool in taxonomic and phylogenetic studies, but for most animal lineages protocols for obtaining the barcoding sequences—mitochondrial cytochrome C oxidase subunit I (cox1, also known as CO1)—are not standardized. Our aim was to explore an optimal strategy for arachnids, focusing on the most species-rich lineage, spiders, by (1) improving an automated DNA extraction protocol, (2) testing the performance of commonly used primer combinations, and (3) developing a new cox1 primer suitable for more efficient alignment and phylogenetic analyses. Methodology We used exemplars of 15 species from all major spider clades, processed a range of spider tissues of varying size and quality, optimized genomic DNA extraction using the MagMAX Express magnetic particle processor—an automated high-throughput DNA extraction system—and tested cox1 amplification protocols emphasizing the standard barcoding region using ten routinely employed primer pairs. Results The best results were obtained with the commonly used Folmer primers (LCO1490/HCO2198) that capture the standard barcode region, and with the C1-J-2183/C1-N-2776 primer pair that amplifies its extension. However, C1-J-2183 is designed too close to HCO2198 for well-interpreted, continuous sequence data, and in practice the resulting sequences from the two primer pairs rarely overlap. We therefore designed a new forward primer, C1-J-2123, 60 base pairs upstream of the C1-J-2183 binding site. The success rate of this new primer (93%) matched that of C1-J-2183. Conclusions The use of C1-J-2123 allows full, indel-free overlap of sequences obtained with the standard Folmer primers and with the C1-J-2123 primer pair. Our preliminary tests suggest that in addition to spiders, C1-J-2123 will also perform well in other arachnids and several other invertebrates. We provide optimal PCR protocols for these primer sets, and recommend using them for systematic efforts beyond DNA barcoding. PMID:25415202

  3. Automated hollow fiber microextraction based on two immiscible organic solvents for the extraction of two hormonal drugs.

    PubMed

    Tajik, Mohammad; Yamini, Yadollah; Esrafili, Ali; Ebrahimpour, Behnam

    2015-03-25

    In this research, a rapid, efficient, and automated method based on hollow fiber liquid-phase microextraction (HF-LPME) followed by high performance liquid chromatography (HPLC) with UV-vis detection was applied for the preconcentration and determination of two hormonal drugs (megestrol acetate and levonorgestrel) in water and urinary samples. n-Dodecane was used as the supported liquid membrane (SLM), and methanol was used as the acceptor phase in the hollow fiber lumen. The effects of different parameters, such as fiber length, extraction time, stirring rate, and ionic strength, on the extraction efficiency were investigated using modified simplex and central composite design as the screening and optimization methods, respectively. The composition of the SLM and the type of acceptor phase were optimized separately. For adjustment of the SLM composition, trioctylphosphine oxide (TOPO) was chosen. Under optimized conditions, the calibration curves were linear (r² > 0.997) in the range of 0.5-200 μg L⁻¹. The LOD for both drugs was 0.25 μg L⁻¹. The applicability of this technique was examined by analyzing the drugs in water and urine samples. The relative recoveries of the drugs were in the range of 86.2-102.3%, showing the capability of the method for the determination of the drugs in various matrices. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Automated fast extraction of nitrated polycyclic aromatic hydrocarbons from soil by focused microwave-assisted Soxhlet extraction prior to gas chromatography with electron-capture detection.

    PubMed

    Priego-Capote, F; Luque-García, J L; Luque de Castro, M D

    2003-04-25

    An approach for the automated fast extraction of nitrated polycyclic aromatic hydrocarbons (nitro-PAHs) from soil, using a focused microwave-assisted Soxhlet extractor, is proposed. The main factors affecting the extraction efficiency (namely irradiation power, irradiation time, number of cycles, and extractant volume) were optimised by using experimental design methodology. The reduction of the nitro-PAHs to amino-PAHs and the derivatisation of the reduced analytes with heptafluorobutyric anhydride were mandatory prior to the separation-determination step by gas chromatography with electron-capture detection. The proposed approach has allowed the extraction of these pollutants from spiked and "real" contaminated soils with extraction efficiencies similar to those provided by the US Environmental Protection Agency methods 3540-8091, but with a drastic reduction in both the extraction time and sample handling, and using less organic solvent, as 75-85% of it was recycled.

  5. Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis

    NASA Astrophysics Data System (ADS)

    Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku

    2014-03-01

    In this paper, we propose an automated biliary tract extraction method from abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and its intensities are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter based on a local intensity structure analysis method using the eigenvalues of the Hessian matrix for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 cases of CT volumes. The average Dice coefficient of the extraction results was 66.7%.
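
    A minimal Python sketch of a Hessian-based line filter, in the spirit of the DLSE step described above: the image is negated so dark tubes become bright, Gaussian second derivatives give the Hessian, and the smaller eigenvalue is used as the line response. The response function and the toy image are simplifications, not the authors' exact filter.

        import numpy as np
        from scipy import ndimage

        def dark_line_response(image, sigma=2.0):
            inv = -image.astype(float)                 # dark structures -> bright
            # Second derivatives via Gaussian filtering at the chosen scale.
            Hxx = ndimage.gaussian_filter(inv, sigma, order=(0, 2))
            Hyy = ndimage.gaussian_filter(inv, sigma, order=(2, 0))
            Hxy = ndimage.gaussian_filter(inv, sigma, order=(1, 1))
            # Eigenvalues of the 2x2 Hessian at every pixel.
            tr, det = Hxx + Hyy, Hxx * Hyy - Hxy ** 2
            disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
            lam2 = tr / 2 - disc                       # strongly negative across a line
            return np.maximum(-lam2, 0)                # high response on thin dark lines

        img = np.full((64, 64), 200.0)
        img[30:33, 8:56] = 80                          # a dark 3-pixel-wide "duct"
        resp = dark_line_response(img)
        print(resp[31, 32] > resp[10, 10])             # True: line beats background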

  6. Automated feature extraction for retinal vascular biometry in zebrafish using OCT angiography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; Rao, Gopikrishna M.; Desai, Vineet; Tao, Yuankai K.

    2017-02-01

    Zebrafish have been identified as an ideal model for angiogenesis because of anatomical and functional similarities with other vertebrates. The scale and complexity of zebrafish assays are limited by the need to manually treat and serially screen animals, and recent technological advances have focused on automation and improving throughput. Here, we use optical coherence tomography (OCT) and OCT angiography (OCT-A) to perform noninvasive, in vivo imaging of retinal vasculature in zebrafish. OCT-A summed voxel projections were low pass filtered and skeletonized to create an en face vascular map prior to connectivity analysis. Vascular segmentation was referenced to the optic nerve head (ONH), which was identified by automatically segmenting the retinal pigment epithelium boundary on the OCT structural volume. The first vessel branch generation was identified as skeleton segments with branch points closest to the ONH, and subsequent generations were found iteratively by expanding the search space outwards from the ONH. Biometric parameters, including length, curvature, and branch angle of each vessel segment were calculated and grouped by branch generation. Despite manual handling and alignment of each animal over multiple time points, we observe distinct qualitative patterns that enable unique identification of each eye from individual animals. We believe this OCT-based retinal biometry method can be applied for automated animal identification and handling in high-throughput organism-level pharmacological assays and genetic screens. In addition, these extracted features may enable high-resolution quantification of longitudinal vascular changes as a method for studying zebrafish models of retinal neovascularization and vascular remodeling.
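
    The generation-labelling step can be illustrated with a minimal Python sketch: treat skeleton segments as graph edges and assign branch generations by breadth-first expansion outward from the optic nerve head (ONH). The toy vessel graph below is an assumption standing in for a skeletonized OCT-A projection.

        from collections import deque

        # Skeleton segments as edges between junction/endpoint nodes.
        edges = [("onh", "a"), ("a", "b"), ("a", "c"), ("c", "d"), ("c", "e")]
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)

        # Breadth-first expansion: generation 1 = segments touching the ONH,
        # generation n = segments one junction further out.
        generation, queue, seen = {}, deque([("onh", 0)]), {"onh"}
        while queue:
            node, gen = queue.popleft()
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    generation[(node, nxt)] = gen + 1   # segment's branch generation
                    queue.append((nxt, gen + 1))

        for seg, gen in sorted(generation.items(), key=lambda kv: kv[1]):
            print(f"segment {seg}: generation {gen}")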

  7. Californian demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2013-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes, and the location and extent of fields is important to establish the area of land utilized for agricultural yield prediction, resource allocation, and economic planning. To date, field objects have not been extracted from satellite data over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. We present a fully automated computational methodology to extract agricultural fields from 30 m Web Enabled Landsat Data (WELD) time series and results for approximately 250,000 square kilometers (eleven 150 x 150 km WELD tiles) encompassing all the major agricultural areas of California. The extracted fields, including rectangular, circular, and irregularly shaped fields, are evaluated by comparison with manually interpreted Landsat field objects. Validation results are presented in terms of standard confusion matrix accuracy measures and also the degree of field object over-segmentation, under-segmentation, fragmentation, and shape distortion. The apparent success of the presented field extraction methodology is due to several factors. First, the use of multi-temporal Landsat data, as opposed to single Landsat acquisitions, which enables crop rotations and inter-annual variability in the state of the vegetation to be accommodated and provides more opportunities for cloud-free, non-missing, and atmospherically uncontaminated surface observations. Second, the adoption of an object-based approach, namely the variational region-based geometric active contour method, which enables robust segmentation with only a small number of parameters and requires no training data collection. Third, the use of a watershed algorithm to decompose connected segments belonging to multiple fields into coherent isolated field segments, and a geometry-based algorithm to detect and associate parts of …
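
    A minimal Python sketch of the watershed decomposition step, under simplifying assumptions: a connected binary segment spanning two fields is split by running a marker-based watershed on its distance transform (scikit-image and SciPy stand in for the authors' implementation; the toy mask is invented).

        import numpy as np
        from scipy import ndimage
        from skimage.segmentation import watershed

        mask = np.zeros((60, 100), dtype=bool)
        mask[10:50, 5:45] = True               # field A
        mask[10:50, 55:95] = True              # field B
        mask[28:32, 45:55] = True              # thin bridge erroneously joining them

        distance = ndimage.distance_transform_edt(mask)
        markers, _ = ndimage.label(distance > 10)          # cores of individual fields
        fields = watershed(-distance, markers, mask=mask)  # split along the bridge
        print("separated field segments:", fields.max())   # expect 2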

  8. Semi-automated extraction of microbial DNA from feces for qPCR and phylogenetic microarray analysis.

    PubMed

    Nylund, Lotta; Heilig, Hans G H J; Salminen, Seppo; de Vos, Willem M; Satokari, Reetta

    2010-11-01

    The human gastrointestinal tract (GI-tract) harbors a complex microbial ecosystem, largely composed of so far uncultured species, which can be detected only by using techniques such as PCR and by different hybridization techniques including phylogenetic microarrays. Manual DNA extraction from feces is laborious and is one of the bottlenecks holding up the application of microarray and other DNA-based techniques in large cohort studies. In order to enhance the DNA extraction step we combined mechanical disruption of microbial cells by repeated bead-beating (RBB) with two automated DNA extraction methods, KingFisher with InviMag Stool DNA kit (KF) and NucliSENS easyMAG (NeM). The semi-automated DNA extraction methods, RBB combined with either KF or NeM, were compared to the manual extraction method currently considered the most suited method for fecal DNA extraction by assessing the yield of 16S rRNA gene copies by qPCR and total microbiota composition by the HITChip, a phylogenetic microarray. Parallel DNA extractions from infant fecal samples by using the three methods showed that the KF and manual methods gave comparable yields of 16S rRNA gene copies as assessed by qPCR, whereas NeM showed a significantly lower yield. All three methods showed highly similar microbiota profiles in HITChip. Both KF and NeM were found to be suitable methods for DNA extraction from fecal samples after the mechanical disruption of microbial cells by bead-beating. The semi-automated methods could be performed in half of the time required for the manual protocol, while being comparable to the manual method in terms of reagent costs. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Determination of 21 drugs in oral fluid using fully automated supported liquid extraction and UHPLC-MS/MS.

    PubMed

    Valen, Anja; Leere Øiestad, Åse Marit; Strand, Dag Helge; Skari, Ragnhild; Berg, Thomas

    2016-07-28

    Collection of oral fluid (OF) is easy and non-invasive compared to the collection of urine and blood, and interest in OF for drug screening and diagnostic purposes is increasing. A high-throughput ultra-high-performance liquid chromatography-tandem mass spectrometry method for determination of 21 drugs in OF using fully automated 96-well plate supported liquid extraction for sample preparation is presented. The method contains a selection of classic drugs of abuse, including amphetamines, cocaine, cannabis, opioids, and benzodiazepines. The method was fully validated for 200 μL OF/buffer mix using an Intercept OF sampling kit; validation included linearity, sensitivity, precision, accuracy, extraction recovery, matrix effects, stability, and carry-over. Inter-assay precision (RSD) and accuracy (relative error) were <15% and -13 to 5%, respectively, for all compounds at concentrations equal to or higher than the lower limit of quantification. Extraction recoveries were between 58 and 76% (RSD < 8%), except for tetrahydrocannabinol and three 7-amino benzodiazepine metabolites, with recoveries between 23 and 33% (RSD between 51 and 52% and 11 and 25%, respectively). Ion enhancement or ion suppression effects were observed for a few compounds; however, to a large degree they were compensated for by the internal standards used. Deuterium-labelled and ¹³C-labelled internal standards were used for 8 and 11 of the compounds, respectively. In a comparison between Intercept and Quantisal OF kits, better recoveries and fewer matrix effects were observed for some compounds using Quantisal. The method is sensitive and robust for its purposes and has been used successfully since February 2015 for analysis of Intercept OF samples from 2600 cases in a 12-month period. Copyright © 2016 John Wiley & Sons, Ltd.

  10. A comparison of methods for forensic DNA extraction: Chelex-100® and the QIAGEN DNA Investigator Kit (manual and automated).

    PubMed

    Phillips, Kirsty; McCallum, Nicola; Welch, Lindsey

    2012-03-01

    Efficient isolation of DNA from a sample is the basis for successful forensic DNA profiling. There are many DNA extraction methods available and they vary in their ability to efficiently extract the DNA; as well as in processing time, operator intervention, contamination risk and ease of use. In recent years, automated robots have been made available which speed up processing time and decrease the amount of operator input. This project was set up to investigate the efficiency of three DNA extraction methods, two manual (Chelex®-100 and the QIAGEN DNA Investigator Kit) and one automated (QIAcube), using both buccal cells and blood stains as the DNA source. Extracted DNA was quantified using real-time PCR in order to assess the amount of DNA present in each sample. Selected samples were then amplified using the AmpFlSTR SGM Plus amplification kit. The results suggested that there was no statistical difference between results gained for the different methods investigated, but the automated QIAcube robot made sample processing much simpler and quicker without introducing DNA contamination.

  11. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a valid new general-purpose feature extraction method for various tasks in spectral data analysis.
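
    As a generic illustration of analytical, layer-wise feature learning (not the authors' exact network or learning rule), the following Python sketch stacks linear autoencoders solved in closed form by truncated SVD, with a tanh nonlinearity between layers. The layer widths and synthetic spectra are assumptions.

        import numpy as np

        def fit_layer(X, n_hidden):
            """Closed-form encoder: top right-singular vectors of the centered data."""
            mu = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
            return mu, Vt[:n_hidden].T          # (n_features, n_hidden) projection

        def encode(X, mu, W):
            return np.tanh((X - mu) @ W)

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(500, 3000))  # 500 spectra x 3000 wavelength bins

        layers, X = [], spectra
        for width in (256, 64, 16):             # progressively more abstract features
            mu, W = fit_layer(X, width)
            layers.append((mu, W))
            X = encode(X, mu, W)

        print("feature shape:", X.shape)        # (500, 16)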

  12. A methodology for automated CPA extraction using liver biopsy image analysis and machine learning techniques.

    PubMed

    Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas

    2017-03-01

    Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in chronic hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis techniques has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods for computerized image analysis for CPA assessment has proven to be a major limitation. The current work introduces a three-stage fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stage of the methodology, respectively. Due to the existence of several types of tissue regions in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images were employed, obtaining a mean absolute CPA error of 1.31% and a concordance correlation coefficient of 0.923. The proposed methodology is designed to (i) avoid manual threshold-based and region selection processes, widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
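
    A minimal Python sketch of the clustering stages, under strong simplifications: pixel colors are clustered to separate background, tissue, and collagen, and CPA is reported as collagen area over tissue area. The synthetic image and the color-based cluster identification are assumptions; the published pipeline additionally uses trained classifiers to exclude non-liver regions.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        h, w = 120, 120
        img = np.tile([0.9, 0.9, 0.9], (h, w, 1))            # white background
        img[20:100, 20:100] = [0.8, 0.4, 0.4]                # pink-ish tissue
        img[55:65, 20:100] = [0.3, 0.3, 0.8]                 # blue-ish collagen band
        img += rng.normal(0, 0.02, img.shape)

        pixels = img.reshape(-1, 3)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)

        # Identify clusters by mean color: background is brightest,
        # collagen is the most blue-dominant of the remaining clusters.
        means = np.array([pixels[labels == k].mean(axis=0) for k in range(3)])
        background = means.sum(axis=1).argmax()
        rest = [k for k in range(3) if k != background]
        collagen = max(rest, key=lambda k: means[k, 2] - means[k, 0])

        tissue_px = np.isin(labels, rest).sum()
        cpa = 100.0 * (labels == collagen).sum() / tissue_px
        print(f"CPA ~ {cpa:.1f}% of tissue area")            # ~12.5% here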

  13. A multi-atlas based method for automated anatomical rat brain MRI segmentation and extraction of PET activity.

    PubMed

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time-consuming and operator-dependent. Automated segmentation, as usually performed via single atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. High-resolution 7 T 2D T2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7 of the 12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: (1) registering individual MRIs to the template and using a single atlas (SA), (2) using the maximum probability atlas (MP), and (3) registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes, and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with multi-atlas methods than with the SA method. Multi-atlas methods outperform SA for automated anatomical brain segmentation and PET measure extraction. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid, reproducible VOI analyses.
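
    The fusion step of the PF method can be illustrated with a minimal Python sketch: after each atlas's labels have been propagated into the target space, fuse them by per-voxel majority vote. The tiny 4 x 4 "label volumes" below are placeholders for registered label images.

        import numpy as np

        propagated = np.stack([                      # 3 atlases, 4x4 label maps
            np.array([[0, 0, 1, 1],
                      [0, 2, 1, 1],
                      [2, 2, 2, 1],
                      [2, 2, 2, 2]]),
            np.array([[0, 0, 1, 1],
                      [0, 2, 2, 1],
                      [2, 2, 2, 1],
                      [2, 2, 2, 2]]),
            np.array([[0, 1, 1, 1],
                      [0, 2, 1, 1],
                      [0, 2, 2, 1],
                      [2, 2, 2, 2]]),
        ])

        def majority_vote(label_maps):
            """Per-voxel mode across atlases (ties resolved by lowest label)."""
            n_labels = label_maps.max() + 1
            votes = np.stack([(label_maps == k).sum(axis=0) for k in range(n_labels)])
            return votes.argmax(axis=0)

        print(majority_vote(propagated))             # fused 4x4 label map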

  14. Sensitivity testing of trypanosome detection by PCR from whole blood samples using manual and automated DNA extraction methods.

    PubMed

    Dunlop, J; Thompson, C K; Godfrey, S S; Thompson, R C A

    2014-11-01

    Automated extraction of DNA for testing of laboratory samples is an attractive alternative to labour-intensive manual methods when higher throughput is required. However, it is important to maintain the maximum detection sensitivity possible to reduce the occurrence of type II errors (false negatives; failure to detect the target when it is present), especially in the biomedical field, where PCR is used for diagnosis. We used blood infected with known concentrations of Trypanosoma copemani to test the impact of analysis techniques on trypanosome detection sensitivity by PCR. We compared combinations of a manual and an automated DNA extraction method and two different PCR primer sets to investigate the impact of each on detection levels. Both extraction techniques and specificity of primer sets had a significant impact on detection sensitivity. Samples extracted using the same DNA extraction technique performed substantially differently for each of the separate primer sets. Type I errors (false positives; detection of the target when it is not present), produced by contaminants, were avoided with both extraction methods. This study highlights the importance of testing laboratory techniques with known samples to optimise accuracy of test results.

  15. The BUME method: a novel automated chloroform-free 96-well total lipid extraction method for blood plasma

    PubMed Central

    Löfgren, Lars; Ståhlman, Marcus; Forsberg, Gun-Britt; Saarinen, Sinikka; Nilsson, Ralf; Hansson, Göran I.

    2012-01-01

    Lipid extraction from biological samples is a critical and often tedious preanalytical step in lipid research. Primarily on the basis of automation criteria, we have developed the BUME method, a novel chloroform-free total lipid extraction method for blood plasma compatible with standard 96-well robots. In only 60 min, 96 samples can be automatically extracted, with lipid profiles of commonly analyzed lipid classes almost identical to, and absolute recoveries similar to or better than, those obtained using the chloroform-based reference method. Lipid recoveries were linear from 10–100 µl plasma for all investigated lipids using the developed extraction protocol. The BUME protocol includes an initial one-phase extraction of plasma into 300 µl butanol:methanol (BUME) mixture (3:1) followed by two-phase extraction into 300 µl heptane:ethyl acetate (3:1) using 300 µl 1% acetic acid as buffer. The lipids investigated included the most abundant plasma lipid classes (e.g., cholesterol ester, free cholesterol, triacylglycerol, phosphatidylcholine, and sphingomyelin) as well as less abundant but biologically important lipid classes, including ceramide, diacylglycerol, and lyso-phospholipids. This novel method has been successfully implemented in our laboratory and is now used daily. We conclude that the fully automated, high-throughput BUME method can replace chloroform-based methods, saving both human and environmental resources. PMID:22645248

  16. Support Vector Machine with Ensemble Tree Kernel for Relation Extraction

    PubMed Central

    Fu, Hui; Du, Zhiguo

    2016-01-01

    Relation extraction is one of the important research topics in the field of information extraction. To address the problem of semantic variation in traditional semisupervised relation extraction algorithms, this paper proposes a novel semisupervised relation extraction algorithm based on ensemble learning (LXRE). The new algorithm integrates two kinds of support vector machine classifiers based on tree kernels and adopts a strategy of constrained extension of the seed set. The new algorithm can weaken the inaccuracy of relation extraction caused by the phenomenon of semantic variation. A numerical experimental study based on two benchmark data sets (PropBank and AIMed) shows that the LXRE algorithm proposed in the paper is superior to two other common relation extraction methods in four evaluation indexes (Precision, Recall, F-measure, and Accuracy). This indicates that the new algorithm has good relation extraction ability compared with others. PMID:27118966
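
    As a loose illustration of the ensemble idea (not the paper's tree kernels, which operate on parse trees), the Python sketch below trains two SVM views of candidate relation instances and only adds an unlabeled instance to the seed set when both classifiers agree. The toy sentences and the two bag-of-words views are assumptions.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import SVC

        labeled = ["PROT1 binds PROT2", "PROT1 activates PROT2",
                   "PROT1 and PROT2 were studied", "PROT1 near PROT2 gene"]
        y = [1, 1, 0, 0]                     # 1 = interaction relation
        unlabeled = ["PROT1 strongly binds PROT2", "PROT1 PROT2 cohort"]

        # Two "views": word unigrams vs. character n-grams (stand-ins for two kernels).
        views = [CountVectorizer(),
                 CountVectorizer(analyzer="char", ngram_range=(2, 4))]
        clfs = [SVC(kernel="linear").fit(v.fit_transform(labeled), y) for v in views]

        for sent in unlabeled:
            votes = [int(c.predict(v.transform([sent]))[0]) for c, v in zip(clfs, views)]
            if len(set(votes)) == 1:         # both classifiers agree -> extend seed set
                print(f"add to seed set: {sent!r} as label {votes[0]}")
            else:
                print(f"leave unlabeled: {sent!r}")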

  17. Simultaneous analysis of cortisol and cortisone in saliva using XLC-MS/MS for fully automated online solid phase extraction.

    PubMed

    Jones, Rachel L; Owen, Laura J; Adaway, Joanne E; Keevil, Brian G

    2012-01-15

    Salivary cortisol measurements are increasingly being used in the investigation of disorders of the hypothalamic-pituitary-adrenal axis. In the salivary gland, cortisol is metabolised to cortisone by the action of 11β-hydroxysteroid dehydrogenase type 2, and cortisone is partly responsible for the variable interference observed in current salivary cortisol immunoassays. The aim of this study was to validate an assay for the simultaneous analysis of salivary cortisol and cortisone using the Spark Holland Symbiosis™ in eXtraction liquid chromatography-tandem mass spectrometry (XLC-MS/MS) mode for fully automated online solid phase extraction (SPE). Saliva samples were diluted in water with the addition of internal standard (d4-cortisol and d7-cortisone). Online SPE was performed using the Spark Holland Symbiosis™ with HySphere™ C18 SPE cartridges and compounds were eluted onto a Phenomenex® C18 guard column attached to a Phenomenex® Onyx monolithic C18 column for chromatography. Mass spectrometry used the Waters® Xevo™ TQ MS in electrospray positive mode. Cortisol and cortisone eluted with their internal standards at 1.95 and 2.17 min, respectively, with a total run time of four minutes. No evidence of ion-suppression was observed. The assay was linear up to 3393 nmol/L for cortisol and 3676 nmol/L for cortisone, with lower limits of quantitation of 0.75 nmol/L and 0.50 nmol/L, respectively. Intra- and inter-assay imprecision was <8.9% for cortisol and <6.5% for cortisone across three levels of internal quality control, with accuracy and recovery within accepted limits. High specificity was demonstrated following interference studies which assessed 29 structurally-related steroids at supra-physiological concentrations. We have successfully validated an assay for the simultaneous analysis of salivary cortisol and cortisone using XLC-MS/MS and fully automated online SPE. The assay benefits from increased specificity compared to immunoassay and minimal …

  18. Automated Analysis of Clozapine and Norclozapine in Human Plasma Using Novel Extraction Plate Technology and Flow-Injection Tandem Mass Spectrometry.

    PubMed

    Couchman, Lewis; Subramaniam, Krithika; Fisher, Danielle S; Belsey, Sarah L; Handley, Simon A; Flanagan, Robert J

    2016-02-01

    Analysis of plasma clozapine and N-desmethylclozapine (norclozapine) for therapeutic drug monitoring purposes is well established. To minimize analysis times and facilitate rapid reporting of results, we have fully automated sample preparation using novel AC Extraction Plates and a Tecan Freedom EVO 100 liquid handling platform, and minimized extract analysis times using flow-injection tandem mass spectrometry (FIA-MS/MS). Analytes and deuterium-labeled internal standards were extracted from plasma (100 μL) at pH 10.6 and extracts analyzed directly using tandem mass spectrometry [20 μL injection, 0.7 mL/min methanol carrier flow, analysis time (injection-to-injection) approximately 60 seconds]. Validation data showed excellent intraplate and interplate accuracy (95%-104% nominal concentrations). Interbatch precision (% RSD) at the limit of quantitation (0.01 mg/L) was 3.5% and 5.5% for clozapine and norclozapine, respectively. Matrix effects were observed for both clozapine and norclozapine, but were compensated for by the internal standards. Overall process efficiency was 56%-70% and 66%-77% for clozapine and norclozapine, respectively. Mean relative process efficiency was 98% and 99% for clozapine and norclozapine, respectively. Comparison of results from patient samples (n = 81) analyzed using (1) manual liquid-liquid extraction with liquid chromatography-tandem mass spectrometry (LC-MS/MS) and (2) automated extraction with FIA-MS/MS gave y = 1.01x - 0.002, R² = 0.9943 and y = 1.01x + 0.009, R² = 0.9957 for clozapine and norclozapine, respectively. Bland-Altman plots revealed a mean bias (95% limits of agreement) of 0.0074 (-0.04 to 0.06) mg/L and 0.015 (-0.02 to 0.05) mg/L for clozapine and norclozapine, respectively. FIA-MS/MS used with automated extraction offers a rapid, simple, cost-effective alternative to manual liquid-liquid extraction and conventional LC analysis for clozapine therapeutic drug monitoring.
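
    The method-comparison statistics quoted above (regression line, R², Bland-Altman bias with 95% limits of agreement) can be reproduced with a few lines of Python; the paired concentrations below are fabricated placeholders, not the study's data.

        import numpy as np

        lcms = np.array([0.12, 0.25, 0.40, 0.55, 0.71, 0.90, 1.10])   # mg/L, method 1
        fia = np.array([0.13, 0.24, 0.42, 0.56, 0.70, 0.93, 1.12])    # mg/L, method 2

        slope, intercept = np.polyfit(lcms, fia, 1)
        r2 = np.corrcoef(lcms, fia)[0, 1] ** 2

        diff = fia - lcms
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)        # half-width of 95% limits of agreement

        print(f"y = {slope:.3f}x + {intercept:.4f}, R^2 = {r2:.4f}")
        print(f"bias = {bias:.4f} mg/L ({bias - loa:.3f} to {bias + loa:.3f})")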

  19. Rapid and Semi-Automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks

    PubMed Central

    Holcomb, Paul S.; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics—the study of how neurons wire together in the brain—is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes. PMID:27259933

  20. Rapid and Semi-automated Extraction of Neuronal Cell Bodies and Nuclei from Electron Microscopy Image Stacks.

    PubMed

    Holcomb, Paul S; Morehead, Michael; Doretto, Gianfranco; Chen, Peter; Berg, Stuart; Plaza, Stephen; Spirou, George

    2016-01-01

    Connectomics-the study of how neurons wire together in the brain-is at the forefront of modern neuroscience research. However, many connectomics studies are limited by the time and precision needed to correctly segment large volumes of electron microscopy (EM) image data. We present here a semi-automated segmentation pipeline using freely available software that can significantly decrease segmentation time for extracting both nuclei and cell bodies from EM image volumes.

  1. Evaluation of automated and manual commercial DNA extraction methods for recovery of Brucella DNA from suspensions and spiked swabs.

    PubMed

    Dauphin, Leslie A; Hutchins, Rebecca J; Bost, Liberty A; Bowen, Michael D

    2009-12-01

    This study evaluated automated and manual commercial DNA extraction methods for their ability to recover DNA from Brucella species in phosphate-buffered saline (PBS) suspension and from spiked swab specimens. Six extraction methods, representing several of the methodologies which are commercially available for DNA extraction, as well as representing various throughput capacities, were evaluated: the MagNA Pure Compact and the MagNA Pure LC instruments, the IT 1-2-3 DNA sample purification kit, the MasterPure Complete DNA and RNA purification kit, the QIAamp DNA blood mini kit, and the UltraClean microbial DNA isolation kit. These six extraction methods were performed upon three pathogenic Brucella species: B. abortus, B. melitensis, and B. suis. Viability testing of the DNA extracts indicated that all six extraction methods were efficient at inactivating virulent Brucella spp. Real-time PCR analysis using Brucella genus- and species-specific TaqMan assays revealed that use of the MasterPure kit resulted in superior levels of detection from bacterial suspensions, while the MasterPure kit and MagNA Pure Compact performed equally well for extraction of spiked swab samples. This study demonstrated that DNA extraction methodologies differ in their ability to recover Brucella DNA from PBS bacterial suspensions and from swab specimens and, thus, that the extraction method used for a given type of sample matrix can influence the sensitivity of real-time PCR assays for Brucella.

  2. PKDE4J: Entity and relation extraction for public knowledge discovery.

    PubMed

    Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young

    2015-10-01

    Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on multiple corpora, finding that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction.
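
    A minimal Python sketch of the two stages PKDE4J integrates, dictionary-based entity extraction followed by rule-based relation extraction, is shown below. The two-entry dictionary, the verb list, and the single entity-verb-entity rule are toy stand-ins for the system's configurable components.

        import re

        entity_dict = {"BRCA1": "Gene", "breast cancer": "Disease"}
        relation_verbs = {"causes", "increases", "suppresses"}

        def extract(sentence):
            # Dictionary pass: tag every dictionary entry found in the sentence.
            entities = []
            for name, etype in entity_dict.items():
                for m in re.finditer(re.escape(name), sentence, re.IGNORECASE):
                    entities.append((m.start(), m.end(), name, etype))
            entities.sort()
            # Rule pass: ENTITY ... relation-verb ... ENTITY within one sentence.
            relations = []
            for i, (s1, e1, n1, t1) in enumerate(entities):
                for s2, e2, n2, t2 in entities[i + 1:]:
                    between = sentence[e1:s2].lower().split()
                    verb = next((w for w in between if w in relation_verbs), None)
                    if verb:
                        relations.append((n1, verb, n2))
            return entities, relations

        ents, rels = extract("Mutated BRCA1 increases breast cancer risk.")
        print(ents)   # tagged entities with offsets and types
        print(rels)   # [('BRCA1', 'increases', 'breast cancer')]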

  3. An automated method of on-line extraction coupled with flow injection and capillary electrophoresis for phytochemical analysis.

    PubMed

    Chen, Hongli; Ding, Xiuping; Wang, Min; Chen, Xingguo

    2010-11-01

    In this study, an automated system for phytochemical analysis was successfully fabricated for the first time in our laboratory. The system included on-line decocting, filtering, cooling, sample introduction, separation, and detection, which greatly simplified the sample preparation and shortened the analysis time. Samples from the decoction extract were drawn every 5 min through an on-line filter and a condenser pipe to the sample loop, from which 20-μL samples were injected into the running buffer and transported into a split-flow interface coupling the flow injection and capillary electrophoresis systems. The separation of glycyrrhetinic acid (GTA) and glycyrrhizic acid (GA) took less than 5 min using a 10 mM borate buffer (adjusted to pH 8.8) and a +10 kV voltage. Calibration curves showed good linearity with correlation coefficients (R) greater than 0.9991. The intra-day repeatabilities (n = 5, expressed as relative standard deviation) of the proposed system, obtained using GTA and GA standards, were 1.1% and 0.8% for migration time and 0.7% and 0.9% for peak area, respectively. The mean recoveries of GTA and GA in the off-line extract of Glycyrrhiza uralensis Fisch root were better than 99.0%. The limits of detection (signal-to-noise ratio = 3) of the proposed method were 6.2 μg/mL and 6.9 μg/mL for GTA and GA, respectively. The dynamic changes of GTA and GA with decoction time were obtained during the on-line decoction process of Glycyrrhiza uralensis Fisch root.

  4. Automated extraction of information from the literature on chemical-CYP3A4 interactions.

    PubMed

    Feng, Chunlai; Yamashita, Fumiyoshi; Hashida, Mitsuru

    2007-01-01

    A text mining system is presented for automatically extracting information from the literature on chemical-CYP3A4 interactions (i.e., substrate, induction, inhibition). The system identifies chemicals and CYP3A4 forms according to a combination of name dictionaries and context features. In addition, it transforms sentences into multiple simple clauses, each containing a single event, and extracts information on chemical-CYP3A4 interactions using a simple but effective pattern matching method based on the order of three keywords (chemicals, CYP3A4, key verbs). Using this system, 2990 relations, including 2700 identified interactions with CYP3A4 for 600 chemicals, were extracted from a corpus of 2900 PubMed abstracts. In an evaluation test using 100 randomly selected abstracts, it achieved 87.4% recall and 92.3% precision for identification of chemical names, and 85.2% recall and 92.0% precision for the extraction of chemical-CYP3A4 interactions. This system will be applicable to interactions of chemicals with any functional proteins, such as enzymes and transporters, simply by changing the list of key verbs.
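
    The keyword-order idea can be sketched in a few lines of Python: once a sentence has been reduced to a simple clause, the interaction is read off from the relative order of the three keyword types (chemical, CYP3A4, key verb). The chemical list, verb lexicon, and ordering rule below are toy assumptions, not the paper's dictionaries.

        CHEMICALS = {"ketoconazole", "rifampin", "midazolam"}
        VERBS = {"inhibited": "inhibition", "induced": "induction",
                 "metabolized": "substrate"}

        def classify(clause):
            tokens = clause.lower().rstrip(".").split()
            chem = next((t for t in tokens if t in CHEMICALS), None)
            verb = next((t for t in tokens if t in VERBS), None)
            if not (chem and verb and "cyp3a4" in tokens):
                return None
            # Toy order rule: if the keywords appear as chemical-verb-CYP3A4,
            # name the chemical first in the triple; otherwise name CYP3A4 first.
            if tokens.index(chem) < tokens.index(verb) < tokens.index("cyp3a4"):
                return (chem, VERBS[verb], "CYP3A4")
            return ("CYP3A4", VERBS[verb], chem)

        print(classify("Ketoconazole inhibited CYP3A4 activity"))
        print(classify("Midazolam is metabolized by CYP3A4"))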

  5. A simple rapid process for semi-automated brain extraction from magnetic resonance images of the whole mouse head.

    PubMed

    Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L

    2016-01-15

    Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jacquard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation of brain volumes are preserved. A downloadable software package not otherwise available for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them useable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Automated Agricultural Field Extraction from Multi-temporal Web Enabled Landsat Data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2012-12-01

    Agriculture has caused significant anthropogenic surface change. In many regions agricultural field sizes may be increasing to maximize yields and reduce costs, resulting in decreased landscape spatial complexity and increased homogenization of land uses, with potential for significant biogeochemical and ecological effects. To date, studies of the incidence, drivers, and impacts of changing field sizes have not been undertaken over large areas because of computational constraints and because consistently processed appropriate resolution data have not been available or affordable. The Landsat series of satellites provides near-global coverage, long-term, and appropriate spatial resolution (30 m) satellite data to document changing field sizes. The recent free availability of all the Landsat data in the U.S. Landsat archive now provides the opportunity to study field size changes in a global and consistent way. Commercial software can be used to extract fields from Landsat data but is inappropriate for large area application because it requires considerable human interaction. This paper presents research to develop and validate an automated computational Geographic Object Based Image Analysis methodology to extract agricultural fields and derive field sizes from Web Enabled Landsat Data (WELD) (http://weld.cr.usgs.gov/). WELD weekly products (30 m reflectance and brightness temperature) are classified into Satellite Image Automatic Mapper™ (SIAM™) spectral categories, and an edge intensity map and a map of the probability of each pixel being agricultural are derived from five years of 52 weeks of WELD and corresponding SIAM™ data. These data are fused to derive candidate agriculture field segments using a variational region-based geometric active contour model. Geometry-based algorithms are used to decompose connected segments belonging to multiple fields into coherent isolated field objects with a divide and conquer strategy to detect and merge partial circle …

  7. Arsenic fractionation in agricultural soil using an automated three-step sequential extraction method coupled to hydride generation-atomic fluorescence spectrometry.

    PubMed

    Rosas-Castor, J M; Portugal, L; Ferrer, L; Guzmán-Mar, J L; Hernández-Ramírez, A; Cerdà, V; Hinojosa-Reyes, L

    2015-05-18

    A fully automated modified three-step BCR flow-through sequential extraction method was developed for the fractionation of the arsenic (As) content of agricultural soil, based on a multi-syringe flow injection analysis (MSFIA) system coupled to hydride generation-atomic fluorescence spectrometry (HG-AFS). Critical parameters that affect the performance of the automated system were optimized by exploiting a multivariate approach using a Doehlert design. The validation of the flow-based modified-BCR method was carried out by comparison with the conventional BCR method. Thus, the total As content was determined in the following three fractions: fraction 1 (F1), the acid-soluble or interchangeable fraction; fraction 2 (F2), the reducible fraction; and fraction 3 (F3), the oxidizable fraction. The limits of detection (LOD) were 4.0, 3.4, and 23.6 μg L⁻¹ for F1, F2, and F3, respectively. A wide working concentration range was obtained for the analysis of each fraction, i.e., 0.013-0.800, 0.011-0.900, and 0.079-1.400 mg L⁻¹ for F1, F2, and F3, respectively. The precision of the automated MSFIA-HG-AFS system, expressed as the relative standard deviation (RSD), was evaluated for a 200 μg L⁻¹ As standard solution, and RSD values between 5 and 8% were achieved for the three BCR fractions. The new modified three-step BCR flow-based sequential extraction method was satisfactorily applied for arsenic fractionation in real agricultural soil samples from an arsenic-contaminated mining zone to evaluate its extractability. The frequency of analysis of the proposed method was eight times higher than that of the conventional BCR method (6 vs 48 h), and the kinetics of lixiviation were established for each fraction.

  8. Wnt pathway curation using automated natural language processing: combining statistical methods with partial and full parse for knowledge extraction.

    PubMed

    Santos, Carlos; Eggle, Daniela; States, David J

    2005-04-15

    Wnt signaling is a very active area of research with highly relevant publications appearing at a rate of more than one per day. Building and maintaining databases describing signal transduction networks is a time-consuming and demanding task that requires careful literature analysis and extensive domain-specific knowledge. For instance, more than 50 factors involved in Wnt signal transduction had been identified as of late 2003. In this work we describe a natural language processing (NLP) system that is able to identify references to biological interaction networks in free text and automatically assemble a protein association and interaction map. A 'gold standard' set of names and assertions was derived by manual scanning of the Wnt genes website (http://www.stanford.edu/~rnusse/wntwindow.html), including 53 interactions involved in Wnt signaling. This system was used to analyze a corpus of peer-reviewed articles related to Wnt signaling, including 3369 PubMed abstracts and 1230 full-text papers. Names for key Wnt-pathway associated proteins and biological entities are identified using a chi-squared analysis of noun phrases over-represented in the Wnt literature as compared to the general signal transduction literature. Interestingly, we identified several instances where generic terms were used on the website when more specific terms occur in the literature, and one typographic error on the Wnt canonical pathway. Using the named entity list and performing an exhaustive assertion extraction of the corpus, 34 of the 53 interactions in the 'gold standard' Wnt signaling set were successfully identified (64% recall). In addition, the automated extraction found several interactions involving key Wnt-related molecules which were missing or different from those in the canonical diagram, and these were confirmed by manual review of the text. These results suggest that a combination of NLP techniques for information extraction can form a useful first-pass tool for assisting human …
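
    The term-selection step can be illustrated with a minimal Python sketch: a chi-squared test on a 2 x 2 term-by-corpus contingency table scores how over-represented a noun phrase is in the Wnt corpus relative to a general signal-transduction background. All counts in the example are hypothetical.

        from scipy.stats import chi2_contingency

        def overrepresentation(term_wnt, docs_wnt, term_bg, docs_bg):
            """Chi-squared statistic for a 2x2 term-by-corpus contingency table."""
            table = [[term_wnt, docs_wnt - term_wnt],
                     [term_bg, docs_bg - term_bg]]
            chi2, p, _, _ = chi2_contingency(table)
            return chi2, p

        # Hypothetical counts: the term appears in 400 of 3369 Wnt papers but
        # only 120 of 20000 background papers.
        chi2, p = overrepresentation(400, 3369, 120, 20000)
        print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # large chi2 -> Wnt-specific term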

  9. Using mobile laser scanning data for automated extraction of road markings

    NASA Astrophysics Data System (ADS)

    Guan, Haiyan; Li, Jonathan; Yu, Yongtao; Wang, Cheng; Chapman, Michael; Yang, Bisheng

    2014-01-01

    A mobile laser scanning (MLS) system allows direct collection of accurate 3D point information in unprecedented detail at highway speeds and at less than traditional survey costs, which serves the fast growing demands of transportation-related road surveying including road surface geometry and road environment. As one type of road feature in traffic management systems, road markings on paved roadways have important functions in providing guidance and information to drivers and pedestrians. This paper presents a stepwise procedure to recognize road markings from MLS point clouds. To improve computational efficiency, we first propose a curb-based method for road surface extraction. This method first partitions the raw MLS data into a set of profiles according to vehicle trajectory data, and then extracts small height jumps caused by curbs in the profiles via slope and elevation-difference thresholds. Next, points belonging to the extracted road surface are interpolated into a geo-referenced intensity image using an extended inverse-distance-weighted (IDW) approach. Finally, we dynamically segment the geo-referenced intensity image into road-marking candidates with multiple thresholds that correspond to different ranges determined by point-density appropriate normality. A morphological closing operation with a linear structuring element is finally used to refine the road-marking candidates by removing noise and improving completeness. This road-marking extraction algorithm is comprehensively discussed in the analysis of parameter sensitivity and overall performance. An experimental study performed on a set of road markings with ground-truth shows that the proposed algorithm provides a promising solution to the road-marking extraction from MLS data.
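
    A minimal Python sketch of the curb-detection idea in the road-surface step: within one cross-road profile, candidate curbs are samples whose elevation jump and slope both fall inside threshold ranges. The synthetic profile and the thresholds are assumptions; the real method partitions the point cloud into profiles along the vehicle trajectory.

        import numpy as np

        # One cross-road profile: lateral position (m) and elevation (m).
        lateral = np.linspace(-8, 8, 161)
        elev = np.zeros_like(lateral)
        elev[lateral < -6] = 0.15            # left curb and sidewalk
        elev[lateral > 6.2] = 0.15           # right curb and sidewalk
        elev += np.random.default_rng(3).normal(0, 0.003, elev.size)

        dz = np.diff(elev)
        slope = dz / np.diff(lateral)
        # Curb candidates: jump height in a plausible range and a steep slope.
        curbs = np.where((np.abs(dz) > 0.05) & (np.abs(dz) < 0.3)
                         & (np.abs(slope) > 0.5))[0]
        print("curb positions (m):", np.round(lateral[curbs], 2))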

  10. Detecting and extracting clusters in atom probe data: a simple, automated method using Voronoi cells.

    PubMed

    Felfer, P; Ceguerra, A V; Ringer, S P; Cairney, J M

    2015-03-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method in which we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation, to test for spatial/chemical randomness of the solid solution as well as to extract the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration-based methods such as iso-surfaces.
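
    A minimal sketch of the Voronoi-volume idea, with toy 3D coordinates standing in for an atom probe dataset (the paper's automatic parameter determination is not reproduced; a fixed percentile threshold stands in for it):

      # Flag candidate cluster atoms by small Voronoi cell volume, a proxy
      # for high local solute concentration.
      import numpy as np
      from scipy.spatial import Voronoi, ConvexHull

      def voronoi_volumes(points):
          vor = Voronoi(points)
          vols = np.full(len(points), np.inf)
          for i, region_idx in enumerate(vor.point_region):
              region = vor.regions[region_idx]
              if -1 in region or len(region) == 0:
                  continue                      # unbounded boundary cell: skip
              vols[i] = ConvexHull(vor.vertices[region]).volume
          return vols

      rng = np.random.default_rng(1)
      matrix = rng.uniform(0, 10, size=(400, 3))           # dilute solid solution
      clump = 5.0 + 0.3 * rng.standard_normal((40, 3))     # dense cluster
      vols = voronoi_volumes(np.vstack([matrix, clump]))
      threshold = np.percentile(vols[np.isfinite(vols)], 20)  # stand-in parameter
      print("candidate cluster atoms:", int((vols < threshold).sum()))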

  11. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads.

    PubMed

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-02-04

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection into the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and vibrated through the washing reagents in the capillary tube under the control of the mechanical control system, and thus the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted within 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction depends on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and those of the conventional manual procedure. We have already developed a droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available for point-of-care testing in clinics as well as general hospitals.

  13. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models

    PubMed Central

    Misra, Dharitri; Chen, Siyuan; Thoma, George R.

    2010-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U. S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system. PMID:21179386
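
    The string-pattern stage of such a system can be illustrated with a small regular-expression search model; the field names and patterns below are invented for illustration and are not the rules used in the NLM system:

      # Toy rule-based metadata search over OCR text of a recognized layout.
      import re

      PATTERNS = {
          "report_no": re.compile(r"\bReport\s+No\.?\s*([A-Z0-9-]+)"),
          "date":      re.compile(r"\b(?:19|20)\d{2}-\d{2}-\d{2}\b"),
          "title":     re.compile(r"^TITLE:\s*(.+)$", re.MULTILINE),
      }

      def extract_metadata(page_text):
          """Return {field: first match} for each rule that fires."""
          found = {}
          for field, pat in PATTERNS.items():
              m = pat.search(page_text)
              if m:
                  # Use the capture group when the rule defines one.
                  found[field] = m.group(m.lastindex or 0)
          return found

      sample = "TITLE: Notice of Hearing\nReport No. FDA-72-104\nIssued 1972-03-15"
      print(extract_metadata(sample))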

  14. Automated Purification and Suspension Array Detection of 16S rRNA from Soil and Sediment Extracts Using Tunable Surface Microparticles

    SciTech Connect

    Chandler, Darrell P.; Jarrell, Ann E.

    2004-05-01

    Autonomous, field-deployable molecular detection systems require seamless integration of complex biochemical solutions and physical or mechanical processing steps. In an attempt to simplify the fluidic requirements for integrated biodetection systems, we used tunable surface microparticles both as an rRNA affinity purification resin in a renewable microcolumn sample preparation system and as the sensor surface in a flow cytometer detector. The tunable surface detection limits in both low- and high-salt buffers were 1 ng of total RNA (~10(4) cell equivalents) in 15-min test tube hybridizations and 10 ng of total RNA (~10(5) cell equivalents) in hybridizations with the automated system (30-s contact time). RNA fragmentation was essential for achieving tunable surface suspension array specificity. Chaperone probes reduced but did not completely eliminate cross-hybridization, even with probes sharing <50% identity to target sequences. Nonpurified environmental extracts did not irreparably affect our ability to classify color-coded microparticles, but residual environmental constituents significantly quenched the Alexa-532 reporter fluor. Modulating surface charge did not influence the interaction of soluble environmental contaminants with conjugated beads. The automated system greatly reduced the effects of fluorescence quenching, especially in the soil background. The automated system was as efficacious as manual methods for simultaneous sample purification, hybridization, and washing prior to flow cytometry detection. The implications of unexpected target cross-hybridization and fluorescence quenching are discussed relative to the design and implementation of an integrated microbial monitoring system.

  15. Comparative evaluation of commercially available manual and automated nucleic acid extraction methods for rotavirus RNA detection in stools.

    PubMed

    Esona, Mathew D; McDonald, Sharla; Kamili, Shifaq; Kerin, Tara; Gautam, Rashi; Bowen, Michael D

    2013-12-01

    Rotaviruses are a major cause of viral gastroenteritis in children. For accurate and sensitive detection of rotavirus RNA from stool samples by reverse transcription-polymerase chain reaction (RT-PCR), the extraction process must be robust. However, some extraction methods may not remove the strong RT-PCR inhibitors known to be present in stool samples. The objective of this study was to evaluate and compare the performance of six extraction methods commonly used for extraction of rotavirus RNA from stool, which had never been formally evaluated: the MagNA Pure Compact, KingFisher Flex and NucliSENS easyMAG instruments, the NucliSENS miniMAG semi-automated system, and two manual purification kits, the QIAamp Viral RNA kit and a modified RNaid kit. Using each method, total nucleic acid or RNA was extracted from eight rotavirus-positive stool samples with enzyme immunoassay optical density (EIA OD) values ranging from 0.176 to 3.098. Extracts prepared using the MagNA Pure Compact instrument yielded the most consistent results by qRT-PCR and conventional RT-PCR. When a dilution series was extracted by the six methods and tested, rotavirus RNA was detected in all samples by qRT-PCR, but by conventional RT-PCR testing only the MagNA Pure Compact and KingFisher Flex extracts were positive in all cases. RT-PCR inhibitors were detected in extracts produced with the QIAamp Viral RNA Mini kit. The findings of this study should prove useful for the selection of extraction methods to be incorporated into future rotavirus detection and genotyping protocols.

  16. Automated Semantic Indices Related to Cognitive Function and Rate of Cognitive Decline

    ERIC Educational Resources Information Center

    Pakhomov, Serguei V. S.; Hemmy, Laura S.; Lim, Kelvin O.

    2012-01-01

    The objective of our study is to introduce a fully automated, computational linguistic technique to quantify semantic relations between words generated on a standard semantic verbal fluency test and to determine its cognitive and clinical correlates. Cognitive differences between patients with Alzheimer's disease and mild cognitive impairment are…
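
    A semantic index of this kind can be approximated as the mean pairwise similarity between the words a subject produces. The sketch below uses toy word vectors; the study's corpus-derived relatedness measure is not reproduced:

      # Mean pairwise cosine similarity over a verbal-fluency word list.
      import numpy as np
      from itertools import combinations

      vectors = {                       # hypothetical word embeddings
          "dog":  np.array([0.9, 0.1, 0.0, 0.2]),
          "cat":  np.array([0.8, 0.2, 0.1, 0.1]),
          "lion": np.array([0.7, 0.1, 0.3, 0.0]),
          "car":  np.array([0.0, 0.9, 0.1, 0.6]),
      }

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      def mean_pairwise_similarity(words):
          sims = [cosine(vectors[a], vectors[b]) for a, b in combinations(words, 2)]
          return sum(sims) / len(sims)

      # Lists dominated by one semantic cluster score higher than mixed lists.
      print(round(mean_pairwise_similarity(["dog", "cat", "lion"]), 3))
      print(round(mean_pairwise_similarity(["dog", "cat", "car"]), 3))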

  18. Mixed-mode isolation of triazine metabolites from soil and aquifer sediments using automated solid-phase extraction

    USGS Publications Warehouse

    Mills, M.S.; Thurman, E.M.

    1992-01-01

    Reversed-phase isolation and ion-exchange purification were combined in the automated solid-phase extraction of two polar s-triazine metabolites, 2-amino-4-chloro-6-(isopropylamino)-s-triazine (deethylatrazine) and 2-amino-4-chloro-6-(ethylamino)-s-triazine (deisopropylatrazine), from clay-loam and silt-loam soils and sandy aquifer sediments. First, methanol/water (4/1, v/v) soil extracts were transferred to an automated workstation following evaporation of the methanol phase for the rapid reversed-phase isolation of the metabolites on an octadecyl resin (C18). The retention of the triazine metabolites on C18 decreased substantially when trace methanol concentrations (1%) remained. Furthermore, the retention on C18 increased with decreasing aqueous solubility and increasing alkyl-chain length of the metabolites and parent herbicides, indicating a reversed-phase interaction. The analytes were eluted with ethyl acetate, which left much of the soil organic-matter impurities on the resin. Second, the small-volume organic eluate was purified on an anion-exchange resin (0.5 mL/min) to extract the remaining soil pigments that could foul the ion source of the GC/MS system. Recoveries of the analytes were 75%, using deuterated atrazine as a surrogate, and were comparable to recoveries by Soxhlet extraction. The detection limit was 0.1 μg/kg with a coefficient of variation of 15%. The ease and efficiency of this automated method make it a viable, practical technique for studying triazine metabolites in the environment.

  19. Automated extraction of oscillation parameters for Kepler observations of solar-type stars

    NASA Astrophysics Data System (ADS)

    Huber, D.; Stello, D.; Bedding, T. R.; Chaplin, W. J.; Arentoft, T.; Quirion, P.-O.; Kjeldsen, H.

    2009-10-01

    The recent launch of the Kepler space telescope brings the opportunity to study oscillations systematically in large numbers of solar-like stars. In the framework of the asteroFLAG project, we have developed an automated pipeline to estimate global oscillation parameters, such as the frequency of maximum power (νmax) and the large frequency spacing (Δν), for a large number of time series. We present an effective method based on the autocorrelation function to find excess power and use a scaling relation to estimate granulation timescales as initial conditions for background modelling. We derive reliable uncertainties for νmax and Δν through extensive simulations. We have tested the pipeline on about 2000 simulated Kepler stars with magnitudes of V ~ 7-12 and were able to correctly determine νmax and Δν for about half of the sample. For about 20%, the returned large frequency spacing is accurate enough to determine stellar radii to 1% precision. We conclude that the methods presented here are a promising approach to process the large amount of data expected from Kepler.
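
    The autocorrelation idea can be demonstrated on a synthetic power spectrum: a comb of modes with a known spacing is recovered from the first strong off-zero autocorrelation peak. All numbers below are invented for the demonstration and do not come from the pipeline:

      # Estimate the large frequency spacing from the autocorrelation of a
      # power spectrum containing a synthetic comb of modes (10 muHz apart).
      import numpy as np

      freq = np.linspace(0, 300, 6000)                 # frequency grid, muHz
      power = np.random.default_rng(2).random(6000) * 0.1
      for nu in np.arange(100, 200, 10.0):             # true spacing: 10 muHz
          power += np.exp(-0.5 * ((freq - nu) / 0.3) ** 2)

      centered = power - power.mean()
      ac = np.correlate(centered, centered, mode="full")[centered.size - 1:]
      lag = np.arange(ac.size) * (freq[1] - freq[0])   # lags in muHz

      search = (lag > 5) & (lag < 50)                  # skip the zero-lag peak
      print("estimated spacing (muHz):", lag[search][np.argmax(ac[search])])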

  20. Automated solid phase extraction, on-support derivatization and isotope dilution-GC/MS method for the detection of urinary dialkyl phosphates in humans.

    PubMed

    De Alwis, G K Hemakanthi; Needham, Larry L; Barr, Dana B

    2009-01-15

    We developed an analytical method based on solid phase extraction, on-support derivatization and isotope dilution-GC/MS for the detection of the dialkyl phosphate (DAP) metabolites dimethyl thiophosphate, diethyl thiophosphate, dimethyl dithiophosphate, and diethyl dithiophosphate in human urine. The sample preparation procedure is simple and fully automated. In this method, the analytes were extracted from the urinary matrix onto a styrene-divinylbenzene polymer-based solid phase extraction cartridge and derivatized on-column with pentafluorobenzyl bromide. The ester-conjugated analytes are eluted from the column with acetonitrile, concentrated and analyzed. Compared to methods that derivatize after extraction, this on-support derivatization is fast, efficient, and less labor-intensive. Furthermore, it has fewer sample preparation steps, uses less solvent and produces less interference. The method is highly sensitive, with limits of detection for the analytes ranging from 0.1 to 0.3 ng/mL. The recoveries were high and comparable with those of our previous method. The relative standard deviation, indicative of the repeatability and precision of the method, was 1-17% for the metabolites.

  1. Automated Extraction of Buildings and Roads in a Graph Partitioning Framework

    NASA Astrophysics Data System (ADS)

    Ok, A. O.

    2013-10-01

    This paper presents an original unsupervised framework to identify regions belonging to buildings and roads in monocular very high resolution (VHR) satellite images. The proposed framework consists of three main stages. In the first stage, we extract information related only to building regions using shadow evidence and probabilistic fuzzy landscapes. Firstly, the shadow areas cast by building objects are detected and the directional spatial relationship between buildings and their shadows is modelled with knowledge of the illumination direction. Thereafter, each shadow region is handled separately and initial building regions are identified by iterative graph-cuts designed as a two-label partitioning. The second stage of the framework automatically classifies the image into four classes: building, shadow, vegetation, and others. In this step, the previously labelled building regions as well as the shadow and vegetation areas are involved in a four-label graph optimization performed over the entire image domain to achieve the unsupervised classification result. The final stage aims to extend this classification to five classes, adding the class road. For that purpose, we extract the regions that might belong to road segments and utilize that information in a final graph optimization. This final stage eventually characterizes the regions belonging to buildings and roads. Experiments performed on seven test images selected from GeoEye-1 VHR datasets show that the presented approach is able to extract the regions belonging to buildings and roads in a single graph-theory framework.
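
    The two-label step can be illustrated with a toy s-t minimum cut in which unary capacities stand in for shadow and fuzzy-landscape evidence (a four-pixel chain instead of an image, and a single min-cut instead of the paper's iterative graph-cuts):

      # Two-label building/background partitioning via s-t min-cut.
      import networkx as nx

      G = nx.DiGraph()
      unary = {"p0": 0.9, "p1": 0.8, "p2": 0.2, "p3": 0.1}   # P(building)
      for p, prob in unary.items():
          G.add_edge("src", p, capacity=prob)          # cost if labelled background
          G.add_edge(p, "sink", capacity=1.0 - prob)   # cost if labelled building
      for a, b in [("p0", "p1"), ("p1", "p2"), ("p2", "p3")]:
          G.add_edge(a, b, capacity=0.3)               # smoothness between
          G.add_edge(b, a, capacity=0.3)               # neighbouring pixels

      cut_value, (building, background) = nx.minimum_cut(G, "src", "sink")
      print("building pixels:", sorted(building - {"src"}))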

  2. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    PubMed

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.

  3. Highly integrated flow assembly for automated dynamic extraction and determination of readily bioaccessible chromium(VI) in soils exploiting carbon nanoparticle-based solid-phase extraction.

    PubMed

    Rosende, María; Miró, Manuel; Segundo, Marcela A; Lima, José L F C; Cerdà, Víctor

    2011-06-01

    An automated dynamic leaching test integrated in a portable flow-based setup is herein proposed for reliable determination of readily bioaccessible Cr(VI) under worst-case scenarios in soils containing varying levels of contamination. The manifold is devised to accommodate bi-directional flow extraction followed by processing of extracts via either in-line clean-up/preconcentration using multi-walled carbon nanotubes or automatic dilution at will, along with Cr(VI) derivatization and flow-through spectrophotometric detection. The magnitude of readily mobilizable Cr(VI) pools was ascertained by resorting to water extraction as promulgated by current standard leaching tests. The role of carbon nanomaterials for the uptake of Cr(VI) in soil leachates and the configuration of the packed column integrated in the flow manifold were investigated in detail. The analytical performance of the proposed system for in vitro bioaccessibility tests was evaluated in chromium-enriched soils at environmentally relevant levels and in a standard reference soil material (SRM 2701) with a certified value of total hexavalent chromium. The automated method was proven to afford unbiased assessment of water-soluble Cr(VI) in soils as a result of minimizing chromium species transformation. By combining the kinetic leaching profile with a first-order leaching model, the water-soluble Cr(VI) fraction in soils was determined in merely 6 h, compared with the >24 h taken by batchwise steady-state standard methods.

  4. Automated Solid Phase Extraction (SPE) LC/NMR Applied to the Structural Analysis of Extractable Compounds from a Pharmaceutical Packaging Material of Construction.

    PubMed

    Norwood, Daniel L; Mullis, James O; Davis, Mark; Pennino, Scott; Egert, Thomas; Gonnella, Nina C

    2013-01-01

    The structural analysis (i.e., identification) of organic chemical entities leached into drug product formulations has traditionally been accomplished with techniques involving the combination of chromatography with mass spectrometry. These include gas chromatography/mass spectrometry (GC/MS) for volatile and semi-volatile compounds, and various forms of liquid chromatography/mass spectrometry (LC/MS or HPLC/MS) for semi-volatile and relatively non-volatile compounds. GC/MS and LC/MS techniques are complementary for structural analysis of leachables and potentially leachable organic compounds produced via laboratory extraction of pharmaceutical container closure/delivery system components and corresponding materials of construction. Both hyphenated analytical techniques possess the separating capability, compound specific detection attributes, and sensitivity required to effectively analyze complex mixtures of trace level organic compounds. However, hyphenated techniques based on mass spectrometry are limited by the inability to determine complete bond connectivity, the inability to distinguish between many types of structural isomers, and the inability to unambiguously determine aromatic substitution patterns. Nuclear magnetic resonance spectroscopy (NMR) does not have these limitations; hence it can serve as a complement to mass spectrometry. However, NMR technology is inherently insensitive and its ability to interface with chromatography has been historically challenging. This article describes the application of NMR coupled with liquid chromatography and automated solid phase extraction (SPE-LC/NMR) to the structural analysis of extractable organic compounds from a pharmaceutical packaging material of construction. The SPE-LC/NMR technology combined with micro-cryoprobe technology afforded the sensitivity and sample mass required for full structure elucidation. Optimization of the SPE-LC/NMR analytical method was achieved using a series of model compounds

  5. Single-trial event-related potential extraction through one-unit ICA-with-reference

    NASA Astrophysics Data System (ADS)

    Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potentials (ERPs) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used to guide the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, compared to traditional ICA, ICA-R is a more effective method for analysing ERPs because it avoids manual source selection and requires less computation, resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risk of subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.
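
    The reference-guided idea can be approximated, though not reproduced exactly, by running ordinary FastICA and then selecting the component most correlated with a reference built from the a priori time-region; one-unit ICA-R extracts that source directly instead of computing all components first. A synthetic sketch:

      # Approximate ICA-with-Reference: FastICA + reference-based selection.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 500)
      erp = np.exp(-0.5 * ((t - 0.3) / 0.03) ** 2)     # ERP-like bump at 300 ms
      sources_true = np.vstack([erp, rng.standard_normal((3, t.size))])
      X = rng.standard_normal((8, 4)) @ sources_true   # 8 mixed "channels"

      ica = FastICA(n_components=4, random_state=0)
      comps = ica.fit_transform(X.T).T                 # (components, time)

      reference = ((t > 0.25) & (t < 0.35)).astype(float)  # known time-region
      corr = [abs(np.corrcoef(c, reference)[0, 1]) for c in comps]
      print("component matching the reference:", int(np.argmax(corr)))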

  6. A Model-Based Analysis of Semi-Automated Data Discovery and Entry Using Automated Content Extraction

    DTIC Science & Technology

    2011-02-01

    [Report documentation page fields only; the abstract did not survive extraction. The recoverable fragments define notation used in the report: S = number of sentences across all documents; WSa = words per sentence containing a relation; WPa = words per paragraph.]

  7. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for the urban environment, providing large amounts of high-resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data consisted of high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from streets. The robust segmentation method comprises the following steps: first, the ground points are determined. Second, cylinders are fitted in a vertical slice 1-1.5 m above ground, which is used to determine the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying trees into single species. MLS data used in this project had been measured in the framework of
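
    The trunk-locating step can be illustrated by a least-squares (Kasa) circle fit to points from a horizontal slice; the noisy synthetic ring below stands in for a real MLS slice, and the full cylinder fit with residual analysis is not reproduced:

      # Algebraic (Kasa) circle fit: x^2 + y^2 = 2ax + 2by + c.
      import numpy as np

      def fit_circle(xy):
          x, y = xy[:, 0], xy[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
          (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
          return (a, b), np.sqrt(c + a**2 + b**2)      # center, radius

      rng = np.random.default_rng(4)
      theta = rng.uniform(0, 2 * np.pi, 200)           # slice of a 25 cm trunk
      pts = np.column_stack([3.0 + 0.25 * np.cos(theta),
                             7.0 + 0.25 * np.sin(theta)])
      pts += 0.01 * rng.standard_normal(pts.shape)     # scanner noise

      center, radius = fit_circle(pts)
      print("trunk center ~", np.round(center, 2), "radius ~", round(radius, 3))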

  8. Automated solid-phase extraction coupled online with HPLC-FLD for the quantification of zearalenone in edible oil.

    PubMed

    Drzymala, Sarah S; Weiz, Stefan; Heinze, Julia; Marten, Silvia; Prinz, Carsten; Zimathies, Annett; Garbe, Leif-Alexander; Koch, Matthias

    2015-05-01

    Established maximum levels for the mycotoxin zearalenone (ZEN) in edible oil require monitoring by reliable analytical methods. Therefore, an automated SPE-HPLC online system based on dynamic covalent hydrazine chemistry has been developed. The SPE step comprises a reversible hydrazone formation by ZEN and a hydrazine moiety covalently attached to a solid phase. Seven hydrazine materials with different properties regarding the resin backbone, pore size, particle size, specific surface area, and loading have been evaluated. As a result, a hydrazine-functionalized silica gel was chosen. The final automated online method was validated and applied to the analysis of three maize germ oil samples including a provisionally certified reference material. Important performance criteria for the recovery (70-120 %) and precision (RSDr <25 %) as set by the Commission Regulation EC 401/2006 were fulfilled: The mean recovery was 78 % and RSDr did not exceed 8 %. The results of the SPE-HPLC online method were further compared to results obtained by liquid-liquid extraction with stable isotope dilution analysis LC-MS/MS and found to be in good agreement. The developed SPE-HPLC online system with fluorescence detection allows a reliable, accurate, and sensitive quantification (limit of quantification, 30 μg/kg) of ZEN in edible oils while significantly reducing the workload. To our knowledge, this is the first report on an automated SPE-HPLC method based on a covalent SPE approach.

  9. Automated Feature Extraction by Combining Polarimetric SAR and Object-Based Image Analysis for Monitoring of Natural Resource Exploitation

    NASA Astrophysics Data System (ADS)

    Plank, Simon; Mager, Alexander; Schoepfer, Elizabeth

    2015-04-01

    An automated feature extraction procedure based on the combination of a pixel-based unsupervised classification of polarimetric synthetic aperture radar data (co-co dual-polarimetric TerraSAR-X) and an object-based post-classification is presented. The former is based on the entropy/alpha decomposition and the Wishart classification built on it, while the latter additionally considers feature properties such as shape and area. The feature extraction procedure is developed for monitoring oil field infrastructure. For developing countries, several studies have reported a high correlation between dependence on oil exports and violent conflict. Consequently, to support problem solving, an independent monitoring of the oil field infrastructure by Earth observation is proposed.
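
    The quantities that seed the unsupervised Wishart classification can be computed from the eigendecomposition of each pixel's 3x3 coherency matrix. A sketch for a single, randomly generated matrix (real data would supply T per pixel):

      # Entropy/alpha point from one polarimetric coherency matrix T.
      import numpy as np

      def entropy_alpha(T):
          lam, vec = np.linalg.eigh(T)                 # ascending eigenvalues
          lam = np.clip(lam[::-1], 1e-12, None)        # descending, positive
          vec = vec[:, ::-1]
          p = lam / lam.sum()                          # pseudo-probabilities
          H = float(-(p * np.log(p) / np.log(3)).sum())  # entropy, base 3
          alphas = np.arccos(np.clip(np.abs(vec[0, :]), 0.0, 1.0))
          return H, float(np.degrees((p * alphas).sum()))  # H, mean alpha

      rng = np.random.default_rng(5)
      A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
      T = A @ A.conj().T                               # Hermitian, PSD stand-in
      H, alpha = entropy_alpha(T)
      print(f"H = {H:.3f}, mean alpha = {alpha:.1f} deg")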

  10. Novel automated extraction method for quantitative analysis of urinary 11-nor-delta(9)-tetrahydrocannabinol-9-carboxylic acid (THC-COOH).

    PubMed

    Fu, Shanlin; Lewis, John

    2008-05-01

    An automated extraction method for the major urinary metabolite of cannabis, 11-nor-Delta(9)-tetrahydrocannabinol-9-carboxylic acid (THC-COOH), was developed on the four-probe Gilson ASPEC XL4™ solid-phase extraction (SPE) system. The method works on liquid-liquid extraction principles but does not require the use of SPE cartridges. The limits of detection and quantitation and the upper limit of linearity (ULOL) of the developed method were found to be 1, 2, and 1,500 ng/mL, respectively. There was no detectable carryover after 10,000 ng/mL analyte. For a batch of 76 samples, the process uses less than 100 mL methanol, 450 mL extracting solvent hexane/ethyl acetate (5:1, v/v), and 1 L rinsing solvent, 30% methanol in water. The automated extraction process takes 5 h to complete. Precision and accuracy of the method are comparable to both manual liquid-liquid extraction and automated SPE methods. The method has proven to be a simple, speedy, and economical alternative to the currently popular automated SPE method for the quantitative analysis of urinary THC-COOH.

  11. Screening for anabolic steroids in urine of forensic cases using fully automated solid phase extraction and LC-MS-MS.

    PubMed

    Andersen, David W; Linnet, Kristian

    2014-01-01

    A screening method for 18 frequently measured exogenous anabolic steroids and the testosterone/epitestosterone (T/E) ratio in forensic cases has been developed and validated. The method involves a fully automated sample preparation including enzyme treatment, addition of internal standards and solid phase extraction followed by analysis by liquid chromatography-tandem mass spectrometry (LC-MS-MS) using electrospray ionization with adduct formation for two compounds. Urine samples from 580 forensic cases were analyzed to determine the T/E ratio and occurrence of exogenous anabolic steroids. Extraction recoveries ranged from 77 to 95%, matrix effects from 48 to 78%, overall process efficiencies from 40 to 54% and the lower limit of identification ranged from 2 to 40 ng/mL. In the 580 urine samples analyzed from routine forensic cases, 17 (2.9%) were found positive for one or more anabolic steroids. Only seven different steroids including testosterone were found in the material, suggesting that only a small number of common steroids are likely to occur in a forensic context. The steroids were often in high concentrations (>100 ng/mL), and a combination of steroids and/or other drugs of abuse were seen in the majority of cases. The method presented serves as a fast and automated screening procedure, proving the suitability of LC-MS-MS for analyzing anabolic steroids.

  12. A Multi-view Approach for Relation Extraction

    NASA Astrophysics Data System (ADS)

    Zhou, Junsheng; Xu, Qian; Chen, Jiajun; Qu, Weiguang

    Relation extraction is an important problem in information extraction. In this paper, we explore a multi-view strategy for the relation extraction task. Motivated by the observation, as in the work of Jiang and Zhai [1], that combining different feature subspaces into a single view does not generate much improvement, we propose a two-stage multi-view learning approach. First, we learn two different classifiers from two different views of relation instances: a sequence representation and a syntactic parse tree representation. Then, a meta-learner is trained on the meta data constructed from their outputs, along with other contextual information, to achieve strong predictive performance as the final classification model. Experimental results on the ACE 2005 corpus show that the multi-view approach outperforms each single-view one for the relation extraction task.
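
    The two-stage scheme is essentially stacked generalization over two views. The sketch below trains one classifier per (randomly generated, stand-in) feature view and a meta-learner on their probability outputs; a faithful version would use out-of-fold meta-features and the actual sequence and parse-tree representations:

      # Two-view stacking: per-view classifiers + a meta-learner.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)
      n = 600
      y = rng.integers(0, 2, n)
      view_seq = rng.standard_normal((n, 20)) + 0.8 * y[:, None]   # "sequence"
      view_tree = rng.standard_normal((n, 12)) + 0.5 * y[:, None]  # "parse tree"

      Xs_tr, Xs_te, Xt_tr, Xt_te, y_tr, y_te = train_test_split(
          view_seq, view_tree, y, test_size=0.3, random_state=0)

      clf_seq = LogisticRegression(max_iter=1000).fit(Xs_tr, y_tr)
      clf_tree = LogisticRegression(max_iter=1000).fit(Xt_tr, y_tr)

      def meta_features(Xs, Xt):
          return np.column_stack([clf_seq.predict_proba(Xs)[:, 1],
                                  clf_tree.predict_proba(Xt)[:, 1]])

      meta = LogisticRegression().fit(meta_features(Xs_tr, Xt_tr), y_tr)
      print("multi-view accuracy:",
            round(meta.score(meta_features(Xs_te, Xt_te), y_te), 3))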

  13. INVESTIGATION OF ARSENIC SPECIATION ON DRINKING WATER TREATMENT MEDIA UTILIZING AUTOMATED SEQUENTIAL CONTINUOUS FLOW EXTRACTION WITH IC-ICP-MS DETECTION

    EPA Science Inventory

    Three treatment media, used for the removal of arsenic from drinking water, were sequentially extracted using 10mM MgCl2 (pH 8), 10mM NaH2PO4 (pH 7) followed by 10mM (NH4)2C2O4 (pH 3). The media were extracted using an on-line automated continuous extraction system which allowed...

  15. Direct Sampling and Analysis from Solid Phase Extraction Cards using an Automated Liquid Extraction Surface Analysis Nanoelectrospray Mass Spectrometry System

    SciTech Connect

    Walworth, Matthew J; ElNaggar, Mariam S; Stankovich, Joseph J; Witkowski II, Charles E.; Norris, Jeremy L; Van Berkel, Gary J

    2011-01-01

    Direct liquid extraction based surface sampling, a technique previously demonstrated with continuous flow and autonomous pipette liquid microjunction surface sampling probes, has recently been implemented as the Liquid Extraction Surface Analysis (LESA) mode on the commercially available Advion NanoMate chip-based infusion nanoelectrospray ionization system. In the present paper, the LESA mode was applied to the analysis of 96-well format custom solid phase extraction (SPE) cards, with each well consisting of either a 1 or 2 mm diameter monolithic hydrophobic stationary phase. These substrate wells were conditioned, loaded with either single or multi-component aqueous mixtures, and read out using the LESA mode of a TriVersa NanoMate or a NanoMate 100 coupled to an ABI/Sciex 4000QTRAP™ hybrid triple quadrupole/linear ion trap mass spectrometer and a Thermo LTQ XL linear ion trap mass spectrometer. Extraction conditions, including extraction/nanoESI solvent composition, volume, and dwell times, were optimized for the analysis of targeted compounds. Limit of detection and quantitation as well as analysis reproducibility figures of merit were measured. Calibration data were obtained for propranolol using a deuterated internal standard, demonstrating linearity and reproducibility. A 10x increase in signal and cleanup of micromolar Angiotensin II from a concentrated salt solution were demonstrated. Additionally, a multicomponent herbicide mixture at ppb concentration levels was analyzed using MS3 spectra for compound identification in the presence of isobaric interferences.

  16. Extraction of hydroxyaromatic compounds in river water by liquid-liquid-liquid microextraction with automated movement of the acceptor and the donor phase.

    PubMed

    Melwanki, Mahaveer B; Huang, Shang-Da

    2006-08-01

    Liquid-liquid-liquid microextraction with automated movement of the acceptor and the donor phase is described for the extraction of six hydroxyaromatic compounds in river water using a disposable, ready-to-use hollow fiber. Separation and quantitative analyses were performed using LC with UV detection at 254 nm. Analytes were extracted from the acidified sample solution (donor phase) into the organic solvent impregnated in the pores of the hollow fiber and then back-extracted into the alkaline solution (acceptor phase) inside the lumen of the hollow fiber. The fiber was held by a conventional 10 microL LC syringe. The acceptor phase was sandwiched between the plunger and a small volume of the organic solvent (microcap). The acceptor solution was repeatedly moved in and out of the hollow fiber using a syringe pump. This movement brings fresh acceptor phase into contact with the organic phase, enhancing the extraction kinetics and thereby improving enrichment of the analytes. The microcap separates the acceptor phase and the donor phase, in addition to being partially responsible for mass transfer of the analytes from the donor solution to the acceptor solution. Under stirring, fresh donor phase also enters through the open end of the fiber, which further contributes to mass transfer. Various parameters affecting the extraction efficiency, viz. type of organic solvent, extraction time, stirring speed, effect of sodium chloride, and concentration of the donor and acceptor phases, were studied. RSD (3.9-5.6%), correlation coefficient (0.995-0.997), detection limit (2.0-51.2 ng/mL), enrichment factor (339-630), relative recovery (93.2-97.9%), and absolute recovery (33.9-63.0%) were also investigated. The developed method was applied to the analysis of river water.

  17. Automated wide-angle SAR stereo height extraction in rugged terrain using shift-scaling correlation.

    SciTech Connect

    Yocky, David Alan; Jakowatz, Charles V., Jr.

    2003-07-01

    Coherent stereo pairs from cross-track synthetic aperture radar (SAR) collects allow fully automated correlation matching using magnitude and phase data. Yet, automated feature matching (correspondence) becomes more difficult when imaging rugged terrain utilizing large stereo crossing angle geometries, because high-relief features can undergo significant spatial distortions. These distortions sometimes cause traditional, shift-only correlation matching to fail. This paper presents a possible solution addressing this difficulty. Changing the complex correlation maximization search from shift-only to shift-and-scaling using the downhill simplex method results in higher correlation. This is shown on eight coherent spotlight-mode cross-track stereo pairs with stereo crossing angles averaging 93.7° collected over terrain with slopes greater than 20°. The resulting digital elevation maps (DEMs) are compared to ground truth. Using the shift-scaling correlation approach to calculate disparity, height errors decrease and the number of reliable DEM posts increases.
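
    A one-dimensional analogue of the shift-scaling search, using the downhill simplex (Nelder-Mead) method to maximize normalized correlation over a shift and a scale (synthetic signals; the actual method operates on complex SAR image chips):

      # Recover shift and scale by maximizing normalized correlation.
      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(-5, 5, 400)
      reference = np.exp(-x**2)                        # reference chip
      target = np.exp(-((1.25 * (x - 0.7)) ** 2))      # shifted, rescaled

      def neg_correlation(params):
          shift, scale = params
          warped = np.interp(scale * (x - shift), x, reference,
                             left=0.0, right=0.0)
          a, b = warped - warped.mean(), target - target.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return -(a @ b) / denom if denom else 0.0

      res = minimize(neg_correlation, x0=[0.0, 1.0], method="Nelder-Mead")
      print("shift, scale:", np.round(res.x, 3), "corr:", round(-res.fun, 4))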

  18. Sequential Chromospheric Brightening: An Automated Approach to Extracting Physics from Ephemeral Brightening

    DTIC Science & Technology

    2012-10-17

    [Only fragments of this record survive extraction: author affiliations (New Mexico State University; Air Force Research Laboratory, Space Vehicles Directorate; National Solar Observatory, Sunspot, NM) and two abstract fragments: "... propose a connection of the small-scale features to solar flares. Our automated routine detects and distinguishes three separate types of brightening ..."]

  19. High performance liquid chromatography for quantification of gatifloxacin in rat plasma following automated on-line solid phase extraction.

    PubMed

    Tasso, Leandro; Dalla Costa, Teresa

    2007-05-09

    An automated system using on-line solid phase extraction and HPLC with fluorimetric detection was developed and validated for the quantification of gatifloxacin in rat plasma. The extraction was carried out using C(18) cartridges (BondElut), with a high extraction yield. After washing, gatifloxacin was eluted from the cartridge with mobile phase onto a C(18) HPLC column. The mobile phase consisted of a mixture of phosphoric acid (2.5 mM), methanol, acetonitrile and triethylamine (64.8:15:20:0.2, v/v/v/v, apparent pH 2.8). All samples and standard solutions were chromatographed at 28 degrees C. The method developed was selective and linear for drug concentrations ranging between 20 and 600 ng/ml. Gatifloxacin recovery ranged from 95.6 to 99.7%, and the limit of quantification was 20 ng/ml. The intra- and inter-assay accuracy was up to 94.3%. The precision (CV) did not exceed 5.8%. A high extraction yield of up to 95% was obtained. Drug stability in plasma was shown in the freezer at -20 degrees C for up to 1 month, after three freeze-thaw cycles, and for 24 h in the autosampler after processing. The assay has been successfully applied to measure gatifloxacin plasma concentrations in a pharmacokinetic study in rats.

  20. Performance verification of the Maxwell 16 Instrument and DNA IQ Reference Sample Kit for automated DNA extraction of known reference samples.

    PubMed

    Krnajski, Z; Geering, S; Steadman, S

    2007-12-01

    Advances in automation have been made for a number of processes conducted in the forensic DNA laboratory. However, because most robotic systems are designed for high-throughput laboratories batching large numbers of samples, smaller laboratories are left with a limited number of cost-effective options for employing automation. The Maxwell 16 Instrument and DNA IQ Reference Sample Kit marketed by Promega are designed for rapid, automated purification of DNA extracts from sample sets consisting of sixteen or fewer samples. Because the system is based on DNA capture by paramagnetic particles with maximum binding capacity, it is designed to generate extracts with yield consistency. The studies herein enabled evaluation of STR profile concordance, consistency of yield, and cross-contamination performance for the Maxwell 16 Instrument. Results indicate that the system performs suitably for streamlining the process of extracting known reference samples generally used for forensic DNA analysis and has many advantages in a small or moderate-sized laboratory environment.

  1. Automated on-line liquid-liquid extraction system for temporal mass spectrometric analysis of dynamic samples.

    PubMed

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L

    2015-09-24

    Most real samples cannot be infused directly into mass spectrometers because they could contaminate delicate parts of the ion source and ion guides, or cause ion suppression. Conventional sample preparation procedures limit the temporal resolution of analysis. We have developed an automated liquid-liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. The duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053-2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of a pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h(-1)). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet of Things (IoT) to the chemistry laboratory environment.
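
    The exponential fitting step can be sketched with a first-order release model; the form C(t) = C_inf(1 - exp(-kt)) is an assumption consistent with the reported rate constants, and the data below are synthetic:

      # Fit a first-order release model to a (synthetic) dissolution profile.
      import numpy as np
      from scipy.optimize import curve_fit

      def release(t, c_inf, k):
          return c_inf * (1.0 - np.exp(-k * t))

      t = np.linspace(0, 10, 35)                       # hours
      rng = np.random.default_rng(7)
      conc = release(t, 2.0, 0.43) + 0.03 * rng.standard_normal(t.size)

      (c_inf, k), _ = curve_fit(release, t, conc, p0=[1.0, 0.1])
      print(f"C_inf = {c_inf:.2f} mM, k = {k:.2f} 1/h")  # k should be ~0.43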

  2. A high yield DNA extraction method for medically important Candida species: A comparison of manual versus QIAcube-based automated system.

    PubMed

    Das, P; Pandey, P; Harishankar, A; Chandy, M; Bhattacharya, S

    2016-01-01

    The prognosis of individuals with candidemia depends on rapid and precise diagnosis, which enables optimised treatment. Three fungal DNA extraction protocols were compared in this study for medically important Candida species. The quality and quantity of the DNA extracted by the physical, chemical, and automated protocols were compared using a NanoDrop ND-2000 spectrophotometer. It was found that the yield and purity (260/230) ratio of extracted DNA were significantly higher in the physical treatment-based protocol than in the chemical-based or automated protocols. Real-time polymerase chain reaction on the extracted DNA showed an analytical sensitivity of 10(3) CFU/mL. The results of this study suggest that physical treatment is the most successful extraction technique of the three protocols.

  3. Cardiovascular Risk in Hypertension in Relation to Achieved Blood Pressure Using Automated Office Blood Pressure Measurement.

    PubMed

    Myers, Martin G; Kaczorowski, Janusz; Dolovich, Lisa; Tu, Karen; Paterson, J Michael

    2016-10-01

    The SPRINT (Systolic Blood Pressure Intervention Trial) reported that some older, higher risk patients might benefit from a target systolic blood pressure (BP) of <120 versus <140 mm Hg. However, it is not yet known how the BP target and measurement methods used in SPRINT relate to cardiovascular outcomes in real-world practice. SPRINT used the automated office BP technique, which requires the patient to be resting quietly and alone, with multiple readings being recorded automatically using an electronic oscillometric sphygmomanometer. We studied the relationship between achieved automated office BP at baseline and cardiovascular events in 6183 community-dwelling residents of Ontario aged ≥66 years who were receiving antihypertensive therapy and followed for a mean of 4.6 years. Adjusted hazard ratios (95% confidence intervals) were computed for 10 mm Hg increments in achieved automated office BP at baseline using Cox proportional hazards regression and the BP category with the lowest event rate as the reference category. Based on 904 fatal and nonfatal cardiovascular events, the nadir of cardiovascular events was at the systolic pressure category of 110 to 119 mm Hg, which was lower than the next highest category of 120 to 129 mm Hg (hazard ratio 1.30 [1.01, 1.66]). The hazard ratio for diastolic pressure was relatively unchanged above 60 mm Hg. Pulse pressure exhibited an increase in hazard ratio (1.33 [1.02, 1.72]) at ≥80 mm Hg. These results using automated office BP measurement in a usual treatment setting extend the finding in SPRINT of an optimum target systolic BP of <120 mm Hg to routine clinical practice.

  4. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
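
    A maximum-entropy mention-pair classifier of the kind described can be sketched with logistic regression over a few simple linguistic features; the features and toy examples below are illustrative only and far simpler than the paper's feature set:

      # Toy maxent (logistic regression) classifier for CID mention pairs.
      from sklearn.feature_extraction import DictVectorizer
      from sklearn.linear_model import LogisticRegression

      def features(sentence, chem, dis):
          between = sentence.split(chem)[-1].split(dis)[0].split()
          return {"chem": chem.lower(), "dis": dis.lower(),
                  "n_between": len(between),
                  "induce_between": any("induc" in w for w in between)}

      train = [
          ("Cisplatin induced severe nephrotoxicity in rats",
           "Cisplatin", "nephrotoxicity", 1),
          ("Aspirin was given after the headache resolved",
           "Aspirin", "headache", 0),
          ("Haloperidol-induced dystonia was observed",
           "Haloperidol", "dystonia", 1),
          ("Metformin users reported no hepatitis",
           "Metformin", "hepatitis", 0),
      ]
      vec = DictVectorizer()
      X = vec.fit_transform(features(s, c, d) for s, c, d, _ in train)
      model = LogisticRegression().fit(X, [lab for *_, lab in train])

      test = features("Tacrine induced hepatotoxicity", "Tacrine", "hepatotoxicity")
      print(model.predict_proba(vec.transform([test]))[0, 1])   # P(CID relation)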

  6. Automated methods of tree boundary extraction and foliage transparency estimation from digital imagery

    Treesearch

    Sang-Mook Lee; Neil A. Clark; Philip A. Araman

    2003-01-01

    Foliage transparency in trees is an important indicator for forest health assessment. This paper helps advance transparency measurement research by presenting methods of automatic tree boundary extraction and foliage transparency estimation from digital images taken from the ground of open-grown trees. Extraction of proper boundaries of tree crowns is the...

  7. Online in situ analysis of selected semi-volatile organic compounds in water by automated microscale solid-phase extraction with large-volume injection/gas chromatography/mass spectrometry.

    PubMed

    Li, Yongtao; George, John E; McCarty, Christina L

    2007-12-28

    A fully automated analytical method was developed for the online in situ analysis of selected semi-volatile organic compounds in water. The method used large-volume injection/gas chromatography/mass spectrometry coupled with a fully automated microscale solid-phase extraction technique based on x-y-z robotic techniques. Water samples were extracted by using a 96-well solid-phase extraction plate. For most analytes included in this study, the obtained linear calibrations ranged from 0.05 to 5.0 microg/L with correlation coefficients of 0.996-1.000, the method detection limits were less than 0.1 microg/L, and the relative recoveries were in the range of 70-120% with a relative standard deviation of less than 15% for fortified reagent water samples. The applications to chlorinated tap water, well water, and river water have been validated. The obtained results were similar to those for fortified reagent water samples for all analytes except metribuzin, bromacil, aldrin, and methoxychlor. Matrix effects were observed for these analytes. In general, this fully automated analytical method was rugged, reliable, and easy to operate, and was capable of providing real-time data to water treatment and distribution systems as well as water reservation and protection systems. In addition, the method could reduce the analytical costs associated with sample collection, transportation, storage, and preparation.

  8. Liquid-liquid-liquid microextraction with automated movement of the acceptor and the donor phase for the extraction of phenoxyacetic acids prior to liquid chromatography detection.

    PubMed

    Chen, Chung-Chiang; Melwanki, Mahaveer B; Huang, Shang-Da

    2006-02-03

    A simple liquid-liquid-liquid microextraction with automated movement of the acceptor and the donor phase (LLLME/AMADP) technique is described for the quantitative determination of five phenoxyacetic acids in water using a disposable, ready-to-use hollow fiber. The target compounds were extracted from the acidified sample solution (donor phase) into the organic solvent residing in the pores of the hollow fiber and then back-extracted into the alkaline solution (acceptor phase) inside the lumen of the hollow fiber. The fiber was held by a conventional 10-microl syringe. The acceptor phase was sandwiched between the plunger and a small volume of the organic solvent (microcap). The acceptor solution was repeatedly moved in and out of the hollow fiber assisted by a programmable syringe pump. This repeated movement brings fresh acceptor phase into contact with the organic phase, enhancing the extraction kinetics and leading to high enrichment of the analytes. The microcap separates the aqueous acceptor phase from the donor phase, in addition to being partially responsible for mass transfer of the analytes from the donor solution (moving in and out of the hollow fiber from the open end of the fiber) to the acceptor solution. Separation and quantitative analyses were then performed using liquid chromatography (LC) with ultraviolet (UV) detection at 280 nm. Various parameters affecting the extraction efficiency, viz. the type of organic solvent used for immobilization in the pores of the hollow fiber, extraction time, stirring speed, effect of sodium chloride, and concentration of the donor and acceptor phases, were studied. Repeatability (RSD, 3.2-7.4%), correlation coefficient (0.996-0.999), detection limit (0.2-2.8 ng ml(-1)) and enrichment factors (129-240) were also investigated. Relative recovery (87-101%) and absolute recoveries (4.6-13%) were also calculated. The developed method was applied to the analysis of river water.

  9. Characterization and Application of Superlig 620 Solid Phase Extraction Resin for Automated Process Monitoring of 90Sr

    SciTech Connect

    Devol, Timothy A.; Clements, John P.; Farawila, Anne F.; O'Hara, Matthew J.; Egorov, Oleg; Grate, Jay W.

    2009-11-30

    Characterization of SuperLig® 620 solid phase extraction resin was performed in order to develop an automated on-line process monitor for 90Sr. The main focus was on strontium separation from barium, with the goal of developing an automated separation process for 90Sr in high-level wastes. High-level waste contains significant 137Cs activity, whose 137mBa daughter is of great concern as an interference to the quantification of strontium. In addition, barium, yttrium and plutonium were studied as potential interferences to strontium uptake and detection. A number of complexants were studied in a series of batch Kd experiments, as SuperLig® 620 was not previously known to elute strontium in typical mineral acids. The optimal separation was found using a 2M nitric acid load solution with a strontium elution step of ~0.49M ammonium citrate and a barium elution step of ~1.8M ammonium citrate. 90Sr quantification of Hanford high-level tank waste was performed on a sequential injection analysis microfluidics system coupled to a flow-cell detector. The results of the on-line procedure are compared to standard radiochemical techniques in this paper.
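
    The batch Kd experiments screen complexants via the usual distribution-coefficient formula; a minimal sketch with invented numbers:

      # Batch distribution coefficient Kd = ((C0 - Cf) / Cf) * (V / m), in mL/g.
      C0, Cf = 1000.0, 50.0   # initial / final solution activity (e.g., cpm/mL) -- made up
      V, m = 10.0, 0.1        # contact volume (mL) and resin mass (g) -- made up
      Kd = (C0 - Cf) / Cf * (V / m)
      print(f"Kd = {Kd:.0f} mL/g")   # 1900 mL/g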

  10. Automated Control of the Organic and Inorganic Composition of Aloe vera Extracts Using (1)H NMR Spectroscopy.

    PubMed

    Monakhova, Yulia B; Randel, Gabriele; Diehl, Bernd W K

    2016-09-01

    Recent classification of Aloe vera whole-leaf extract by the International Agency for Research on Cancer as a possible carcinogen to humans, as well as the continued adulteration of authentic A. vera material, have generated renewed interest in controlling A. vera. The existing NMR spectroscopic method for the analysis of A. vera, which is based on a routine developed at Spectral Service, was extended. Apart from aloverose, glucose, malic acid, lactic acid, citric acid, whole-leaf material (WLM), acetic acid, fumaric acid, sodium benzoate, and potassium sorbate, the quantification of Mg(2+), Ca(2+), and fructose is possible with the addition of a Cs-EDTA solution to the sample. The proposed methodology was automated, including phasing, baseline correction, deconvolution (based on the Lorentzian function), integration, quantification, and reporting. The NMR method was applied to 41 A. vera preparations in the form of liquid A. vera juice and solid A. vera powder. The advantages of the new NMR methodology over the previous method are discussed. Correlation between the new and standard NMR methodologies was significant for aloverose, glucose, malic acid, lactic acid, citric acid, and WLM (P < 0.0001, R(2) = 0.99). NMR was found to be suitable for the automated simultaneous quantitative determination of 13 parameters in A. vera.

  11. A Semi-automated Vector Migration Tool Based on Road Feature Extraction from High Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Haithcoat, T. L.; Song, W.

    2001-05-01

    A major stumbling block to the integration of remotely sensed data into existing GIS database structures is the positional accuracy of the existing line-work within the vector database. This inaccuracy manifests itself when the line-work is overlain on more positionally consistent imagery. In the case presented in this paper, the parcel map had a variable accuracy of up to plus or minus 40 ft once the various parcel map tiles were combined. This is the result of data having been built by hand historically and remaining un-edgematched between tiles within a mylar mapping system. The investment to convert this base map (the only one widely used) was made, and the sheets were scanned and vectorized by the private sector, which very accurately reproduced the inherent errors of this mapping approach. With the incorporation of GPS and the associated problems of edgematching the tiles into a seamless database, the local government consortium was stymied. This led to the development of an image-based reference for these data layers from the existing DOQQs (1995 vintage) and 1m Pan IKONOS imagery. A process was developed that uses road features extracted from these imagery sources, as well as road intersections derived from the parcel map layer, to create a continuum of linearized adjustments. The parcel linework is then decomposed into points and topological relationships, and the positional locations are altered based on the adjustment surface. Once adjusted, the linework is re-built and topology re-established on the adjusted layer. This tool can assist counties and cities in migrating their vector data to the image base while maintaining the integrity and relative positional accuracy of the vector data.

  12. Prospective evaluation of a new automated nucleic acid extraction system using routine clinical respiratory specimens.

    PubMed

    Mengelle, C; Mansuy, J-M; Sandres-Sauné, K; Barthe, C; Boineau, J; Izopet, J

    2012-06-01

    The aim of the study was to evaluate the MagNA Pure 96™ nucleic acid extraction system on clinical respiratory specimens for identifying viruses by qualitative real-time PCR assays. Three extraction methods, the MagNA Pure LC™, the COBAS Ampliprep™, and the MagNA Pure 96™, were tested with 10-fold dilutions of an influenza A(H1N1)pdm09 sample. Two hundred thirty-nine respiratory specimens (35 throat swabs, 164 nasopharyngeal specimens, and 40 broncho-alveolar fluids) were extracted with the MagNA Pure 96™ and the COBAS Ampliprep™ instruments. Forty COBAS Ampliprep™ positive samples were also tested. Real-time PCRs were used to identify influenza A and influenza A(H1N1)pdm09, rhinovirus, enterovirus, adenovirus, varicella zoster virus, cytomegalovirus, and herpes simplex virus. Similar results were obtained on RNA extracted from dilutions of influenza A(H1N1)pdm09 with the three systems. Data from clinical respiratory specimens extracted with the MagNA Pure 96™ and COBAS Ampliprep™ instruments were in 98.5% agreement (P < 0.0001) for influenza A and influenza A(H1N1)pdm09, in 97.3% agreement (P < 0.0001) for rhinovirus, in 96.8% agreement for enterovirus, and in 100% agreement for adenovirus. Data for cytomegalovirus and HSV1-2 were in 95.2% agreement (P < 0.0001). The MagNA Pure 96™ instrument is easy to use, reliable, and has a high throughput for extracting total nucleic acid from respiratory specimens. These extracts are suitable for molecular diagnosis with any type of real-time PCR assay.

  13. Enhancing biomedical text summarization using semantic relation extraction.

    PubMed

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to extract the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.

  14. Automated 96-well solid phase extraction and hydrophilic interaction liquid chromatography-tandem mass spectrometric method for the analysis of cetirizine (ZYRTEC) in human plasma--with emphasis on method ruggedness.

    PubMed

    Song, Qi; Junga, Heiko; Tang, Yong; Li, Austin C; Addison, Tom; McCort-Tipton, Melanie; Beato, Brian; Naidong, Weng

    2005-01-05

    A high-throughput bioanalytical method based on automated sample transfer, automated solid phase extraction, and hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC-MS/MS) analysis has been developed for the determination of cetirizine, a selective H(1)-receptor antagonist. Deuterated cetirizine (cetirizine-d(8)) was synthesized as described and used as the internal standard. Samples were transferred into 96-well plates using an automated sample handling system. Automated solid phase extraction was carried out using a 96-channel programmable liquid-handling workstation. A solid phase extraction 96-well plate on polymer sorbent (Strata X) was used to extract the analyte. The extracted samples were injected onto a Betasil silica column (50 x 3 mm, 5 microm) using a mobile phase of acetonitrile-water-acetic acid-trifluoroacetic acid (93:7:1:0.025, v/v/v/v) at a flow rate of 0.5 ml/min. The chromatographic run time was 2.0 min per injection, with retention times of cetirizine and cetirizine-d(8) both at 1.1 min. The system consisted of a Shimadzu HPLC system and a PE Sciex API 3000 or API 4000 tandem mass spectrometer with (+) ESI. The method was validated over the concentration range of 1.00-1000 ng/ml cetirizine in human plasma, based on a 0.10-ml sample size. The inter-day precision and accuracy of the quality control (QC) samples demonstrated <3.0% relative standard deviation (R.S.D.) and <6.0% relative error (RE). Stability of cetirizine in stock solution, in plasma, and in reconstitution solution was established. The absolute extraction recovery was 85.8%, 84.5%, and 88.0% at 3, 40, and 800 ng/ml, respectively; the recovery for the internal standard was 84.1%. No adverse matrix effects were noticed for this assay. The automation of the sample preparation steps not only increased the analysis throughput but also increased method ruggedness. The use of a stable isotope-labeled internal standard further improved the method ruggedness.

  15. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent-continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and difficult access to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters that can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
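
    The coplanar-surface step named above reduces, at its core, to estimating a local facet normal for each point from its k nearest neighbours. A minimal Python sketch of that core idea (not the authors' Matlab tool):

      import numpy as np
      from scipy.spatial import cKDTree

      points = np.random.rand(1000, 3)        # stand-in for a 3D outcrop point cloud
      tree = cKDTree(points)

      def local_normal(p, k=20):
          """Facet normal = direction of least variance among k neighbours (PCA)."""
          _, idx = tree.query(p, k=k)
          nbrs = points[idx] - points[idx].mean(axis=0)
          _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
          return vt[-1]                       # smallest principal component

      print(local_normal(points[0]))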

  16. Technical note: Comparative analyses of the quality and yield of genomic DNA from invasive and noninvasive, automated and manual extraction methods.

    PubMed

    Foley, C; O'Farrelly, C; Meade, K G

    2011-06-01

    Several new automated methods have recently become available for high-throughput DNA extraction, including the Maxwell 16 System (Promega UK, Southampton, UK). The purpose of this report is to compare automated with manual DNA extraction methods, and invasive with noninvasive sample collection methods, in terms of DNA yield and quality. Milk, blood, and nasal swab samples were taken from 10 cows for DNA extraction. Nasal swabs were also taken from 10 calves and semen samples from 15 bulls for comparative purposes. The Performagene Livestock (DNA Genotek, Kanata, Ontario, Canada) method was compared with similar samples taken from the same animal using manual extraction methods. All samples were analyzed using both the Qubit Quantification Platform (Invitrogen Ltd., Paisley, UK) and NanoDrop spectrophotometer (NanoDrop Technologies, Inc., Wilmington, DE) to accurately assess DNA quality and quantity. In general, the automated Maxwell 16 System performed best, consistently yielding high quantity and quality DNA across the sample range tested. Average yields of 28.7, 10.3, and 19.2 μg of DNA were obtained from 450 μL of blood, 400 μL of milk, and a single straw of semen, respectively. The quality of DNA obtained from buffy coat and from semen was significantly higher with the automated method than with the manual methods (260/280 ratio of 1.9 and 1.8, respectively). Centrifugation of whole blood facilitated the concentration of leukocytes in the buffy coat, which significantly increased DNA yield after manual extraction. The Performagene method also yielded 18.4 and 49.8 μg of high quality (260/280 ratio of 1.8) DNA from the cow and calf nasal samples, respectively. These results show the advantages of noninvasive sample collection and automated methods for high-throughput extraction and biobanking of high quality DNA. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. Automated extraction of information on chemical-P-glycoprotein interactions from the literature.

    PubMed

    Yoshida, Shuya; Yamashita, Fumiyoshi; Ose, Atsushi; Maeda, Kazuya; Sugiyama, Yuichi; Hashida, Mitsuru

    2013-10-28

    Knowledge of the interactions between drugs and transporters is important for drug discovery and development as well as for the evaluation of their clinical safety. We recently developed a text-mining system for the automatic extraction of information on chemical-CYP3A4 interactions from the literature. This system is based on natural language processing and can extract chemical names and their interaction patterns according to sentence context. The present study aimed to extend this system to the extraction of information regarding chemical-transporter interactions. For this purpose, the key verb list designed for cytochrome P450 enzymes was replaced with that for known drug transporters. The performance of the system was then tested by examining the accuracy of information on chemical-P-glycoprotein (P-gp) interactions extracted from randomly selected PubMed abstracts. The system achieved 89.8% recall and 84.2% precision for the identification of chemical names and 71.7% recall and 78.6% precision for the extraction of chemical-P-gp interactions.

  18. Solid-Phase Extraction of Polar Compounds from Water [Corrected Title]

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Rutz, Jeffrey; Schultz, John

    2005-01-01

    A solid-phase extraction (SPE) process has been developed for removing alcohols, carboxylic acids, aldehydes, ketones, amines, and other polar organic compounds from water. This process can be either a subprocess of a water-reclamation process or a means of extracting organic compounds from water samples for gas-chromatographic analysis. The SPE process is an attractive alternative to an Environmental Protection Agency liquid-liquid extraction process that generates some pollution and does not work in a microgravitational environment. In this SPE process, one forces a water sample through a resin bed by use of positive pressure on the upstream side and/or suction on the downstream side, thereby causing organic compounds from the water to be adsorbed onto the resin. If gas-chromatographic analysis is to be done, the resin is dried by use of a suitable gas, then the adsorbed compounds are extracted from the resin by use of a solvent. Unlike the liquid-liquid process, the SPE process works in both microgravity and Earth gravity. In comparison with the liquid-liquid process, the SPE process is more efficient, extracts a wider range of organic compounds, generates less pollution, and costs less.

  19. Comparative Evaluation of a Commercially Available Automated System for Extraction of Viral DNA from Whole Blood: Application to Monitoring of Epstein-Barr Virus and Cytomegalovirus Load

    PubMed Central

    Pillet, Sylvie; Bourlet, Thomas; Pozzetto, Bruno

    2009-01-01

    The NucliSENS easyMAG automated system was compared to the column-based Qiagen method for Epstein-Barr virus (EBV) or cytomegalovirus (CMV) DNA extraction from whole blood before viral load determination using the corresponding R-gene amplification kits. Both extraction techniques exhibited a total agreement of 81.3% for EBV and 87.2% for CMV. PMID:19710270

  20. Automation of static and dynamic non-dispersive liquid phase microextraction. Part 1: Approaches based on extractant drop-, plug-, film- and microflow-formation.

    PubMed

    Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján

    2016-02-04

    Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for state-of-the-art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of the solvent and sample required by classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This two-part review comprehensively surveys developments in the automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies and techniques, their operational advantages, and their potential are further described and discussed. In this first part, an introduction to LPME, its static and dynamic operation modes, and its automation methodologies is given. The LPME techniques are classified according to the different approaches to protecting the extraction solvent, using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent, such as membranes and porous media, are overviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given.

  1. ISRU: Automated Water Extraction From Mars Surface Soils for Sample Return Missions

    NASA Astrophysics Data System (ADS)

    Willson, D.

    2012-06-01

    An ISRU option for Mars sample return vehicles is to employ a Sojourner/MER sized bucket excavation rover that mines and extracts water from the top 5 cm of surface soils and delivers it to an ISRU on the lander. The option is mass competitive.

  2. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Kumar, Pankaj; McElhinney, Conor P.; Lewis, Paul; McCarthy, Timothy

    2013-11-01

    Terrestrial mobile laser scanning systems provide rapid and cost-effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. Knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road-section environments validates our algorithm. These findings and knowledge provide valuable insights, as well as a prototype road edge extraction tool-set, for both national road authorities and survey companies.
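
    As a rough illustration of the snake-model machinery the algorithm builds on (scikit-image's generic implementation on a toy image, not the authors' modified formulation or their LiDAR rasters):

      import numpy as np
      from skimage import data, filters
      from skimage.segmentation import active_contour

      img = filters.gaussian(data.camera(), sigma=3)   # stand-in for a rasterized attribute image

      # Initialise the open snake near the expected edge; the paper derives this
      # initialisation from the mapping vehicle's navigation track.
      r = np.linspace(100, 400, 200)
      init = np.column_stack([r, np.full_like(r, 250)])

      snake = active_contour(img, init, alpha=0.01, beta=0.1, w_edge=1.0,
                             boundary_condition="fixed")
      print(snake.shape)                               # refined edge polyline, (200, 2)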

  3. Table Extraction from Web Pages Using Conditional Random Fields to Extract Toponym Related Data

    NASA Astrophysics Data System (ADS)

    Luthfi Hanifah, Hayyu’; Akbar, Saiful

    2017-01-01

    Tables are one of the ways to visualize information on web pages. The abundant number of web pages that compose the World Wide Web has been the motivation of information extraction and information retrieval research, including research on table extraction. Besides, there is a need for a system designed specifically to handle location-related information. Based on this background, this research is conducted to provide a way to extract location-related data from web tables so that they can be used in the development of a Geographic Information Retrieval (GIR) system. The location-related data are identified by the toponym (location name). In this research, a rule-based approach with a gazetteer is used to recognize toponyms in web tables. Meanwhile, to extract data from a table, a combination of a rule-based approach and a statistical approach is used. In the statistical approach, a Conditional Random Fields (CRF) model is used to understand the schema of the table. The result of table extraction is presented in JSON format. If a web table contains a toponym, a field is added to the JSON document to store the toponym values. This field can be used to index the table data according to the toponym, which can then be used in the development of the GIR system.
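
    For orientation, a linear-chain CRF over table cells can be sketched with sklearn-crfsuite; the features and labels below are invented stand-ins, not the paper's actual schema or feature set:

      import sklearn_crfsuite

      # One toy "sequence" = the cells of one table, read in order.
      X_train = [[{"text": "City", "header_row": True},
                  {"text": "Population", "header_row": True},
                  {"text": "Bandung", "header_row": False},
                  {"text": "2500000", "header_row": False}]]
      y_train = [["HEADER", "HEADER", "TOPONYM", "DATA"]]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
      crf.fit(X_train, y_train)
      print(crf.predict(X_train))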

  4. Automated Template-based Brain Localization and Extraction for Fetal Brain MRI Reconstruction.

    PubMed

    Tourbier, Sébastien; Velasco-Annis, Clemente; Taimouri, Vahid; Hagmann, Patric; Meuli, Reto; Warfield, Simon K; Cuadra, Meritxell Bach; Gholipour, Ali

    2017-04-10

    Most fetal brain MRI reconstruction algorithms rely only on brain tissue-relevant voxels of low-resolution (LR) images to enhance the quality of inter-slice motion correction and image reconstruction. Consequently, the fetal brain needs to be localized and extracted as a first step, which is usually a laborious and time-consuming manual or semi-automatic task. In this work we propose to use age-matched template images as prior knowledge to automate brain localization and extraction. This has been achieved through a novel automatic brain localization and extraction method based on robust template-to-slice block matching and deformable slice-to-template registration. Our template-based approach has also enabled the reconstruction of fetal brain images in standard radiological anatomical planes in a common coordinate space. We have integrated this approach into our new reconstruction pipeline that involves intensity normalization, inter-slice motion correction, and super-resolution (SR) reconstruction. To this end we have adopted a novel approach based on projection of every slice of the LR brain masks into the template space using a fusion strategy. This has enabled the refinement of brain masks in the LR images at each motion correction iteration. The overall brain localization and extraction algorithm has been shown to produce brain masks that are very close to manually drawn brain masks, with an average Dice overlap measure of 94.5%. We have also demonstrated that adopting a slice-to-template registration and propagation of the brain mask slice-by-slice leads to a significant improvement in brain extraction performance compared to global rigid brain extraction and, consequently, in the quality of the final reconstructed images. Ratings performed by two expert observers show that the proposed pipeline can achieve similar reconstruction quality to reference reconstruction based on manual slice-by-slice brain extraction. The proposed brain mask refinement and
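
    The 94.5% figure is a Dice overlap between automatic and manual masks, which is straightforward to compute:

      # Dice = 2|A intersect B| / (|A| + |B|); toy masks stand in for real ones.
      import numpy as np

      auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
      manual = np.zeros((64, 64), bool); manual[12:42, 12:40] = True

      dice = 2 * np.logical_and(auto, manual).sum() / (auto.sum() + manual.sum())
      print(f"Dice = {dice:.3f}")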

  5. Automatic extraction of relations between medical concepts in clinical texts

    PubMed Central

    Harabagiu, Sanda; Roberts, Kirk

    2011-01-01

    Objective A supervised machine learning approach to discover relations between medical problems, treatments, and tests mentioned in electronic medical records. Materials and methods A single support vector machine classifier was used to identify relations between concepts and to assign their semantic type. Several resources such as Wikipedia, WordNet, General Inquirer, and a relation similarity metric inform the classifier. Results The techniques reported in this paper were evaluated in the 2010 i2b2 Challenge and obtained the highest F1 score for the relation extraction task. When gold standard data for concepts and assertions were available, F1 was 73.7, precision was 72.0, and recall was 75.3. F1 is defined as 2*Precision*Recall/(Precision+Recall). Alternatively, when concepts and assertions were discovered automatically, F1 was 48.4, precision was 57.6, and recall was 41.7. Discussion Although a rich set of features was developed for the classifiers presented in this paper, little knowledge mining was performed from medical ontologies such as those found in UMLS. Future studies should incorporate features extracted from such knowledge sources, which we expect to further improve the results. Moreover, each relation discovery was treated independently. Joint classification of relations may further improve the quality of results. Also, joint learning of the discovery of concepts, assertions, and relations may also improve the results of automatic relation extraction. Conclusion Lexical and contextual features proved to be very important in relation extraction from medical texts. When they are not available to the classifier, the F1 score decreases by 3.7%. In addition, features based on similarity contribute to a decrease of 1.1% when they are not available. PMID:21846787
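
    Using the stated definition, the reported scores can be checked directly (the first case gives 73.6 from the rounded precision/recall, versus the 73.7 the paper reports from unrounded values):

      def f1(p, r):
          return 2 * p * r / (p + r)

      print(round(f1(72.0, 75.3), 1))   # 73.6 -- gold-standard concepts/assertions
      print(round(f1(57.6, 41.7), 1))   # 48.4 -- automatically discovered concepts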

  6. Background Knowledge in Learning-Based Relation Extraction

    ERIC Educational Resources Information Center

    Do, Quang Xuan

    2012-01-01

    In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…

  8. Extracting infrared absolute reflectance from relative reflectance measurements.

    PubMed

    Berets, Susan L; Milosevic, Milan

    2012-06-01

    Absolute reflectance measurements are valuable to the optics industry for development of new materials and optical coatings. Yet, absolute reflectance measurements are notoriously difficult to make. In this paper, we investigate the feasibility of extracting the absolute reflectance from a relative reflectance measurement using a reference material with known refractive index.
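
    One common form of the idea: at normal incidence, the reference's absolute reflectance follows from its known refractive index via the Fresnel relation R = ((n - 1)/(n + 1))^2, which converts the relative measurement to an absolute one. A sketch with invented numbers:

      n = 2.4                                    # reference refractive index -- assumed
      R_reference = ((n - 1) / (n + 1)) ** 2     # ~0.17 at normal incidence
      R_relative = 0.55                          # measured sample/reference ratio -- made up
      print(round(R_relative * R_reference, 3))  # absolute sample reflectance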

  9. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data

    PubMed Central

    Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.

    2016-01-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal-appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
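
    Of the features listed, crypt circularity is typically the dimensionless ratio 4*pi*A/P^2 (1.0 for a perfect circle); the abstract does not give QFEA's exact definition, so the following is one plausible form:

      import numpy as np
      from skimage import measure

      mask = np.zeros((100, 100), np.uint8)
      rr, cc = np.ogrid[:100, :100]
      mask[(rr - 50) ** 2 + (cc - 50) ** 2 <= 30 ** 2] = 1   # synthetic circular "crypt"

      props = measure.regionprops(measure.label(mask))[0]
      circularity = 4 * np.pi * props.area / props.perimeter ** 2
      print(f"{circularity:.2f}")                            # close to 1.0 for a circle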

  10. Fully automated Liquid Extraction-Based Surface Sampling and Ionization Using a Chip-Based Robotic Nanoelectrospray Platform

    SciTech Connect

    Kertesz, Vilmos; Van Berkel, Gary J

    2010-01-01

    A fully automated liquid extraction-based surface sampling device utilizing an Advion NanoMate chip-based infusion nanoelectrospray ionization system is reported. Analyses were enabled for discrete spot sampling by using the Advanced User Interface of the current commercial control software. This software interface provided the parameter control necessary for the NanoMate robotic pipettor to both form and withdraw a liquid microjunction for sampling from a surface. The system was tested with three analytically important types of sample surface, viz., spotted sample arrays on a MALDI plate, dried blood spots on paper, and whole-body thin tissue sections from drug-dosed mice. The qualitative and quantitative data were consistent with previous studies employing other liquid extraction-based surface sampling techniques. The successful analyses performed here utilized the hardware and software elements already present in the NanoMate system developed to handle and analyze liquid samples. Implementation of an appropriate sample (surface) holder, a solvent reservoir, faster movement of the robotic arm, finer control over the solvent flow rate when dispensing and retrieving the solution at the surface, and the ability to select any location on a surface to sample from would improve the analytical performance and utility of the platform.

  11. A Simple Method for Automated Solid Phase Extraction of Water Samples for Immunological Analysis of Small Pollutants.

    PubMed

    Heub, Sarah; Tscharner, Noe; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2016-01-01

    A new method for solid phase extraction (SPE) of environmental water samples is proposed. The developed prototype is cost-efficient and user-friendly, and enables rapid, automated and simple SPE. The pre-concentrated solution is compatible with analysis by immunoassay, with a low organic solvent content. A method is described for the extraction and pre-concentration of the natural hormone 17β-estradiol from 100 ml water samples. Reverse phase SPE is performed with octadecyl-silica sorbent, and elution is done with 200 µl of methanol 50% v/v. The eluent is then diluted with de-ionized water to lower the methanol content. After manual preparation of the SPE column, the overall procedure is performed automatically within 1 hr. At the end of the process, the estradiol concentration is measured using a commercial enzyme-linked immunosorbent assay (ELISA). A 100-fold pre-concentration is achieved and the methanol content is only 10% v/v. Full recoveries of the molecule are achieved with 1 ng/L spiked de-ionized and synthetic sea water samples.
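
    The stated numbers are mutually consistent if the diluted eluent volume is 1 ml (an assumption; the abstract does not state the final volume):

      sample_volume = 100.0   # ml loaded onto the SPE column
      eluate = 0.200          # ml of 50% v/v methanol
      final_volume = 1.0      # ml after dilution with de-ionized water -- assumed

      print(sample_volume / final_volume)          # 100-fold pre-concentration
      print(100 * 0.50 * eluate / final_volume)    # 10% v/v methanol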

  12. Automated reference region extraction and population-based input function for brain [11C]TMSX PET image analyses

    PubMed Central

    Rissanen, Eero; Tuisku, Jouni; Luoto, Pauliina; Arponen, Eveliina; Johansson, Jarkko; Oikonen, Vesa; Parkkola, Riitta; Airas, Laura; Rinne, Juha O

    2015-01-01

    [11C]TMSX ([7-N-methyl-11C]-(E)-8-(3,4,5-trimethoxystyryl)-1,3,7-trimethylxanthine) is a selective adenosine A2A receptor (A2AR) radioligand. In the central nervous system (CNS), A2AR are linked to dopamine D2 receptor function in the striatum, but they are also important modulators of inflammation. The gold standard for kinetic modeling of brain [11C]TMSX positron emission tomography (PET) is to obtain the arterial input function via arterial blood sampling. However, this method is laborious, prone to errors and unpleasant for study subjects. The aim of this work was to evaluate alternative input function acquisition methods for brain [11C]TMSX PET imaging. First, a noninvasive, automated method for the extraction of a gray matter reference region using supervised clustering (SCgm) was developed. Second, a method for obtaining a population-based arterial input function (PBIF) was implemented. These methods were created using data from 28 study subjects (7 healthy controls, 12 multiple sclerosis patients, and 9 patients with Parkinson's disease). The results with PBIF correlated well with the original plasma input, and the SCgm yielded results similar to those obtained with the cerebellum as a reference region. The clustering method for extracting the reference region and the population-based approach for acquiring input for dynamic [11C]TMSX brain PET image analyses appear to be feasible and robust methods that can be applied in patients with CNS pathology. PMID:25370856

  13. Progress in automated extraction and purification of in situ 14C from quartz: Results from the Purdue in situ 14C laboratory

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Goehring, Brent; Wilson, Jim; Kubley, Thomas; Caffee, Marc

    2015-10-01

    Current extraction methods for in situ 14C from quartz [e.g., Lifton et al. (2001), Pigati et al. (2010), Hippe et al. (2013)] are time-consuming and repetitive, making them an attractive target for automation. We report on the status of in situ 14C extraction and purification systems, originally automated at the University of Arizona, that have now been reconstructed and upgraded at the Purdue Rare Isotope Measurement Laboratory (PRIME Lab). The Purdue in situ 14C laboratory builds on the flow-through extraction system design of Pigati et al. (2010), automating most of the procedure by retrofitting existing valves with external servo-controlled actuators, regulating the pressure of research-purity O2 inside the furnace tube via a PID-based pressure controller in concert with an inlet mass flow controller, and installing an automated liquid N2 distribution system, all driven by LabVIEW® software. A separate system for cryogenic CO2 purification, dilution, and splitting is also fully automated, ensuring a highly repeatable process regardless of the operator. We present results from procedural blanks and an intercomparison material (CRONUS-A), as well as results of experiments to increase the amount of material used in extraction, from the standard 5 g to 10 g or above. Results thus far are quite promising, with procedural blanks comparable to previous work and significant improvements in reproducibility for CRONUS-A measurements. The latter analyses also demonstrate the feasibility of quantitative extraction of in situ 14C from sample masses up to 10 g. Our lab is now analyzing unknowns routinely, but lowering overall blank levels is the focus of ongoing research.

  14. Coreference based event-argument relation extraction on biomedical text

    PubMed Central

    2011-01-01

    This paper presents a new approach to exploit coreference information for extracting event-argument (E-A) relations from biomedical documents. This approach has two advantages: (1) it can extract a large number of valuable E-A relations based on the concept of salience in discourse; (2) it enables us to identify E-A relations over sentence boundaries (cross-links) using transitivity of coreference relations. We propose two coreference-based models: a pipeline based on Support Vector Machine (SVM) classifiers, and a joint Markov Logic Network (MLN). We show the effectiveness of these models on a biomedical event corpus. Both models outperform the systems that do not use coreference information. When the two proposed models are compared to each other, joint MLN outperforms pipeline SVM with gold coreference information. PMID:22166257

  15. An automated system for retrieving herb-drug interaction related articles from MEDLINE

    PubMed Central

    Lin, Kuo; Friedman, Carol; Finkelstein, Joseph

    2016-01-01

    An automated, user-friendly and accurate system for retrieving herb-drug interaction (HDI) related articles from MEDLINE can increase patient safety, as well as improve the speed and experience of physicians' article retrieval. Previous studies show that MeSH-based queries associated with negative effects of drugs can be customized, resulting in good performance in retrieving relevant information, but no study has focused on the area of HDIs. This paper adapted the characteristics of HDI-related papers and created a multilayer HDI article searching system. It achieved a sensitivity of 92% at a precision of 93% in a preliminary evaluation. Instead of requiring physicians to conduct PubMed searches directly, this system applies a more user-friendly approach by employing a customized system that enhances PubMed queries, shielding users from having to write queries, deal with PubMed, or read many irrelevant articles. The system provides automated processing and outputs target articles based on the input. PMID:27570662
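
    Illustrative only: a customized PubMed query of the general kind such a system might issue through Biopython's Entrez interface; the query terms below are made up and are not the authors' multilayer filter:

      from Bio import Entrez

      Entrez.email = "user@example.org"   # identify yourself to NCBI
      query = ('"Herb-Drug Interactions"[MeSH Terms] '
               'OR ("Plant Extracts"[MeSH Terms] AND "Drug Interactions"[MeSH Terms])')
      handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
      print(Entrez.read(handle)["IdList"])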

  16. Simultaneous determination of 15 aminoglycoside(s) residues in animal derived foods by automated solid-phase extraction and liquid chromatography-tandem mass spectrometry.

    PubMed

    Tao, Yanfei; Chen, Dongmei; Yu, Huan; Huang, Lingli; Liu, Zhaoying; Cao, Xiaoqin; Yan, Caixia; Pan, Yuanhu; Liu, Zhenli; Yuan, Zonghui

    2012-11-15

    An automated method has been developed for the simultaneous quantification of 15 aminoglycosides in muscle, liver (pigs, chicken and cattle), kidney (pigs and cattle), cow milk, and hen eggs by liquid chromatography tandem mass spectrometry. Homogenized samples were extracted with monopotassium phosphate buffer (including ethylenediaminetetraacetic acid) and cleaned up by automated solid-phase extraction on carboxylic acid cartridges. The analytes were separated on a column specialized for aminoglycosides and eluted with trifluoroacetic acid and acetonitrile. The decision limits (CCα) of apramycin, gentamycin, tobramycin, paromomycin, hygromycin, neomycin, kanamycin, sisomicin, netilmicin, ribostamycin, kasugamycin, amikacin, streptomycin, dihydrostreptomycin and spectinomycin ranged from 8.1 to 11.8 μg/kg, and the detection capabilities (CCβ) from 16.4 to 21.8 μg/kg. High correlation coefficients (r(2)>0.99) of the calibration curves were obtained within the linear range of 20 to 1000 μg/kg. Reasonable recoveries (71-108%) were demonstrated with excellent relative standard deviations (RSDs). This method offers simple pretreatment, rapid determination and high sensitivity, and can be used for the determination of multiple aminoglycosides in complex samples. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Rhythmic brushstrokes distinguish van Gogh from his contemporaries: findings via automated brushstroke extraction.

    PubMed

    Li, Jia; Yao, Lei; Hendriks, Ella; Wang, James Z

    2012-06-01

    Art historians have long observed the highly characteristic brushstroke styles of Vincent van Gogh and have relied on discerning these styles for authenticating and dating his works. In our work, we compared van Gogh with his contemporaries by statistically analyzing a massive set of automatically extracted brushstrokes. A novel extraction method is developed by exploiting an integration of edge detection and clustering-based segmentation. Evidence substantiates that van Gogh's brushstrokes are strongly rhythmic. That is, regularly shaped brushstrokes are tightly arranged, creating a repetitive and patterned impression. We also found that the traits that distinguish van Gogh's paintings in different time periods of his development are all different from those distinguishing van Gogh from his peers. This study confirms that the combined brushwork features identified as special to van Gogh are consistently held throughout his French periods of production (1886-1890).

  18. Automated object extraction from remote sensor image based on adaptive thresholding technique

    NASA Astrophysics Data System (ADS)

    Zhao, Tongzhou; Ma, Shuaijun; Li, Jin; Ming, Hui; Luo, Xiaobo

    2009-10-01

    Detection and extraction of dim, moving small objects in infrared image sequences is an active research area. A system for detecting dim, moving small targets in IR image sequences is presented, and a new high-performance algorithm for extracting moving small targets from infrared image sequences containing cloud clutter is proposed. This method achieves better detection precision than comparable methods, and the computation can be carried out by two independent units. The novelty of the algorithm is that it applies adaptive thresholding to the moving small targets in both the spatial domain and the temporal domain. Experimental results show that the proposed algorithm achieves high detection precision.
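
    A minimal sketch of the spatial half of such an adaptive threshold (flagging pixels that exceed the local background by k standard deviations); the paper's temporal test and exact statistics are not reproduced here:

      import numpy as np
      from scipy import ndimage

      frame = np.random.rand(128, 128).astype(np.float32)   # stand-in IR frame
      frame[60:62, 60:62] += 2.0                            # implanted dim target

      local_mean = ndimage.uniform_filter(frame, size=15)
      local_sq = ndimage.uniform_filter(frame ** 2, size=15)
      local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))

      k = 4.0                                               # threshold factor (tunable)
      print(np.argwhere(frame > local_mean + k * local_std))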

  19. Automating identification of avian vocalizations using time-frequency information extracted from the Gabor transform.

    PubMed

    Connor, Edward F; Li, Shidong; Li, Steven

    2012-07-01

    Based on the Gabor transform, a metric is developed and applied to automatically identify bird species from a sample of 568 digital recordings of songs/calls from 67 species of birds. The Gabor frequency-amplitude spectrum and the Gabor time-amplitude profile are proposed as a means to characterize the frequency and time patterns of a bird song. An approach based on template matching where unknown song clips are compared to a library of known song clips is used. After adding noise to simulate the background environment and using an adaptive high-pass filter to de-noise the recordings, the successful identification rate exceeded 93% even at signal-to-noise ratios as low as 5 dB. Bird species whose songs/calls were dominated by low frequencies were more difficult to identify than species whose songs were dominated by higher frequencies. The results suggest that automated identification may be practical if comprehensive libraries of recordings that encompass the vocal variation within species can be assembled.
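
    A rough sketch of the proposed quantities, under the assumption that the Gabor frequency-amplitude spectrum can be approximated by the time-averaged magnitude of a Gaussian-windowed STFT, with library matching by correlation:

      import numpy as np
      from scipy.signal import stft

      fs = 22050
      t = np.arange(fs) / fs
      clip = np.sin(2 * np.pi * 3000 * t)                # stand-in for a song clip

      def gabor_spectrum(x):
          _, _, Z = stft(x, fs=fs, window=("gaussian", 64), nperseg=512)
          return np.abs(Z).mean(axis=1)                  # frequency-amplitude profile

      library = {"species_A": gabor_spectrum(clip)}      # known clips (toy library)
      query = gabor_spectrum(clip + 0.1 * np.random.randn(fs))
      best = max(library, key=lambda k: np.corrcoef(library[k], query)[0, 1])
      print(best)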

  20. Quantification of lung tumor rotation with automated landmark extraction using orthogonal cine MRI images

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Lee, Danny; Greer, Peter B.; Baroni, Guido; Riboldi, Marco; Keall, Paul

    2015-09-01

    The quantification of tumor motion in sites affected by respiratory motion is of primary importance to improve treatment accuracy. To account for motion, different studies analyzed the translational component only, without focusing on the rotational component, which was quantified in a few studies on the prostate with implanted markers. The aim of our study was to propose a tool able to quantify lung tumor rotation without the use of internal markers, thus providing accurate motion detection close to critical structures such as the heart or liver. Specifically, we propose the use of an automatic feature extraction method in combination with the acquisition of fast orthogonal cine MRI images of nine lung patients. As a preliminary test, we evaluated the performance of the feature extraction method by applying it on regions of interest around (i) the diaphragm and (ii) the tumor and comparing the estimated motion with that obtained by (i) the extraction of the diaphragm profile and (ii) the segmentation of the tumor, respectively. The results confirmed the capability of the proposed method in quantifying tumor motion. Then, a point-based rigid registration was applied to the extracted tumor features between all frames to account for rotation. The median lung rotation values were  -0.6   ±   2.3° and  -1.5   ±   2.7° in the sagittal and coronal planes respectively, confirming the need to account for tumor rotation along with translation to improve radiotherapy treatment.
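
    The point-based rigid registration step can be realized with the Kabsch (SVD) method; a minimal sketch on synthetic matched landmarks, not the authors' code:

      import numpy as np

      def rotation_deg(P, Q):
          """In-plane rotation best mapping landmark set P onto Q (Kabsch)."""
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          if np.linalg.det((U @ Vt).T) < 0:   # guard against reflections
              Vt[-1] *= -1
          R = (U @ Vt).T
          return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

      rng = np.random.default_rng(0)
      P = rng.random((30, 2))                 # features on frame 1
      a = np.radians(-1.5)
      Rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
      print(round(rotation_deg(P, P @ Rot.T), 2))   # recovers -1.5 degrees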

  1. Exploratory normalized difference water indices for semi-automated extraction of Antarctic lake features

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

    This work presents various normalized difference water indices (NDWI) to delineate lakes in the Schirmacher Oasis, East Antarctica, using very high resolution WorldView-2 (WV-2) satellite imagery. The Schirmacher Oasis region hosts a number of freshwater and saline lakes, such as epishelf and landlocked lakes, which may be completely frozen, semi-frozen, or ice-free. Hence, distinctly detecting all these types of lakes on satellite imagery was the major challenge, as the spectral characteristics of the various lake types were identical to those of other land cover targets. The multiband spectral-index, pixel-based approach is a widely tested and growing technique because of advantages such as its simplicity and comparatively short processing time. In the present study, semi-automatic extraction of lakes in a cryospheric region was carried out by designing specific spectral indices. The study utilized a number of existing spectral indices to extract lakes, but none delivered satisfactory results, and hence the NDWI was modified. The potential of the newly added bands in WV-2 imagery was explored by developing spectral indices comprising the Yellow (585 - 625 nm) band in combination with the Blue (450 - 510 nm), Coastal (400 - 450 nm) and Green (510 - 580 nm) bands. For the extraction of frozen lakes, the Yellow/near-infrared 2 (NIR2) and Yellow/Green band pairs worked well, whereas for ice-free lake extraction a combination of the Blue and Coastal bands yielded appreciable results when compared with manually digitized data. The results suggest that the modified NDWI approach yielded a bias error varying from 1 to 34 m2.
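
    All the indices explored share the normalized-difference form (A - B)/(A + B), with different band pairs substituted; a minimal sketch, with an assumed scene-dependent threshold:

      import numpy as np

      def nd_index(band_a, band_b):
          a, b = band_a.astype(float), band_b.astype(float)
          return (a - b) / (a + b + 1e-12)    # epsilon avoids division by zero

      yellow = np.random.randint(1, 255, (50, 50))   # stand-ins for WV-2 bands
      nir2 = np.random.randint(1, 255, (50, 50))
      lake_mask = nd_index(yellow, nir2) > 0.3       # threshold is scene-dependent (assumed)
      print(lake_mask.sum())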

  2. Automated extraction of DNA from blood and PCR setup using a Tecan Freedom EVO liquid handler for forensic genetic STR typing of reference samples.

    PubMed

    Stangegaard, Michael; Frøslev, Tobias G; Frank-Hansen, Rune; Hansen, Anders J; Morling, Niels

    2011-04-01

    We have implemented and validated automated protocols for DNA extraction and PCR setup using a Tecan Freedom EVO liquid handler mounted with the Te-MagS magnetic separation device (Tecan, Männedorf, Switzerland). The protocols were validated for accredited forensic genetic work according to ISO 17025 using the Qiagen MagAttract DNA Mini M48 kit (Qiagen GmbH, Hilden, Germany) on fresh whole blood and blood from deceased individuals. The workflow was simplified by returning the DNA extracts to the original tubes, minimizing the risk of misplacing samples. The tubes that originally contained the samples were washed with MilliQ water before the return of the DNA extracts. The PCR was set up in 96-well microtiter plates. The methods were validated for the kits AmpFℓSTR Identifiler, SGM Plus and Yfiler (Applied Biosystems, Foster City, CA), and GenePrint FFFL and PowerPlex Y (Promega, Madison, WI). The automated protocols allowed for extraction and addition of PCR master mix for 96 samples within 3.5 h. In conclusion, we demonstrated that (1) DNA extraction with magnetic beads and (2) PCR setup for accredited forensic genetic short tandem repeat typing can be implemented on a simple automated liquid handler, reducing manual work and increasing quality and throughput.

  3. Automated sample preparation based on the sequential injection principle. Solid-phase extraction on a molecularly imprinted polymer coupled on-line to high-performance liquid chromatography.

    PubMed

    Theodoridis, Georgios; Zacharis, Constantinos K; Tzanavaras, Paraskevas D; Themelis, Demetrius G; Economou, Anastasios

    2004-03-19

    A molecularly imprinted polymer (MIP) prepared using caffeine as a template was validated as a selective sorbent for solid-phase extraction (SPE) within an automated on-line sample preparation method. The polymer produced was packed in a polypropylene cartridge, which was incorporated in a flow system prior to the HPLC analytical instrumentation. The principle of sequential injection was utilised for a rapid, automated and efficient SPE procedure on the MIP. Samples, buffers, washing and elution solvents were introduced to the extraction cartridge via a peristaltic pump and a multi-position valve, both controlled by appropriate software developed in-house. The method was optimised in terms of flow rates, extraction time and volume. After extraction, the final eluent from the extraction cartridge was directed to the injection loop and was subsequently analysed by HPLC. The overall set-up facilitated unattended operation and improved both mixing fluidics and method-development flexibility. This system may be readily built in the laboratory and can further be used as an automated platform for on-line sample preparation.

  4. Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry.

    PubMed

    Lacson, Ronilda; Harris, Kimberly; Brawarsky, Phyllis; Tosteson, Tor D; Onega, Tracy; Tosteson, Anna N A; Kaye, Abby; Gonzalez, Irina; Birdwell, Robyn; Haas, Jennifer S

    2015-10-01

    Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute's Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compare automatically extracted data elements to a "gold standard" based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings varies by data element and modality (e.g., suspicious calcification noted in 2.6% of screening mammograms, 12.1% of diagnostic mammograms, and 9.4% of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), ranges from 0.8 to 1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes.

  5. An energy minimization approach to automated extraction of regular building footprints from airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    He, Y.; Zhang, C.; Fraser, C. S.

    2014-08-01

    This paper presents an automated approach to the extraction of building footprints from airborne LiDAR data based on energy minimization. Automated 3D building reconstruction in complex urban scenes has been a long-standing challenge in photogrammetry and computer vision. Building footprints constitute a fundamental component of a 3D building model and they are useful for a variety of applications. Airborne LiDAR provides large-scale elevation representation of urban scene and as such is an important data source for object reconstruction in spatial information systems. However, LiDAR points on building edges often exhibit a jagged pattern, partially due to either occlusion from neighbouring objects, such as overhanging trees, or to the nature of the data itself, including unavoidable noise and irregular point distributions. The explicit 3D reconstruction may thus result in irregular or incomplete building polygons. In the presented work, a vertex-driven Douglas-Peucker method is developed to generate polygonal hypotheses from points forming initial building outlines. The energy function is adopted to examine and evaluate each hypothesis and the optimal polygon is determined through energy minimization. The energy minimization also plays a key role in bridging gaps, where the building outlines are ambiguous due to insufficient LiDAR points. In formulating the energy function, hard constraints such as parallelism and perpendicularity of building edges are imposed, and local and global adjustments are applied. The developed approach has been extensively tested and evaluated on datasets with varying point cloud density over different terrain types. Results are presented and analysed. The successful reconstruction of building footprints, of varying structural complexity, along with a quantitative assessment employing accurate reference data, demonstrate the practical potential of the proposed approach.
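
    The hypothesis-generation step rests on Douglas-Peucker simplification of the initial outline; a toy illustration via Shapely (the paper's vertex-driven variant and energy function are not reproduced):

      from shapely.geometry import Polygon

      jagged = Polygon([(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (3.05, 1),
                        (2.96, 2), (3, 3), (2, 3.03), (1, 2.97), (0, 3)])
      hypothesis = jagged.simplify(0.1)   # Douglas-Peucker, tolerance in map units
      print(len(jagged.exterior.coords), "->", len(hypothesis.exterior.coords))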

  6. A device for automated direct sampling and quantitation from solid-phase sorbent extraction cards by electrospray tandem mass spectrometry.

    PubMed

    Wachs, Timothy; Henion, Jack

    2003-04-01

    A new solid-phase extraction (SPE) device in the 96-well format (SPE Card) has been employed for automated off-line sample preparation of low-volume urine samples. On-line automated analyte elution via SPE and direct quantitation by micro ion spray mass spectrometry is reported. This sample preparation device has the format of a microtiter plate and is molded in a plastic frame which houses 96 separate sandwiched 3M Empore sorbents (0.5-mm-thickness, 8-microm particles) covered on both sides by a microfiber support material. Ninety-six discrete SPE zones, each 7 mm in diameter, are imbedded into the sheet in the conventional 9-mm pitch (spacing) of a 96-well microtiter plate. In this study one-quarter of an SPE Card (24 individual zones) was used merely as a convenience. After automated off-line interference elution of applied human urine from 24 samples, a section of SPE Card is mounted vertically on a computer-controlled X, Y, Z positioner in front of a micro ion spray direct sampling tube equipped with a beveled tip. The beveled tip of this needle robotically penetrates each SPE elution zone (sorbent disk) or stationary phase in a serial fashion. The eluted analytes are sequentially transferred directly to a microelectrosprayer to obtain tandem mass spectrometric (MS/MS) analysis. This strategy precludes any HPLC separation and the associated method development. The quantitative determination of Ritalin (methylphenidate) from fortified human urine samples is demonstrated. A trideuterated internal standard of methylphenidate was used to obtain ion current response ratios between the parent drug and the internal standard. Human control urine samples fortified from 6.6 to 3300 ng/mL (normal therapeutic levels have been determined in other studies to be between 50 and 100 ng/mL urine) were analyzed and a linear calibration curve was obtained with a correlation coefficient of 0.9999, where the precision of the quality control (QC) samples ranged from 9.6% at the 24

  7. Sortal anaphora resolution to enhance relation extraction from biomedical literature.

    PubMed

    Kilicoglu, Halil; Rosemblat, Graciela; Fiszman, Marcelo; Rindflesch, Thomas C

    2016-04-14

    Entity coreference is common in biomedical literature and it can affect text understanding systems that rely on accurate identification of named entities, such as relation extraction and automatic summarization. Coreference resolution is a foundational yet challenging natural language processing task which, if performed successfully, is likely to enhance such systems significantly. In this paper, we propose a semantically oriented, rule-based method to resolve sortal anaphora, a specific type of coreference that forms the majority of coreference instances in biomedical literature. The method addresses all entity types and relies on linguistic components of SemRep, a broad-coverage biomedical relation extraction system. It has been incorporated into SemRep, extending its core semantic interpretation capability from sentence level to discourse level. We evaluated our sortal anaphora resolution method in several ways. The first evaluation specifically focused on sortal anaphora relations. Our methodology achieved an F1 score of 59.6 on the test portion of a manually annotated corpus of 320 Medline abstracts, a 4-fold improvement over the baseline method. Investigating the impact of sortal anaphora resolution on relation extraction, we found that the overall effect was positive, with 50% of the changes involving uninformative relations being replaced by more specific and informative ones, while 35% of the changes had no effect, and only 15% were negative. We estimate that anaphora resolution results in changes in about 1.5% of the approximately 82 million semantic relations extracted from the entire PubMed. Our results demonstrate that a heavily semantic approach to sortal anaphora resolution is largely effective for biomedical literature. Our evaluation and error analysis highlight some areas for further improvement, such as coordination processing and intra-sentential antecedent selection.
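
    SemRep's linguistically motivated rules are far richer than can be shown here, but the core recency-and-type heuristic behind sortal anaphora resolution can be sketched as follows (the type mapping and entity tuples are illustrative, not SemRep's actual data structures).

    ```python
    # Minimal sketch: an anaphoric noun phrase such as "this enzyme" is
    # resolved to the nearest preceding entity of a compatible semantic type.
    ANAPHOR_TYPES = {"protein": "Protein", "enzyme": "Protein", "gene": "Gene",
                     "drug": "Drug", "disease": "Disease"}  # illustrative mapping

    def resolve_sortal_anaphor(entities, anaphor_head, anaphor_pos):
        """entities: list of (surface_form, semantic_type, char_offset) mentions.
        Returns the closest preceding type-compatible mention, or None."""
        wanted = ANAPHOR_TYPES.get(anaphor_head.lower())
        candidates = [e for e in entities if e[1] == wanted and e[2] < anaphor_pos]
        # Recency heuristic: prefer the most recent compatible mention.
        return max(candidates, key=lambda e: e[2]) if candidates else None

    entities = [("TP53", "Gene", 10), ("p53", "Protein", 45), ("aspirin", "Drug", 80)]
    print(resolve_sortal_anaphor(entities, "protein", anaphor_pos=120))  # p53
    ```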

  8. Validation of an automated solid-phase extraction method for the analysis of 23 opioids, cocaine, and metabolites in urine with ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ramírez Fernández, María del Mar; Van Durme, Filip; Wille, Sarah M R; di Fazio, Vincent; Kummer, Natalie; Samyn, Nele

    2014-06-01

    The aim of this work was to automate a sample preparation procedure extracting morphine, hydromorphone, oxymorphone, norcodeine, codeine, dihydrocodeine, oxycodone, 6-monoacetyl-morphine, hydrocodone, ethylmorphine, benzoylecgonine, cocaine, cocaethylene, tramadol, meperidine, pentazocine, fentanyl, norfentanyl, buprenorphine, norbuprenorphine, propoxyphene, methadone and 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine from urine samples. Samples were extracted by solid-phase extraction (SPE) with cation exchange cartridges using a TECAN Freedom Evo 100 base robotic system, including a hydrolysis step prior to extraction when required. Block modules were carefully selected in order to use the same consumable material as in manual procedures and to reduce cost and/or manual sample transfers. Moreover, the present configuration included pressure-monitored pipetting, increasing pipetting accuracy and enabling the detection of sampling errors. The compounds were then separated in a 9-min chromatographic run using a BEH Phenyl analytical column on an ultra-performance liquid chromatography-tandem mass spectrometry system. Optimization of the SPE was performed with different wash conditions and elution solvents. Intra- and inter-day relative standard deviations (RSDs) were within ±15% and bias was within ±15% for most of the compounds. Recovery was >69% (RSD < 11%) and matrix effects ranged from 1 to 26% when compensated with the internal standard. The limits of quantification ranged from 3 to 25 ng/mL depending on the compound. No cross-contamination in the automated SPE system was observed. The extracted samples were stable for 72 h in the autosampler (4°C). This method was applied to authentic samples (from forensic and toxicology cases) and to proficiency testing schemes containing cocaine, heroin, buprenorphine and methadone, offering fast and reliable results. Automation resulted in improved precision and accuracy, and a minimum of operator intervention, leading to safer sample
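
    The acceptance criteria quoted above (RSD and bias within ±15%) follow the standard formulas; a minimal sketch, with hypothetical replicate QC data, is:

    ```python
    import numpy as np

    def rsd_and_bias(measured, nominal):
        """Relative standard deviation (%) and bias (%) for replicate QC
        measurements of a sample with a known nominal concentration."""
        measured = np.asarray(measured, dtype=float)
        rsd = 100.0 * measured.std(ddof=1) / measured.mean()
        bias = 100.0 * (measured.mean() - nominal) / nominal
        return rsd, bias

    # Hypothetical replicate results (ng/mL) for a QC sample spiked at 50 ng/mL.
    rsd, bias = rsd_and_bias([47.8, 52.1, 49.5, 51.0, 48.9], nominal=50.0)
    print(f"RSD = {rsd:.1f}%, bias = {bias:+.1f}%")  # both should fall within ±15%
    ```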

  9. Automated Extraction of VTE Events From Narrative Radiology Reports in Electronic Health Records: A Validation Study.

    PubMed

    Tian, Zhe; Sun, Simon; Eguale, Tewodros; Rochefort, Christian M

    2017-10-01

    Surveillance of venous thromboembolisms (VTEs) is necessary for improving patient safety in acute care hospitals, but current detection methods are inaccurate and inefficient. With the growing availability of clinical narratives in an electronic format, automated surveillance using natural language processing (NLP) techniques may represent a better method. We assessed the accuracy of using symbolic NLP for identifying the 2 clinical manifestations of VTE, deep vein thrombosis (DVT) and pulmonary embolism (PE), from narrative radiology reports. A random sample of 4000 narrative reports was selected among imaging studies that could diagnose DVT or PE, and that were performed between 2008 and 2012 in a university health network of 5 adult-care hospitals in Montreal (Canada). The reports were coded by clinical experts to identify positive and negative cases of DVT and PE, which served as the reference standard. Using data from the largest hospital (n=2788), 2 symbolic NLP classifiers were trained: one for DVT, the other for PE. The accuracy of these classifiers was tested on data from the other 4 hospitals (n=1212). On manual review, 663 DVT-positive and 272 PE-positive reports were identified. In the testing dataset, the DVT classifier achieved 94% sensitivity (95% CI, 88%-97%), 96% specificity (95% CI, 94%-97%), and 73% positive predictive value (95% CI, 65%-80%), whereas the PE classifier achieved 94% sensitivity (95% CI, 89%-97%), 96% specificity (95% CI, 95%-97%), and 80% positive predictive value (95% CI, 73%-85%). Symbolic NLP can accurately identify VTEs from narrative radiology reports. This method could facilitate VTE surveillance and the evaluation of preventive measures.
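
    Metrics like these are derived from confusion-matrix counts; the sketch below uses the Wilson score interval for the 95% CIs (the abstract does not state which interval the authors used) and hypothetical counts.

    ```python
    import math

    def wilson_ci(successes, n, z=1.96):
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    def classifier_metrics(tp, fp, tn, fn):
        """Sensitivity, specificity and PPV, each with a 95% CI."""
        return {
            "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
            "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
            "ppv":         (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        }

    # Hypothetical confusion-matrix counts for a DVT classifier on a test set.
    for name, (est, (lo, hi)) in classifier_metrics(tp=160, fp=59, tn=980, fn=13).items():
        print(f"{name}: {est:.2%} (95% CI {lo:.0%}-{hi:.0%})")
    ```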

  10. Applicability of a system for fully automated nucleic acid extraction from formalin-fixed paraffin-embedded sections for routine KRAS mutation testing.

    PubMed

    Lehmann, Annika; Schewe, Christiane; Hennig, Guido; Denkert, Carsten; Weichert, Wilko; Budczies, Jan; Dietel, Manfred

    2012-06-01

    Due to the approval of various new targeted therapies for the treatment of cancer, molecular pathology laboratories with a diagnostic focus have to meet new challenges: simultaneous handling of a large number of samples, small amounts of input material, and fragmentation of nucleic acids because of formalin fixation. As a consequence, fully automated systems for a fast and standardized extraction of high-quality DNA from formalin-fixed paraffin-embedded (FFPE) tissues are urgently needed. In this study, we tested the performance of a fully automated, high-throughput method for the extraction of nucleic acids from FFPE tissues. We investigated the extraction performance in sections of 5 different tissue types often analyzed in routine pathology laboratories (cervix, colon, liver, lymph node, and lung; n=340). Furthermore, we compared the quality, labor input, and applicability of the method for diagnostic purposes with those of a laboratory-validated manual method in a clinical setting by screening a set of 45 colorectal adenocarcinomas for KRAS mutations. Automated extraction of both DNA and RNA was successful in 339 of 340 FFPE samples representing 5 different tissue types. In comparison with a conventional manual extraction protocol, the method showed an overall agreement of 97.7% (95% confidence interval, 88.2%-99.9%) for the subsequent mutational analysis of the KRAS gene in colorectal cancer samples. The fully automated system is a promising tool for a simple, robust, and rapid extraction of DNA and RNA from formalin-fixed tissue. It ensures a standardization of sample processing and can be applied to clinical FFPE samples in routine pathology.

  11. Comparative evaluation of automated and manual commercial DNA extraction methods for detection of Francisella tularensis DNA from suspensions and spiked swabs by real-time polymerase chain reaction.

    PubMed

    Dauphin, Leslie A; Walker, Roblena E; Petersen, Jeannine M; Bowen, Michael D

    2011-07-01

    This study evaluated commercial automated and manual DNA extraction methods for the isolation of Francisella tularensis DNA suitable for real-time polymerase chain reaction (PCR) analysis from cell suspensions and spiked cotton, foam, and polyester swabs. Two automated methods, the MagNA Pure Compact and the QIAcube, were compared to 4 manual methods, the IT 1-2-3 DNA sample purification kit, the MasterPure Complete DNA and RNA purification kit, the QIAamp DNA blood mini kit, and the UltraClean Microbial DNA isolation kit. The methods were compared using 6 F. tularensis strains representing the 2 subspecies which cause the majority of reported cases of tularemia in humans. Cell viability testing of the DNA extracts showed that all 6 extraction methods efficiently inactivated F. tularensis at concentrations of ≤10⁶ CFU/mL. Real-time PCR analysis using a multitarget 5' nuclease assay for F. tularensis revealed that the PCR sensitivity was equivalent using DNA extracted by the 2 automated methods and the manual MasterPure and QIAamp methods. These 4 methods yielded significantly better levels of detection from bacterial suspensions than the remaining 2 methods and performed equivalently for spiked swab samples. This study identifies optimal DNA extraction methods for processing swab specimens for the subsequent detection of F. tularensis DNA using real-time PCR assays. Furthermore, the results provide diagnostic laboratories with the option to select from 2 automated DNA extraction methods as suitable alternatives to manual methods for the isolation of DNA from F. tularensis.

  12. A semi-automated methodology for finding lipid-related GO terms

    PubMed Central

    Fan, Mengyuan; Low, Hong Sang; Wenk, Markus R.; Wong, Limsoon

    2014-01-01

    Motivation: Although semantic similarity in Gene Ontology (GO) and other approaches may be used to find similar GO terms, there is as yet no method to systematically find a class of GO terms sharing a common property with high accuracy (e.g. involving human curation). Results: We have developed a methodology to address this issue and applied it to identify lipid-related GO terms, owing to the important and varied roles of lipids in many biological processes. Our methodology finds lipid-related GO terms in a semi-automated manner, requiring only moderate manual curation. We first obtain a list of lipid-related gold-standard GO terms by keyword search and manual curation. Then, based on the hypothesis that co-annotated GO terms share similar properties, we develop a machine learning method that expands the list of lipid-related terms from the gold standard. Those terms predicted most likely to be lipid related are examined by a human curator following specific curation rules to confirm the class labels. The structure of GO is also exploited to help reduce the curation effort. The prediction and curation cycle is repeated until no further lipid-related term is found. Our approach has covered a high proportion, if not all, of lipid-related terms with relatively high efficiency. Database URL: http://compbio.ddns.comp.nus.edu.sg/∼lipidgo PMID:25209026

  13. A semi-automated methodology for finding lipid-related GO terms.

    PubMed

    Fan, Mengyuan; Low, Hong Sang; Wenk, Markus R; Wong, Limsoon

    2014-01-01

    Although semantic similarity in Gene Ontology (GO) and other approaches may be used to find similar GO terms, there is as yet no method to systematically find a class of GO terms sharing a common property with high accuracy (e.g., involving human curation). We have developed a methodology to address this issue and applied it to identify lipid-related GO terms, owing to the important and varied roles of lipids in many biological processes. Our methodology finds lipid-related GO terms in a semi-automated manner, requiring only moderate manual curation. We first obtain a list of lipid-related gold-standard GO terms by keyword search and manual curation. Then, based on the hypothesis that co-annotated GO terms share similar properties, we develop a machine learning method that expands the list of lipid-related terms from the gold standard. Those terms predicted most likely to be lipid related are examined by a human curator following specific curation rules to confirm the class labels. The structure of GO is also exploited to help reduce the curation effort. The prediction and curation cycle is repeated until no further lipid-related term is found. Our approach has covered a high proportion, if not all, of lipid-related terms with relatively high efficiency. http://compbio.ddns.comp.nus.edu.sg/∼lipidgo. © The Author(s) 2014. Published by Oxford University Press.

  14. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting

    2015-08-01

    Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective grading is needed, interobserver discrepancies could be avoided using this fully automated method.

  15. Validation of an Algorithm for Semi-automated Estimation of Voice Relative Fundamental Frequency.

    PubMed

    Lien, Yu-An S; Heller Murray, Elizabeth S; Calabrese, Carolyn R; Michener, Carolyn M; Van Stan, Jarrad H; Mehta, Daryush D; Hillman, Robert E; Noordzij, J Pieter; Stepp, Cara E

    2017-10-01

    Relative fundamental frequency (RFF) has shown promise as an acoustic measure of voice, but the subjective and time-consuming nature of its manual estimation has made clinical translation infeasible. Here, a faster, more objective algorithm for RFF estimation is evaluated in a large and diverse sample of individuals with and without voice disorders. Acoustic recordings were collected from 154 individuals with voice disorders and 36 age- and sex-matched controls with typical voices. These recordings were split into a training set and 2 testing sets. Using an algorithm tuned to the training set, semi-automated RFF estimates in the testing sets were compared to manual RFF estimates derived from 3 trained technicians. The semi-automated RFF estimations were highly correlated (r = 0.82-0.91) with the manual RFF estimates. Faster and more objective estimation of RFF makes large-scale RFF analysis feasible. This algorithm allows for future work to optimize RFF measures and expand their potential for clinical voice assessment.

  16. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters T_eff, log g, and [Fe/H]. "Linearly supporting" means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method LARSbs, a bootstrap-based variant of least angle regression (LARS); third, estimate the atmospheric parameters T_eff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate T_eff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log T_eff (83 K for T_eff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log T_eff (32 K for T_eff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
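
    A minimal sketch of the pipeline's shape, using PyWavelets for the wavelet packet step and plain LASSO as a stand-in for the paper's LARSbs selector (the synthetic spectra and the regularization strength are illustrative):

    ```python
    import numpy as np
    import pywt                                   # PyWavelets
    from sklearn.linear_model import Lasso, LinearRegression

    def wp_coefficients(flux, wavelet="db4", level=4):
        """Represent one spectrum by its flattened level-4 WP coefficients."""
        wp = pywt.WaveletPacket(data=flux, wavelet=wavelet, maxlevel=level)
        return np.concatenate([node.data for node in wp.get_level(level, "natural")])

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(200, 1024))        # stand-in spectra (one per row)
    y = spectra[:, :16].sum(axis=1)               # toy "atmospheric parameter"

    X = np.vstack([wp_coefficients(f) for f in spectra])
    selector = Lasso(alpha=0.1, max_iter=5000).fit(X, y)  # sparse selection step
    support = np.flatnonzero(selector.coef_)              # the "supporting" features
    model = LinearRegression().fit(X[:, support], y)      # linear model on them
    print(f"{support.size} supporting features selected")
    ```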

  17. Pedestrian detection in thermal images: An automated scale based region extraction with curvelet space validation

    NASA Astrophysics Data System (ADS)

    Lakshmi, A.; Faheema, A. G. J.; Deodhare, Dipti

    2016-05-01

    Pedestrian detection is a key problem in night vision processing, with a dozen applications that stand to improve the performance of autonomous systems. Despite significant progress, our study shows that the performance of state-of-the-art thermal image pedestrian detectors still has much room for improvement. The purpose of this paper is to overcome the challenges faced by thermal image pedestrian detectors that employ intensity-based Region Of Interest (ROI) extraction followed by feature-based validation. The most striking disadvantage of the first module, ROI extraction, is the failed detection of cloth-insulated parts. To overcome this setback, this paper employs a region-growing algorithm tuned to the scale of the pedestrian. The statistics subtended by a pedestrian vary drastically with scale, and a deviation-from-normality approach facilitates scale detection. Further, the paper offers an adaptive mathematical threshold to resolve the problem of subtracting the background while still extracting cloth-insulated parts. The inherent false positives of the ROI extraction module are limited by the choice of good features in the pedestrian validation step. One such feature is the curvelet feature, which has been used extensively with optical images but has no reported results for thermal images. This has been used to arrive at a pedestrian detector with a reduced false positive rate. This work is the first attempt to scrutinize the utility of curvelets for characterizing pedestrians in thermal images. An attempt has also been made to improve the speed of curvelet transform computation. The classification task is realized through the use of the well known methodology of Support Vector Machines (SVMs). The proposed method is substantiated with qualified evaluation methodologies that permit us to carry out probing and informative comparisons across state-of-the-art features, including deep learning methods, with six

  18. Evaluation of an automated high-volume extraction method for viral nucleic acids in comparison to a manual procedure with preceding enrichment.

    PubMed

    Hourfar, M K; Schmidt, M; Seifried, E; Roth, W K

    2005-08-01

    Nucleic acid extraction still harbours the potential for improvements in automation and sensitivity of nucleic acid amplification technology (NAT) testing. This study evaluates the feasibility of a novel automated high-volume extraction protocol for NAT minipool testing in a blood bank setting. The chemagic Viral DNA/RNA Kit special for automated purification of viral nucleic acids from 9.6 ml of plasma by using the chemagic Magnetic Separation Module I was investigated. Analytical sensitivity for hepatitis C virus (HCV), human immunodeficiency virus-1 (HIV-1), hepatitis B virus (HBV), hepatitis A virus (HAV) and parvovirus B19 (B19) was compared to our present manual procedure that involves virus enrichment by centrifugation. Chemagic technology allows automation of the viral DNA/RNA extraction process. Viral nucleic acids were bound directly to magnetic beads from 9.6-ml minipools. By combining the automated magnetic beads-based extraction technology with our in-house TaqMan polymerase chain reaction (PCR) assays, 95% detection limits were 280 IU/ml for HCV, 4955 IU/ml for HIV-1, 249 IU/ml for HBV, 462 IU/ml for HAV and 460 IU/ml for B19, calculated for an individual donation in a pool of 96 donors. The detection limits of our present method were 460 IU/ml for HCV, 879 IU/ml for HIV-1, 90 IU/ml for HBV, 203 IU/ml for HAV and 314 IU/ml for B19. The 95% detection limits obtained by using the chemagic method were within the regulatory requirements for blood donor screening. The sensitivities detected for HCV, HBV, HAV and B19 were found to be in a range similar to that of the manual purification method. Sensitivity for HIV-1, however, was found to be inferior for the chemagic method in this study.

  19. Concept recognition for extracting protein interaction relations from biomedical text

    PubMed Central

    Baumgartner, William A; Lu, Zhiyong; Johnson, Helen L; Caporaso, J Gregory; Paquette, Jesse; Lindemann, Anna; White, Elizabeth K; Medvedeva, Olga; Cohen, K Bretonnel; Hunter, Lawrence

    2008-01-01

    Background: Reliable information extraction applications have been a long sought goal of the biomedical text mining community, a goal that, if reached, would provide valuable tools to benchside biologists in their increasingly difficult task of assimilating the knowledge contained in the biomedical literature. We present an integrated approach to concept recognition in biomedical text. Concept recognition provides key information that has been largely missing from previous biomedical information extraction efforts, namely direct links to well-defined knowledge resources that explicitly cement the concept's semantics. The BioCreative II tasks discussed in this special issue have provided a unique opportunity to demonstrate the effectiveness of concept recognition in the field of biomedical language processing. Results: Through the modular construction of a protein interaction relation extraction system, we present several use cases of concept recognition in biomedical text, and relate these use cases to potential uses by the benchside biologist. Conclusion: Current information extraction technologies are approaching performance standards at which concept recognition can begin to deliver high quality data to the benchside biologist. Our system is available as part of the BioCreative Meta-Server project and on the internet. PMID:18834500

  20. Towards a Relation Extraction Framework for Cyber-Security Concepts

    SciTech Connect

    Jones, Corinne L; Bridges, Robert A; Huffer, Kelly M; Goodall, John R

    2015-01-01

    In order to assist security analysts in obtaining information pertaining to their network, such as novel vulnerabilities, exploits, or patches, information retrieval methods tailored to the security domain are needed. As labeled text data is scarce and expensive, we follow developments in semi-supervised NLP and implement a bootstrapping algorithm for extracting security entities and their relationships from text. The algorithm requires little input data, specifically, a few relations or patterns (heuristics for identifying relations), and incorporates an active learning component which queries the user on the most important decisions to prevent drift away from the desired relations. Preliminary testing on a small corpus shows promising results, obtaining a precision of 0.82.
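
    A minimal sketch of the bootstrapping idea, without the active-learning component, on a toy corpus (the seed pair, the pattern-window length, and the entity regex are illustrative):

    ```python
    import re

    def learn_contexts(corpus, seed_pairs):
        """Collect the literal text between each seed pair's mentions."""
        patterns = set()
        for a, b in seed_pairs:
            for sent in corpus:
                m = re.search(re.escape(a) + r"(.{1,40}?)" + re.escape(b), sent)
                if m:
                    patterns.add(m.group(1))
        return patterns

    def extract_pairs(corpus, patterns, entity_rx=r"(CVE-\d{4}-\d+|\w+)"):
        """Apply the learned middle-contexts to harvest new entity pairs."""
        found = set()
        for p in patterns:
            for sent in corpus:
                for m in re.finditer(entity_rx + re.escape(p) + entity_rx, sent):
                    found.add((m.group(1), m.group(2)))
        return found

    corpus = ["CVE-2015-0001 affects Windows", "CVE-2015-0002 affects Firefox"]
    patterns = learn_contexts(corpus, [("CVE-2015-0001", "Windows")])  # " affects "
    print(extract_pairs(corpus, patterns))  # bootstraps the second pair as well
    ```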

  1. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, Brian D.; Brothers, Laura L.; Barnhardt, Walter A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m³ of continental shelf erosion, but few numerical analyses of pockmark morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km² of bathymetry collected in the Belfast Bay, Maine, USA, pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide as well as similar concave features, such as impact craters, dolines, or salt pools.
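
    The abstract does not name the spatial statistic used; the Clark-Evans nearest-neighbour ratio is one standard test for non-random clustering and serves here as an illustrative stand-in (the pockmark coordinates are synthetic):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def clark_evans(points, area):
        """Clark-Evans ratio R: observed mean nearest-neighbour distance over
        the value expected under complete spatial randomness. R < 1 suggests
        clustering; R > 1 suggests regular spacing."""
        pts = np.asarray(points, dtype=float)
        dists, _ = cKDTree(pts).query(pts, k=2)  # k=2: nearest other point
        observed = dists[:, 1].mean()
        expected = 0.5 / np.sqrt(len(pts) / area)
        return observed / expected

    rng = np.random.default_rng(1)
    centres = rng.uniform(0, 5000, size=(40, 2))                  # chain centres (m)
    pockmarks = np.repeat(centres, 10, axis=0) + rng.normal(0, 60, size=(400, 2))
    print(f"R = {clark_evans(pockmarks, area=5000 * 5000):.2f}")  # well below 1
    ```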

  2. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires knowledge of the scanning parameters and patients’ information included in a DICOM file, as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROI) without explicit access to the radiology information system. The user-friendly application was developed in the MATLAB environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and by assigning a specific value to every ROI. The result is stored in DICOM format for data and trend analysis. The developed GUI is easy and fast to use, and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate remote access to PACS server functionality.
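
    The authors' tool is a MATLAB GUI; a minimal command-line analogue of the metadata-extraction step, using the pydicom library and CSV output instead of Excel (the tag list and paths are illustrative):

    ```python
    import csv
    from pathlib import Path
    import pydicom  # stand-in for the paper's MATLAB tooling

    TAGS = ["PatientID", "StudyDate", "Modality", "KVP", "Exposure",
            "SliceThickness", "PixelSpacing"]  # illustrative subset of headers

    def dump_metadata(dicom_dir, out_csv):
        """Write one row of selected DICOM header fields per slice to a CSV."""
        with open(out_csv, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["file"] + TAGS)
            for path in sorted(Path(dicom_dir).glob("*.dcm")):
                ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only
                writer.writerow([path.name] + [getattr(ds, t, "") for t in TAGS])

    dump_metadata("ct_series/", "metadata.csv")
    ```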

  3. Automated boundary extraction of the spinal canal in MRI based on dynamic programming.

    PubMed

    Koh, Jaehan; Chaudhary, Vipin; Dhillon, Gurmeet

    2012-01-01

    The spinal cord is the only communication link between the brain and the body. Abnormalities in it can lead to severe pain and sometimes to paralysis. Due to the growing gap between the number of available radiologists and the number of radiologists required, the need for computer-aided diagnosis and characterization is increasing. To help close this gap, we have developed a computer-aided diagnosis and characterization framework for the lumbar spine that includes the spinal cord, vertebrae, and intervertebral discs. In this paper, we propose two spinal cord boundary extraction methods that fit into our framework, based on dynamic programming in lumbar spine MRI. Our method incorporates the intensity of the image and the gradient of the image into a dynamic programming scheme and works in a fully automatic fashion. The boundaries generated by our method are compared against reference boundaries in terms of the Fréchet distance, which is known to be a metric for shape analysis. The experimental results from 65 clinical datasets show that our method finds the spinal canal boundary correctly, achieving a mean Fréchet distance of 13.5 pixels. Because the extracted boundary falls within the spinal cord for almost all datasets, it can be used as a landmark when marking background regions and finding regions of interest.
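
    A minimal sketch of the dynamic-programming idea: find the cheapest top-to-bottom path through a per-pixel cost image. The abstract only loosely describes how intensity and gradient are combined into the cost, so the cost construction below is left as an assumed comment.

    ```python
    import numpy as np

    def min_cost_boundary(cost):
        """Dynamic programming: cheapest top-to-bottom path ('seam') through a
        per-pixel cost image, moving at most one column left/right per row."""
        h, w = cost.shape
        acc = cost.astype(float).copy()
        for r in range(1, h):
            left = np.r_[np.inf, acc[r - 1, :-1]]    # predecessor one column left
            right = np.r_[acc[r - 1, 1:], np.inf]    # predecessor one column right
            acc[r] += np.minimum(np.minimum(left, acc[r - 1]), right)
        cols = [int(acc[-1].argmin())]               # backtrack from cheapest end
        for r in range(h - 2, -1, -1):
            c = cols[-1]
            lo = max(c - 1, 0)
            cols.append(lo + int(acc[r, lo:min(c + 2, w)].argmin()))
        return cols[::-1]                            # boundary column per row

    # Cost could combine intensity and gradient, e.g. cost = -(a*|grad| + b*img),
    # so strong-edge pixels are cheap (weighting assumed, not the paper's).
    demo = np.random.default_rng(2).random((8, 6))
    print(min_cost_boundary(demo))
    ```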

  4. Automated bare earth extraction technique for complex topography in light detection and ranging surveys

    NASA Astrophysics Data System (ADS)

    Stevenson, Terry H.; Magruder, Lori A.; Neuenschwander, Amy L.; Bradford, Brian

    2013-01-01

    Bare earth extraction is an important component of light detection and ranging (LiDAR) data analysis in terms of terrain classification. The challenge of providing accurate digital surface models is compounded when there is diverse topography within the data set or complex combinations of vegetation and built structures. Few existing algorithms can handle substantial terrain diversity without significant editing or user interaction. This effort presents a newly developed methodology that provides a flexible, adaptable tool capable of integrating multiple LiDAR data attributes for an accurate terrain assessment. The terrain extraction and segmentation (TEXAS) approach uses a third-order spatial derivative for each point in the digital surface model to determine the curvature of the terrain, rather than relying solely on the slope. Utilizing curvature has been shown to preserve ground points successfully in areas of steep terrain, as such points typically exhibit low curvature. Within the framework of TEXAS, the contiguous sets of points with low curvatures are grouped into regions using an edge-based segmentation method. The process does not require any user inputs and is completely data driven. This technique was tested on a variety of existing LiDAR surveys, each with varying levels of topographic complexity.

  5. Underway analysis of nanomolar dissolved reactive phosphorus in oligotrophic seawater with automated on-line solid phase extraction and spectrophotometric system.

    PubMed

    Ma, Jian; Yuan, Yuan; Yuan, Dongxing

    2017-01-15

    The automated in-field determination of trace phosphate (and other nutrients) is highly valuable for studying nutrient dynamics and cycling in oligotrophic oceans. Here, we report an automated portable analyzer for week-long underway analysis of nanomolar dissolved reactive phosphorus (DRP) in seawater. The method is based on classic phosphomolybdenum blue (PMB) chemistry combined with on-line solid phase extraction (SPE) and flow analysis. Under optimized conditions, the PMB formed from the sample is automatically concentrated on a hydrophilic-lipophilic-balanced (HLB) copolymer SPE sorbent. The PMB compound can be eluted with NaOH solution and measured in a flow-through detection system. All the components of the analyzer are computer controlled using laboratory-programmed software based on LabVIEW. The system exhibited high sensitivity (detection limit of 1.0 nmol L⁻¹) and reproducibility (relative standard deviation of 5.4%, n = 180), an insignificant carry-over effect, and no interferences from salinity, silicate, arsenate or other P-containing compounds (at environmental concentrations). The analytical time was 4-7 min/sample, depending on the DRP concentration. The accuracy of the method was validated through the analysis of reference materials and comparison with two other published methods (slope of 0.986 ± 0.027, intercept of 0.39 ± 0.64 nmol L⁻¹, R² of 0.9608, range of 0-80 nmol L⁻¹, n = 57). The system has been successfully applied for a two-week continuous underway determination of DRP in surface seawater during a cruise in the South China Sea. Based on the laboratory and field evaluations, it is concluded that this system is suitable for accurate and high resolution underway DRP measurements in oligotrophic areas. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. A novel approach for automated shoreline extraction from remote sensing images using low level programming

    NASA Astrophysics Data System (ADS)

    Rigos, Anastasios; Vaiopoulos, Aristidis; Skianis, George; Tsekouras, George; Drakopoulos, Panos

    2015-04-01

    Tracking coastline changes is a crucial task in the context of coastal management, and synoptic remotely sensed data have become an essential tool for this purpose. In this work, within the framework of the BeachTour project, we introduce a new method for shoreline extraction from high resolution satellite images. It was applied to two images taken by the WorldView-2 satellite (7 channels, 2 m resolution) during July 2011 and August 2014. The location is the well-known tourist destination of Laganas beach, spanning 5 km along the southern part of Zakynthos Island, Greece. The atmospheric correction was performed with the ENVI FLAASH procedure and the final images were validated against hyperspectral field measurements. Using three channels (CH2 = blue, CH3 = green and CH7 = near infrared), the Modified Redness Index image was calculated according to: MRI = (CH7)² / [CH2 × (CH3)³]. MRI has the property that its value keeps increasing as the water becomes shallower, followed by an abrupt reduction at the location of the wet sand up to the point where the dry shore face begins; after that it remains low-valued throughout the beach zone. Images based on this index were used for the shoreline extraction process, which included the following steps: a) On the MRI-based image, only an area near the shoreline was kept (this process is known as image masking). b) The Canny edge detector was applied to the masked image. c) Of all edges discovered in step (b), only the biggest was kept. d) If the line obtained in step (c) was unacceptable, i.e. it did not define the shoreline or defined only part of it, then either more than one edge from step (c) was kept, or the MRI pixel values were bounded to a particular interval [B_low, B_high] and only pixels belonging to this interval were kept; steps (a)-(d) were then repeated. Using this method, which is still under development, we were able to extract the shoreline position and reveal its changes during the 3-year period.
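
    Steps (a)-(c) can be sketched with NumPy and scikit-image; the log scaling of the index before edge detection and the synthetic bands are illustrative choices, not from the paper:

    ```python
    import numpy as np
    from skimage import feature, measure  # scikit-image

    def shoreline_from_bands(ch2, ch3, ch7, mask):
        """MRI = CH7^2 / (CH2 * CH3^3), masked to the near-shore area (step a),
        Canny edge detection (step b), keep the largest connected edge (step c)."""
        eps = 1e-12                              # guard against divide-by-zero
        mri = ch7**2 / (ch2 * ch3**3 + eps)
        mri = np.where(mask, mri, 0.0)
        edges = feature.canny(np.log1p(mri))     # log scaling tames the index range
        labels = measure.label(edges)
        if labels.max() == 0:
            return None
        sizes = np.bincount(labels.ravel())[1:]  # component sizes, skip background
        return labels == (1 + int(sizes.argmax()))

    # Synthetic bands: a step in CH7 stands in for the water/land transition.
    ch7 = np.full((64, 64), 0.2); ch7[:, 32:] = 0.9
    ch2 = np.full((64, 64), 0.5); ch3 = np.full((64, 64), 0.5)
    edge = shoreline_from_bands(ch2, ch3, ch7, np.ones((64, 64), bool))
    print("edge pixels:", None if edge is None else int(edge.sum()))
    ```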

  7. Automation or De-automation

    NASA Astrophysics Data System (ADS)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  8. Automated extraction of absorption features from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geophysical and Environmental Research Imaging Spectrometer (GERIS) data

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Calvin, Wendy M.; Seznec, Olivier

    1988-01-01

    Automated techniques were developed for the extraction and characterization of absorption features from reflectance spectra. The absorption feature extraction algorithms were successfully tested on laboratory, field, and aircraft imaging spectrometer data. A suite of laboratory spectra of the most common minerals was analyzed and absorption band characteristics tabulated. A prototype expert system was designed, implemented, and successfully tested to allow identification of minerals based on the extracted absorption band characteristics. AVIRIS spectra for a site in the northern Grapevine Mountains, Nevada, have been characterized and the minerals sericite (fine grained muscovite) and dolomite were identified. The minerals kaolinite, alunite, and buddingtonite were identified and mapped for a site at Cuprite, Nevada, using the feature extraction algorithms on the new Geophysical and Environmental Research 64 channel imaging spectrometer (GERIS) data. The feature extraction routines (written in FORTRAN and C) were interfaced to the expert system (written in PROLOG) to allow both efficient processing of numerical data and logical spectrum analysis.
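
    A standard way to extract such absorption-band characteristics is continuum removal against the spectrum's upper convex hull followed by band-depth measurement; the sketch below (with a synthetic absorption dip) illustrates that general technique, not necessarily the authors' exact algorithm:

    ```python
    import numpy as np

    def upper_hull_continuum(wl, refl):
        """Continuum as the upper convex hull of the spectrum (monotone chain)."""
        hull = []
        for x, y in zip(wl, refl):
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                # Pop while the middle hull point lies on or below the new chord.
                if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0:
                    hull.pop()
                else:
                    break
            hull.append((x, y))
        hx, hy = zip(*hull)
        return np.interp(wl, hx, hy)

    def absorption_feature(wl, refl):
        """Continuum-removed spectrum plus the deepest band's centre and depth."""
        cr = refl / upper_hull_continuum(wl, refl)
        i = int(cr.argmin())
        return cr, wl[i], 1.0 - cr[i]

    wl = np.linspace(2.0, 2.5, 200)                       # wavelength in microns
    refl = 0.8 - 0.2 * np.exp(-((wl - 2.2) / 0.02) ** 2)  # synthetic absorption dip
    _, centre, depth = absorption_feature(wl, refl)
    print(f"band at {centre:.3f} um, depth {depth:.2f}")
    ```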

  9. Automated and portable solid phase extraction platform for immuno-detection of 17β-estradiol in water.

    PubMed

    Heub, Sarah; Tscharner, Noe; Monnier, Véronique; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2015-02-13

    A fully automated and portable system for solid phase extraction (SPE) has been developed for the analysis of the natural hormone 17β-estradiol (E2) in environmental water by enzyme-linked immunosorbent assay (ELISA). The system has been validated with de-ionized and artificial sea water as model samples and allowed for pre-concentration of E2 at levels of 1, 10 and 100 ng/L with only 100 ml of sample. Recoveries ranged from 24±3% to 107±6% depending on the concentration and sample matrix. The method successfully allowed us to determine the E2 concentration of two seawater samples: a concentration of 15.1±0.3 ng/L was measured in a sample obtained from a food production process, and 8.8±0.7 ng/L in a sample from the Adriatic Sea. The system would be suitable for continuous monitoring of water quality, as it is user-friendly and the method is reproducible and fully compatible with the analysis of water samples by simple immunoassays and other detection methods such as biosensors.

  10. Automated extraction and quantitation of oncogenic HPV genotypes from cervical samples by a real-time PCR-based system.

    PubMed

    Broccolo, Francesco; Cocuzza, Clementina E

    2008-03-01

    Accurate laboratory assays for the diagnosis of persistent oncogenic HPV infection are increasingly being recognized as essential for the clinical management of women with cervical precancerous lesions. HPV viral load has been suggested to be a surrogate marker of persistent infection. Four independent real-time quantitative TaqMan PCR assays were developed for HPV-16, -31, -18 and/or -45, and -33 and/or -52, -58, -67. The assays had a wide dynamic range of detection and a high degree of accuracy, repeatability and reproducibility. In order to minimize material and hands-on time, automated nucleic acid extraction was performed using a 96-well plate format integrated into a robotic liquid handler workstation. The performance of the TaqMan assays for HPV identification was assessed by comparing results with those obtained by means of PCR using consensus primers (GP5+/GP6+) and sequencing (296 samples) and INNO-LiPA analysis (31 samples). Good agreement was generally found between results obtained by the real-time PCR assays and the GP(+)-PCR system (kappa statistic = 0.91). In conclusion, this study describes four newly developed real-time PCR assays that provide a reliable and high-throughput method for detecting not only HPV DNA but also HPV activity of the most common oncogenic HPV types in cervical specimens.

  11. Dried Blood Spot Proteomics: Surface Extraction of Endogenous Proteins Coupled with Automated Sample Preparation and Mass Spectrometry Analysis

    NASA Astrophysics Data System (ADS)

    Martin, Nicholas J.; Bunch, Josephine; Cooper, Helen J.

    2013-08-01

    Dried blood spots offer many advantages as a sample format including ease and safety of transport and handling. To date, the majority of mass spectrometry analyses of dried blood spots have focused on small molecules or hemoglobin. However, dried blood spots are a potentially rich source of protein biomarkers, an area that has been overlooked. To address this issue, we have applied an untargeted bottom-up proteomics approach to the analysis of dried blood spots. We present an automated and integrated method for extraction of endogenous proteins from the surface of dried blood spots and sample preparation via trypsin digestion by use of the Advion Biosciences Triversa Nanomate robotic platform. Liquid chromatography tandem mass spectrometry of the resulting digests enabled identification of 120 proteins from a single dried blood spot. The proteins identified cross a concentration range of four orders of magnitude. The method is evaluated and the results discussed in terms of the proteins identified and their potential use as biomarkers in screening programs.

  12. Exposing Exposure: Automated Anatomy-specific CT Radiation Exposure Extraction for Quality Assurance and Radiation Monitoring

    PubMed Central

    Warden, Graham I.; Farkas, Cameron E.; Ikuta, Ichiro; Prevedello, Luciano M.; Andriole, Katherine P.; Khorasani, Ramin

    2012-01-01

    Purpose: To develop and validate an informatics toolkit that extracts anatomy-specific computed tomography (CT) radiation exposure metrics (volume CT dose index and dose-length product) from existing digital image archives through optical character recognition of CT dose report screen captures (dose screens) combined with Digital Imaging and Communications in Medicine attributes. Materials and Methods: This institutional review board–approved HIPAA-compliant study was performed in a large urban health care delivery network. Data were drawn from a random sample of CT encounters that occurred between 2000 and 2010; images from these encounters were contained within the enterprise image archive, which encompassed images obtained at an adult academic tertiary referral hospital and its affiliated sites, including a cancer center, a community hospital, and outpatient imaging centers, as well as images imported from other facilities. Software was validated by using 150 randomly selected encounters for each major CT scanner manufacturer, with outcome measures of dose screen retrieval rate (proportion of correctly located dose screens) and anatomic assignment precision (proportion of extracted exposure data with correctly assigned anatomic region, such as head, chest, or abdomen and pelvis). The 95% binomial confidence intervals (CIs) were calculated for discrete proportions, and CIs were derived from the standard error of the mean for continuous variables. After validation, the informatics toolkit was used to populate an exposure repository from a cohort of 54 549 CT encounters, of which 29 948 had available dose screens. Results: Validation yielded a dose screen retrieval rate of 99% (597 of 605 CT encounters; 95% CI: 98%, 100%) and an anatomic assignment precision of 94% (summed DLP fraction correct, 563 of 600 CT encounters; 95% CI: 92%, 96%). Patient safety applications of the resulting data repository include benchmarking between institutions, CT protocol quality

  13. Kernel-Based Learning for Domain-Specific Relation Extraction

    NASA Astrophysics Data System (ADS)

    Basili, Roberto; Giannone, Cristina; Del Vescovo, Chiara; Moschitti, Alessandro; Naggar, Paolo

    In a specific process of business intelligence, i.e., investigation of organized crime, empirical language processing technologies can play a crucial role. The analysis of transcriptions of investigative activities, such as police interrogatories, for the recognition and storage of complex relations among people and locations is a very difficult and time-consuming task, ultimately based on pools of experts. We discuss here an inductive relation extraction platform that opens the way to much cheaper and more consistent workflows. The presented empirical investigation shows that accurate results, comparable to those of the expert teams, can be achieved, and that parametrization allows the system behavior to be fine-tuned to fit domain-specific requirements.

  14. Dispersive liquid-liquid microextraction combined with semi-automated in-syringe back extraction as a new approach for the sample preparation of ionizable organic compounds prior to liquid chromatography.

    PubMed

    Melwanki, Mahaveer B; Fuh, Ming-Ren

    2008-07-11

    Dispersive liquid-liquid microextraction (DLLME) followed by a newly designed semi-automated in-syringe back extraction technique has been developed for the extraction of polar organic compounds prior to liquid chromatography (LC) measurement. The method is based on the formation of tiny droplets of the extractant in the sample solution, using a water-immiscible organic solvent (extractant) dissolved in a water-miscible organic dispersive solvent. The analytes were extracted from the aqueous sample into the dispersed organic droplets. The extracting organic phase was separated by centrifugation and the sedimented phase was withdrawn into a syringe. In-syringe back extraction was then used to transfer the analytes into an aqueous solution prior to LC analysis. Clenbuterol (CB), a basic organic compound used as a model, was extracted from a basified aqueous sample using 25 µL tetrachloroethylene (TCE, extraction solvent) dissolved in 500 µL acetone (as a dispersive solvent). After separation of the organic extracting phase by centrifugation, CB enriched in the TCE phase was back extracted into 10 µL of 1% aqueous formic acid (FA) within the syringe. Back extraction was facilitated by repeatedly moving the plunger back and forth within the barrel of the syringe, assisted by a syringe pump. Due to the plunger movement, a thin organic film forms on the inner wall of the syringe and comes into contact with the acidic aqueous phase; there, CB, a basic analyte, is protonated and back-extracted into the FA. Various parameters affecting the extraction efficiency, viz., choice of extraction and dispersive solvent, salt effect, speed of the syringe pump, back extraction time period, and effect of the concentration of base and acid, were evaluated. Under optimum conditions, precision, linearity (correlation coefficient, r² = 0.9966 over the concentration range of 10-1000 ng mL⁻¹ CB), detection limit (4.9 ng mL⁻¹), enrichment factor (175), relative

  15. Hyphenating Centrifugal Partition Chromatography with Nuclear Magnetic Resonance through Automated Solid Phase Extraction.

    PubMed

    Bisson, Jonathan; Brunel, Marion; Badoc, Alain; Da Costa, Grégory; Richard, Tristan; Mérillon, Jean-Michel; Waffo-Téguo, Pierre

    2016-10-18

    Centrifugal partition chromatography (CPC) and other countercurrent separation apparatus provide chemists with efficient ways to work with complex matrixes, especially in the domain of natural products. However, despite the great advances provided by these techniques, more efficient ways of analyzing the output flow would bring further enhancement. This study describes a hyphenated approach that couples NMR with CPC through an indirect hybrid coupling, made possible by using a solid phase extraction (SPE) apparatus intended for high-pressure liquid chromatography (HPLC)-NMR hyphenation. Some hardware changes were needed to reconcile the incompatible flow rates, and a reverse-engineering approach yielded the specific software required to control the apparatus. 1D ¹H NMR and ¹H-¹H correlation spectroscopy (COSY) spectra were acquired in reasonable time without the need for any solvent-suppression method, thanks to the SPE nitrogen drying step. Reducing the consumption of expensive deuterated solvents from several hundred milliliters to the milliliter range is the major improvement of this approach compared to previously published ones.

  16. Automated quantification of distributed landslide movement using circular tree trunks extracted from terrestrial laser scan data

    NASA Astrophysics Data System (ADS)

    Conner, Jeremy C.; Olsen, Michael J.

    2014-06-01

    This manuscript presents a novel algorithm to automatically detect landslide movement in a forested area using displacements of tree trunks distributed across the landslide, surveyed repeatedly with terrestrial laser scanning (TLS). Common landslide monitoring techniques include inclinometers, the global positioning system (GPS), and interferometric synthetic aperture radar (InSAR). While these techniques provide valuable data for monitoring landslides, they can be difficult to apply with the spatial or temporal resolution needed to understand complex landslides, specifically in forested environments. Comparison of the center coordinates (determined via least-squares fit of the TLS data) of a cross section of each tree trunk between consecutive surveys enables quantification of landslide movement rates, which can be used to analyze patterns of landslide displacement. The capabilities of this new methodology were tested through a case study analyzing the Johnson Creek Landslide, a complex, quick-moving coastal landslide that has proven difficult to monitor using other techniques. A parametric analysis of fitting thresholds was also conducted to determine the reliability of the calculated tree trunk displacements and the number of extracted features. The optimal parameters for selecting trees for movement analysis were found to be less than 1.5 cm for the RMS residuals of the circle fit and less than 1.0 cm for the difference in the calculated tree radii between epochs.
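
    The abstract specifies a least-squares circle fit gated by an RMS-residual threshold; the algebraic (Kasa) fit below is one common way to implement it (the simulated trunk cross-section is illustrative):

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit: solve the linear system
        x^2 + y^2 = 2ax + 2by + c for centre (a, b); then r = sqrt(c + a^2 + b^2)."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        r = np.sqrt(c + a**2 + b**2)
        rms = np.sqrt(np.mean((np.hypot(x - a, y - b) - r) ** 2))
        return (a, b), r, rms  # the RMS residual gates which trees are usable

    # Hypothetical TLS cross-section of a trunk (centre ~(3, 7), radius ~0.25 m).
    rng = np.random.default_rng(4)
    t = rng.uniform(0, 2 * np.pi, 120)
    x = 3 + 0.25 * np.cos(t) + rng.normal(0, 0.005, 120)
    y = 7 + 0.25 * np.sin(t) + rng.normal(0, 0.005, 120)
    centre, radius, rms = fit_circle(x, y)
    print(centre, radius, rms)  # displacement = centre shift between epochs
    ```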

  17. High-throughput pharmacokinetics screen of VLA-4 antagonists by LC/MS/MS coupled with automated solid-phase extraction sample preparation.

    PubMed

    Tong, Xinchun S; Wang, Junying; Zheng, Song; Pivnichny, James V

    2004-06-29

    Automation of plasma sample preparation for pharmacokinetic studies on VLA-4 antagonists has been achieved by using 96-well format solid-phase extraction operated by a Beckman Coulter Biomek 2000 liquid handling system. The Biomek 2000 robot is used to perform fully automated plasma sample preparation tasks that include serial dilution of standard solutions, pipetting of plasma samples, addition of standard and internal standard solutions, and solid-phase extraction (SPE) on Waters OASIS 96-well plates. This automated sample preparation process takes less than 2 h for a typical pharmacokinetic study, including 51 samples, 24 standards, 9 quality controls, and 3-6 dose checks, with minimal manual intervention. Extensive validation was performed to ensure the accuracy and reliability of this method. A two-stage vacuum pressure controller has been incorporated in the program to improve SPE efficiency. This automated SPE sample preparation approach, combined with the high sensitivity and selectivity of liquid chromatography-tandem mass spectrometry (LC/MS/MS), has been successfully applied to both individual and cassette dosing for pharmacokinetic screening of a large number of VLA-4 antagonists, with a limit of quantitation in the range of 1-5 ng/ml. Consequently, a significant throughput increase has been achieved, along with the elimination of tedious labor and its associated tendency to produce errors. Copyright 2004 Elsevier B.V.

  18. Development of an automated method for Folin-Ciocalteu total phenolic assay in artichoke extracts.

    PubMed

    Yoo, Kil Sun; Lee, Eun Jin; Leskovar, Daniel; Patil, Bhimanagouda S

    2012-12-01

    We developed a fully automatic, consistent, and fast system to run the Folin-Ciocalteu (F-C) total phenolic assay in artichoke extract samples. The system uses 2 high performance liquid chromatography (HPLC) pumps, an autosampler, a column heater, a UV/Vis detector, and a data collection system. To test the system, one pump delivered 10-fold diluted F-C reagent solution at a rate of 0.7 mL/min, and the other delivered 0.4 g/mL sodium carbonate at a rate of 2.1 mL/min. The autosampler injected 10 μL per 1.2 min, which was mixed with the F-C reagent and heated to 65 °C while it passed through the column heater. The heated reactant was mixed with sodium carbonate and the color intensity was measured by the detector at 600 nm. The data collection system recorded the color intensity, and the peak area of each sample was calculated as the total phenolic content, expressed in μg/mL as either chlorogenic acid or gallic acid. This new method had superb repeatability (0.7% CV) and a high correlation with both the manual method (r² = 0.93) and the HPLC method (r² = 0.78). Ascorbic acid and quercetin showed variable antioxidant activity, but sugars did not. This method can be efficiently applied to research that needs to test large numbers of samples for antioxidant capacity with speed and accuracy.

  19. Validation of the TaqMan Influenza A Detection Kit and a rapid automated total nucleic acid extraction method to detect influenza A virus in nasopharyngeal specimens.

    PubMed

    Bolotin, Shelly; De Lima, Cedric; Choi, Kam-Wing; Lombos, Ernesto; Burton, Laura; Mazzulli, Tony; Drews, Steven J

    2009-01-01

    This study describes the validation of the TaqMan Influenza A Detection Kit v2.0 combined with an automated nucleic acid extraction method. The limit of detection of this assay was determined by probit regression (95% confidence interval) to be 2 influenza A/PR/8/34 (H1N1) virus particles per microlitre. One hundred and eleven specimens previously tested using the Seeplex RV assay and viral culture methods were tested using the TaqMan Influenza A Detection Kit. Compared to the aggregate gold standard, the sensitivity and specificity of the TaqMan Influenza A Detection Kit were 100% (35/35) and 97% (74/76), respectively. Because of its accuracy, quick turnaround time and lyophilized bead format, the TaqMan Influenza A Detection Kit, combined with the NucliSens easyMAG automated extraction method, constitutes a reliable protocol for influenza A diagnosis.

  20. Automated liquid-liquid extraction workstation for library synthesis and its use in the parallel and chromatography-free synthesis of 2-alkyl-3-alkyl-4-(3H)-quinazolinones.

    PubMed

    Carpintero, Mercedes; Cifuentes, Marta; Ferritto, Rafael; Haro, Rubén; Toledo, Miguel A

    2007-01-01

    An automated liquid-liquid extraction workstation has been developed. This module processes up to 96 samples in an automated and parallel mode avoiding the time-consuming and intensive sample manipulation during the workup process. To validate the workstation, a highly automated and chromatography-free synthesis of differentially substituted quinazolin-4(3H)-ones with two diversity points has been carried out using isatoic anhydride as starting material.

  1. An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture

    DTIC Science & Technology

    2015-11-04

    locating water surface and river banks in high resolution aerial imagery without recourse to any multispectral information, by segmenting based on...differences in infrared reflectivity between different surfaces , such as water and land (Kelley et al., 1998). Many of these techniques, unfortunately...between the relatively smooth surface of the river water and the rougher surface of the vegetated land or built environment bordering it and then

  2. Automated extraction of pressure ridges from SAR images of sea ice - Comparison with surface truth

    NASA Technical Reports Server (NTRS)

    Vesecky, J. F.; Smith, M. P.; Samadani, R.; Daida, J. M.; Comiso, J. C.

    1991-01-01

    The authors estimate the characteristics of ridges and leads in sea ice from SAR (synthetic aperture radar) images. Such estimates are based on the hypothesis that bright filamentary features in SAR sea ice images correspond with pressure ridges. A data set collected in the Greenland Sea in 1987 allows this hypothesis to be evaluated for X-band SAR images. A preliminary analysis of data collected from SAR images and ice elevation (from a laser altimeter) is presented. It is found that SAR image brightness and ice elevation are clearly related. However, the correlation, using the data and techniques applied, is not strong.

  4. AUTOMATED ANALYSIS OF AQUEOUS SAMPLES CONTAINING PESTICIDES, ACIDIC/BASIC/NEUTRAL SEMIVOLATILES AND VOLATILE ORGANIC COMPOUNDS BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GC/MS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...

  6. Determination of dialkyl phosphate metabolites of organophosphorus pesticides in human urine by automated solid-phase extraction, derivatization, and gas chromatography-mass spectrometry.

    PubMed

    Hemakanthi De Alwis, G K; Needham, Larry L; Barr, Dana B

    2008-01-01

    Organophosphorus (OP) pesticides are highly toxic but are nevertheless commonly used worldwide. Their urinary dialkylphosphate (DAP) metabolites are widely used for exposure assessment of OP pesticides in humans. We previously developed an analytical method to measure urinary DAPs using solid-phase extraction (SPE), derivatization, and gas chromatography-tandem mass spectrometry (GC-MS-MS) with quantification by the isotope-dilution technique. We now present a more cost-effective yet highly accurate method that can be easily adapted by many laboratories for routine OP exposure assessment. This method is simple and fast and involves automated SPE of the metabolites followed by derivatization with pentafluorobenzyl bromide and quantification by GC-MS. Dibutyl phosphate (DBP) serves as the internal standard. The detection limits for the six metabolites ranged from 0.1 to 0.15 ng/mL. The relative standard deviation of the analytical procedure was 2-15%, depending on the metabolite. We compared the performance of DBP as an internal standard with that of isotope-labeled compounds and found that DBP gives reliable results for the analytical procedure. We also optimized the reaction parameters of pentafluorobenzylation.
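
    A minimal sketch of internal-standard quantification of the kind described, assuming a linear response-ratio calibration; all peak areas and concentrations below are hypothetical:

```python
import numpy as np

# Hypothetical GC-MS calibration with dibutyl phosphate (DBP) as internal
# standard: response ratio = analyte area / DBP area vs. spiked concentration.
conc      = np.array([0.5, 1, 5, 10, 50])          # ng/mL spiked metabolite
area_met  = np.array([0.9, 1.8, 9.1, 18.5, 92.0])  # metabolite peak areas
area_istd = np.array([10.1, 9.8, 10.0, 10.2, 9.9]) # DBP peak areas (constant spike)

slope, intercept = np.polyfit(conc, area_met / area_istd, 1)

def quantify(sample_area, istd_area):
    """Concentration (ng/mL) from a sample's analyte/ISTD response ratio."""
    return (sample_area / istd_area - intercept) / slope

print(round(quantify(4.6, 10.0), 2))  # about 2.6 ng/mL
```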

  7. Evaluation of three automated nucleic acid extraction systems for identification of respiratory viruses in clinical specimens by multiplex real-time PCR.

    PubMed

    Kim, Yoonjung; Han, Mi-Soon; Kim, Juwon; Kwon, Aerin; Lee, Kyung-A

    2014-01-01

    A total of 84 nasopharyngeal swab specimens were collected from 84 patients. Viral nucleic acid was extracted by three automated extraction systems: QIAcube (Qiagen, Germany), EZ1 Advanced XL (Qiagen), and MICROLAB Nimbus IVD (Hamilton, USA). Fourteen RNA viruses and two DNA viruses were detected using the Anyplex II RV16 Detection kit (Seegene, Republic of Korea). The EZ1 Advanced XL system demonstrated the best analytical sensitivity for all three viral strains, and the nucleic acids it extracted showed higher positive rates for virus detection than the others. The MICROLAB Nimbus IVD system comprises fully automated steps from nucleic acid extraction to PCR setup, which could reduce human error. For nucleic acids recovered from nasopharyngeal swab specimens, the QIAcube system showed the fewest false negative results and the best concordance rate, and it may be more suitable for detecting a variety of viruses including both RNA and DNA strains. Each system showed different sensitivity and specificity for detection of certain viral pathogens and differed in characteristics such as turnaround time and sample capacity. These factors should therefore be considered when new nucleic acid extraction systems are introduced to the laboratory.

  8. Automation of DNA and miRNA co-extraction for miRNA-based identification of human body fluids and tissues.

    PubMed

    Kulstein, Galina; Marienfeld, Ralf; Miltner, Erich; Wiegand, Peter

    2016-10-01

    In recent years, microRNA (miRNA) analysis has come into focus in the field of forensic genetics, yet no standardized and recommendable protocols for co-isolation of miRNA and DNA from forensically relevant samples have been developed so far. Hence, this study evaluated the performance of an automated Maxwell® 16 System-based strategy (Promega) for co-extraction of DNA and miRNA from forensically relevant (blood and saliva) samples compared to (semi-)manual extraction methods. Three procedures were compared on the basis of recovered quantity of DNA and miRNA (as determined by real-time PCR and Bioanalyzer), miRNA profiling (shown by Cq values and extraction efficiency), STR profiles, duration, contamination risk and handling. Overall, the results highlight that the automated co-extraction procedure yielded the highest miRNA and DNA amounts from saliva and blood samples compared to both (semi-)manual protocols. For aged and genuine samples of forensically relevant traces, too, the miRNA and DNA yields were sufficient for subsequent downstream analysis. Furthermore, the strategy allows miRNA extraction to be performed only in cases where it is relevant to obtain additional information about the sample type. Finally, the system enables flexible sample throughput and labor-saving sample processing with reduced risk of cross-contamination.

  9. Automated extraction of decision rules for leptin dynamics--a rough sets approach.

    PubMed

    Brtka, Vladimir; Stokić, Edith; Srdić, Biljana

    2008-08-01

    A significant area in the field of medical informatics is concerned with the learning of medical models from low-level data. The goals of inducing models from data are twofold: analysis of the structure of the models so as to gain new insight into the unknown phenomena, and development of classifiers or outcome predictors for unseen cases. In this paper, we employ an approach based on the indiscernibility relation and rough set theory to study certain questions concerning the design of an if-then rule model from low-level data comprising 36 parameters, one of them leptin. To generate a model that is easy to read, interpret, and inspect, we used the ROSETTA software system. The main goal of this work is to gain new insight into the phenomenon of leptin levels and their interplay with other risk factors in obesity.
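
    A minimal sketch of the indiscernibility relation underlying rough-set rule induction, on a toy decision table; the attributes and values are illustrative only:

```python
from collections import defaultdict

# Toy decision table: each row is (attribute values, decision).
rows = [
    ({"leptin": "high", "bmi": "high", "age": "old"},   "risk"),
    ({"leptin": "high", "bmi": "high", "age": "young"}, "risk"),
    ({"leptin": "low",  "bmi": "high", "age": "old"},   "no_risk"),
]

def indiscernibility(rows, attrs):
    """Partition row indices into classes indiscernible on `attrs`,
    i.e. groups of objects with identical values on those attributes."""
    classes = defaultdict(list)
    for i, (values, _) in enumerate(rows):
        classes[tuple(values[a] for a in attrs)].append(i)
    return list(classes.values())

print(indiscernibility(rows, ["leptin", "bmi"]))  # [[0, 1], [2]]
```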

  10. The ValleyMorph Tool: An automated extraction tool for transverse topographic symmetry (T-) factor and valley width to valley height (Vf-) ratio

    NASA Astrophysics Data System (ADS)

    Daxberger, Heidi; Dalumpines, Ron; Scott, Darren M.; Riller, Ulrich

    2014-09-01

    In tectonically active regions on Earth, shallow-crustal deformation associated with seismic hazards may pose a threat to human life and property. The study of landform development, such as analysis of the valley width to valley height ratio (Vf-ratio) and the Transverse Topographic Symmetry Factor (T-factor), which delineates drainage basin symmetry, can be used as a relative measure of tectonic activity along fault-bound mountain fronts. The fast evolution of digital elevation models (DEMs) provides an ideal basis for remotely sensed tectonomorphic studies of large areas using Geographical Information Systems (GIS). However, manual extraction of the above-mentioned morphologic parameters is tedious and very time-consuming, and basic GIS software suites do not provide the necessary built-in functions. Therefore, we present a newly developed, Python-based, ESRI ArcGIS-compatible tool and stand-alone script, the ValleyMorph Tool. This tool facilitates automated extraction of Vf-ratio and T-factor data for large regions. Using a digital elevation raster and watershed polygon files as input, the tool provides output in the form of several ArcGIS data tables and shapefiles, ideal for further data manipulation and computation. The implementation enables easy adoption within the ArcGIS user community and code conversion to earlier ArcGIS versions, and the tool is easy to use thanks to a simple graphical user interface. The tool was tested for the southern Central Andes using a total of 3366 watersheds.
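
    A minimal sketch of the two indices the tool extracts, using the standard definitions from the geomorphology literature (the abstract itself does not spell out the formulas); the example values are hypothetical:

```python
def vf_ratio(valley_floor_width, left_divide_elev, right_divide_elev, valley_floor_elev):
    """Valley width-to-height ratio: Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc))."""
    return (2.0 * valley_floor_width) / (
        (left_divide_elev - valley_floor_elev)
        + (right_divide_elev - valley_floor_elev)
    )

def t_factor(meander_offset, basin_half_width):
    """Transverse topographic symmetry: T = Da / Dd (0 = symmetric, toward 1 = asymmetric)."""
    return meander_offset / basin_half_width

# Example: a narrow valley incised between 950 m and 980 m divides, floor at 400 m.
print(round(vf_ratio(120.0, 950.0, 980.0, 400.0), 2))  # ~0.21, V-shaped valley
print(t_factor(350.0, 1000.0))                         # 0.35
```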

  11. Submicrometric Magnetic Nanoporous Carbons Derived from Metal-Organic Frameworks Enabling Automated Electromagnet-Assisted Online Solid-Phase Extraction.

    PubMed

    Frizzarin, Rejane M; Palomino Cabello, Carlos; Bauzà, Maria Del Mar; Portugal, Lindomar A; Maya, Fernando; Cerdà, Víctor; Estela, José M; Turnes Palomino, Gemma

    2016-07-19

    We present the first application of submicrometric magnetic nanoporous carbons (μMNPCs) as sorbents for automated solid-phase extraction (SPE). Small zeolitic imidazolate framework-67 crystals are obtained at room temperature and directly carbonized under an inert atmosphere to obtain submicrometric nanoporous carbons containing magnetic cobalt nanoparticles. The μMNPCs have a high contact area, high stability, and their preparation is simple and cost-effective. The prepared μMNPCs are exploited as sorbents in a microcolumn format in a sequential injection analysis (SIA) system with online spectrophotometric detection, which includes a specially designed three-dimensional (3D)-printed holder containing an automatically actuated electromagnet. The combined action of permanent magnets and an automatically actuated electromagnet enabled the movement of the solid bed of particles inside the microcolumn, preventing their aggregation, increasing the versatility of the system, and increasing the preconcentration efficiency. The method was optimized using a full factorial design and Doehlert Matrix. The developed system was applied to the determination of anionic surfactants, exploiting the retention of the ion-pairs formed with Methylene Blue on the μMNPC. Using sodium dodecyl sulfate as a model analyte, quantification was linear from 50 to 1000 μg L⁻¹, the detection limit was equal to 17.5 μg L⁻¹, the coefficient of variation (n = 8; 100 μg L⁻¹) was 2.7%, and the analysis throughput was 13 h⁻¹. The developed approach was applied to the determination of anionic surfactants in water samples (natural water, groundwater, and wastewater), yielding recoveries of 93% to 110% (95% confidence level).

  12. Path duplication using GPS carrier based relative position for automated ground vehicle convoys

    NASA Astrophysics Data System (ADS)

    Travis, William E., III

    A GPS based automated convoy strategy to duplicate the path of a lead vehicle is presented in this dissertation. Laser scanners and cameras are not used; all information available comes from GPS or inertial systems. An algorithm is detailed that uses GPS carrier phase measurements to determine the relative position between two moving ground vehicles. Error analysis shows the accuracy is centimeter level. It is shown that the time to the first solution fix is dependent upon initial relative position accuracy, and that near-instantaneous fixes can be realized if that accuracy is less than 20 centimeters. The relative positioning algorithm is then augmented with inertial measurement units to dead reckon through brief outages. Performance analysis of automotive and tactical grade units shows the twenty centimeter threshold can be maintained for only a few seconds with the automotive grade unit and for 14 seconds with the tactical unit. Next, techniques to determine odometry information in vector form are discussed. Three methods are outlined: dead reckoning of inertial sensors, time differencing GPS carrier measurements to determine change in platform position, and aiding the time-differenced carrier measurements with inertial measurements. Partial integration of a tactical grade inertial measurement unit provided the lowest error drift for the scenarios investigated, but the time-differenced carrier phase approach provided the most cost-feasible approach with similar accuracy. Finally, the relative position and odometry algorithms are used to generate a reference by which an automated following vehicle can replicate a lead vehicle's path of travel. The first method presented uses only the relative position information to determine a relative angle to the leader. Using the relative angle as a heading reference for a steering control causes the follower to drive at the lead vehicle, thereby creating a towing effect on the follower when both vehicles are in motion.
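
    A minimal sketch of the time-differenced carrier phase (TDCP) odometry idea: phase changes between epochs, projected along satellite line-of-sight unit vectors, are solved by least squares for the platform displacement plus a receiver clock term. The data are synthetic and the sign conventions are simplified:

```python
import numpy as np

# Unit line-of-sight vectors to 5 satellites (hypothetical geometry).
los = np.array([[0.6, 0.3, 0.74], [-0.5, 0.4, 0.77], [0.1, -0.8, 0.59],
                [0.7, -0.2, 0.69], [-0.3, -0.6, 0.74]])
los /= np.linalg.norm(los, axis=1, keepdims=True)

dx_true = np.array([0.52, -0.13, 0.02])  # true displacement between epochs (m)
clk = 0.8                                # receiver clock drift term (m)

# Time-differenced carrier phase in meters (wavelength * delta_phi), modeled
# here with satellite geometry change and atmospheric terms already compensated.
dphi = los @ dx_true + clk

# Unknowns: displacement (3) plus clock term (1); solve by least squares.
A = np.hstack([los, np.ones((len(los), 1))])
est, *_ = np.linalg.lstsq(A, dphi, rcond=None)
print(est[:3], est[3])                   # recovers dx_true and clk
```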

  13. Satellite mapping and automated feature extraction: Geographic information system-based change detection of the Antarctic coast

    NASA Astrophysics Data System (ADS)

    Kim, Kee-Tae

    Declassified Intelligence Satellite Photograph (DISP) data are important resources for measuring the geometry of the coastline of Antarctica. By using state-of-the-art digital imaging technology, bundle block triangulation based on tie points and control points derived from a RADARSAT-1 Synthetic Aperture Radar (SAR) image mosaic and the Ohio State University (OSU) Antarctic digital elevation model (DEM), the individual DISP images were accurately assembled into a map-quality mosaic of Antarctica as it appeared in 1963. The new map is one of the important benchmarks for gauging the response of the Antarctic coastline to changing climate. Automated coastline extraction algorithm design is the second theme of this dissertation. At the pre-processing stage, adaptive neighborhood filtering was used to remove the film-grain noise while preserving edge features. At the segmentation stage, an adaptive Bayesian approach to image segmentation was used to split the DISP imagery into its homogeneous regions, in which the fuzzy c-means clustering (FCM) technique and a Gibbs random field (GRF) model were introduced to estimate the conditional and prior probability density functions. A Gaussian mixture model was used to estimate reliable initial values for the FCM technique. At the post-processing stage, image object formation and labeling, removal of noisy image objects, and vectorization algorithms were sequentially applied to the segmented images to extract a vector representation of coastlines. Results were presented that demonstrate the effectiveness of the algorithm in segmenting the DISP data. For cloud-covered and low-contrast scenes, manual editing was carried out based on intermediate image processing and visual inspection in comparison with old paper maps. Through a geographic information system (GIS), the derived DISP coastline data were integrated with earlier and later data to assess continental-scale changes in the Antarctic coast.
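
    A minimal sketch of the fuzzy c-means step used in the segmentation stage, reduced to 1-D gray values; the full method also uses a Gibbs random field prior and Gaussian mixture initialization, which are omitted here:

```python
import numpy as np

def fuzzy_c_means(values, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on 1-D gray values; returns centers, memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), c))
    u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
    for _ in range(iters):
        w = u ** m
        centers = (w * values[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(values[:, None] - centers) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return centers, u

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 8, 500),     # dark (e.g. open water)
                         rng.normal(180, 10, 500)])  # bright (e.g. coast/ice)
centers, u = fuzzy_c_means(pixels)
print(np.sort(centers))                              # ~[60, 180]
```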

  14. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive aspects of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned, that eyes and eyebrows attract the most attention, and that the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces; the results suggest that racial features rely on detailed texture rather than shape. This work is fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.
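
    A minimal sketch of PCA-based facial feature extraction and synthesis of the kind described, using random arrays as stand-ins for aligned face images:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 200 aligned, flattened grayscale face images (64x64 pixels).
rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))

# Project faces onto the top principal components ("eigenfaces").
pca = PCA(n_components=20, whiten=True).fit(faces)
features = pca.transform(faces)      # 200 x 20 feature vectors per face
print(features.shape, pca.explained_variance_ratio_[:3])

# New faces can be synthesized by moving along a component axis:
synth = pca.mean_ + 2.0 * pca.components_[0]  # exaggerate the first feature axis
```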

  15. Automated extraction of DNA and RNA from a single formalin-fixed paraffin-embedded tissue section for analysis of both single-nucleotide polymorphisms and mRNA expression.

    PubMed

    Hennig, Guido; Gehrmann, Mathias; Stropp, Udo; Brauch, Hiltrud; Fritz, Peter; Eichelbaum, Michel; Schwab, Matthias; Schroth, Werner

    2010-12-01

    There is an increasing need for the identification of both DNA and RNA biomarkers from pathodiagnostic formalin-fixed paraffin-embedded (FFPE) tissue samples for the exploration of individualized therapy strategies in cancer. We investigated a fully automated, xylene-free nucleic acid extraction method for the simultaneous analysis of RNA and DNA biomarkers related to breast cancer. We copurified both RNA and DNA from a single 10-μm section of 210 paired samples of FFPE tumor and adjacent normal tissues (1-25 years of archival time) using a fully automated extraction method. Half of the eluate was DNase I digested for mRNA expression analysis performed by using reverse-transcription quantitative PCR for the genes estrogen receptor 1 (ESR1), progesterone receptor (PGR), v-erb-b2 erythroblastic leukemia viral oncogene homolog 2, neuro/glioblastoma derived oncogene homolog (avian) (ERBB2), epoxide hydrolase 1 (EPHX1), baculoviral IAP repeat-containing 5 (BIRC5), matrix metallopeptidase 7 (MMP7), vascular endothelial growth factor A (VEGFA), and topoisomerase (DNA) II alpha 170kDa (TOP2A). The remaining undigested aliquot was used for the analysis of 7 single-nucleotide polymorphisms (SNPs) by MALDI-TOF mass spectrometry. In 208 of 210 samples (99.0%) the protocol yielded robust quantification-cycle values for both RNA and DNA normalization. Expression of the 8 breast cancer genes was detected in 81%-100% of tumor tissues and 21%-100% of normal tissues. The 7 SNPs were successfully genotyped in 91%-97% of tumor and 94%-97% of normal tissues. Allele concordance between tumor and normal tissue was 98.9%-99.5%. This fully automated process allowed an efficient simultaneous extraction of both RNA and DNA from a single FFPE section and subsequent dual analysis of selected genes. High gene expression and genotyping detection rates demonstrate the feasibility of molecular profiling from limited archival patient samples.

  16. Medication Incidents Related to Automated Dose Dispensing in Community Pharmacies and Hospitals - A Reporting System Study

    PubMed Central

    Cheung, Ka-Chun; van den Bemt, Patricia M. L. A.; Bouvy, Marcel L.; Wensing, Michel; De Smet, Peter A. G. M.

    2014-01-01

    Introduction: Automated dose dispensing (ADD) is being introduced in several countries and the use of this technology is expected to increase as a growing number of elderly people need to manage their medication at home. ADD aims to improve medication safety and treatment adherence, but it may introduce new safety issues. This descriptive study provides insight into the nature and consequences of medication incidents related to ADD, as reported by healthcare professionals in community pharmacies and hospitals. Methods: The medication incidents that were submitted to the Dutch Central Medication incidents Registration (CMR) reporting system were selected and characterized independently by two researchers. Main Outcome Measures: Person discovering the incident, phase of the medication process in which the incident occurred, immediate cause of the incident, nature of the incident from the healthcare provider's perspective, nature of the incident from the patient's perspective, and consequent harm to the patient caused by the incident. Results: From January 2012 to February 2013 the CMR received 15,113 incidents: 3,685 (24.4%) from community pharmacies and 11,428 (75.6%) from hospitals. Overall, 1 in 50 reported incidents (268/15,113 = 1.8%) was related to ADD; community pharmacies reported proportionally more ADD-related incidents (227/3,685 = 6.2%) than hospitals (41/11,428 = 0.4%). The immediate cause of an incident was often a change in the patient's medicine regimen or relocation. Most reported incidents occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. Conclusion: A proportion of incidents was related to ADD, and such incidents are reported regularly, especially by community pharmacies. Most occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. A change in the patient's medicine regimen or relocation was often the immediate cause of an incident.

  17. Semi-automated disk-type solid-phase extraction method for polychlorinated dibenzo-p-dioxins and dibenzofurans in aqueous samples and its application to natural water.

    PubMed

    Choi, J W; Lee, J H; Moon, B S; Baek, K H

    2007-07-20

    A disk-type solid-phase extraction (SPE) method was used for the extraction of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in natural water and tap water. Since this SPE system comprised airtight glass covers with a decompression pump, it enabled continuous extraction with semi-automation. The disk-type SPE method was validated by comparing its recovery rates of spiked internal standards with those of liquid-liquid extraction (LLE). The recovery ranges of both methods were similar in terms of ¹³C-labeled internal standards: 64.3-99.2% for the LLE and 52.4-93.6% for the SPE. For the native spike of 1,3,6,8-tetrachlorinated dibenzo-p-dioxin (TCDD) and octachlorinated dibenzo-p-dioxin (OCDD), the recoveries in the SPE were in the normal range of 77.9-101.1%. However, in the LLE, the recoveries of 1,3,6,8-TCDD decreased significantly; one reason for the low recovery is that the solubility of this congener is high. The semi-automated SPE method was applied to the analysis of different types of water: river water, snow, sea water, raw water for drinking purposes, and tap water. PCDD/F congeners were found in some sea water and snow samples, while their concentrations in the other samples were below the limits of detection (LODs). This SPE system is appropriate for the routine analysis of water samples below 50 L.

  18. Texture- and object-related automated information analysis in histological still images of various organs.

    PubMed

    Kayser, Klaus; Hoshang, Sabah Amir; Metze, Konradin; Goldmann, Torsten; Vollmer, Ekkehard; Radziszowski, Dominik; Kosjerina, Zdravko; Mireskandari, Masoud; Kayser, Gian

    2008-12-01

    The aim was to create algorithms and application tools that can support routine diagnoses of various organs. A generalized algorithm was developed that permits the evaluation of diagnosis-associated image features obtained from hematoxylin-eosin-stained histopathologic slides. The procedure was tested for screening of tumor tissue vs. tumor-free tissue in 1,442 cases covering various organs. Tissue samples studied include colon, lung, breast, pleura, stomach and thyroid. The algorithm distinguishes between texture-related and object-related parameters. Texture-based information, defined as a gray-value-per-pixel measure, is independent of any segmentation procedure; it results in recursive vectors derived from time-series analysis and in image features obtained by spatially dependent and independent transformations. Object-based features are defined as gray values measured per biologic object. The accuracy of automated crude classification was between 95% and 100%, based upon a learning set of 10 cases per diagnosis class, and results were independent of the analyzed organ. The algorithm can also distinguish between benign and malignant tumors of the colon, between epithelial mesothelioma and pleural carcinomatosis, and between different common pulmonary carcinomas. Our algorithm distinguishes accurately among crude histologic diagnoses of various organs. It is a promising technique that can assist tissue-based diagnosis and be expanded to virtual slide evaluation.
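
    A minimal sketch of the texture- vs. object-related distinction on a toy image: texture features are computed over all pixels with no segmentation, while object features are aggregated per labeled object; image and mask are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128))  # toy 8-bit image
labels = np.zeros_like(img)             # toy segmentation mask (0 = background)
labels[30:60, 40:80] = 1                # object 1
labels[80:110, 20:50] = 2               # object 2

# Texture-related features: gray-value statistics per pixel,
# computed without any segmentation step.
hist = np.bincount(img.ravel(), minlength=256) / img.size
texture = {"mean": float(img.mean()), "std": float(img.std()),
           "entropy": float(-(hist * np.log2(hist + 1e-12)).sum())}

# Object-related features: gray values aggregated per segmented object.
objects = {k: float(img[labels == k].mean()) for k in (1, 2)}
print(texture, objects)
```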

  19. Automated spatio-temporal analysis of dendritic spines and related protein dynamics.

    PubMed

    On, Vincent; Zahedi, Atena; Ethell, Iryna M; Bhanu, Bir

    2017-01-01

    Cofilin and other Actin-regulating proteins are essential in regulating the shape of dendritic spines, which are sites of neuronal communications in the brain, and their malfunctions are implicated in neurodegeneration related to aging. The analysis of cofilin motility in dendritic spines using fluorescence video-microscopy may allow for the discovery of its effects on synaptic functions. To date, the flow of cofilin has not been analyzed by automatic means. This paper presents Dendrite Protein Analysis (DendritePA), a novel automated pattern recognition software to analyze protein trafficking in neurons. Using spatiotemporal information present in multichannel fluorescence videos, the DendritePA generates a temporal maximum intensity projection that enhances the signal-to-noise ratio of important biological structures, segments and tracks dendritic spines, estimates the density of proteins in spines, and analyzes the flux of proteins through the dendrite/spine boundary. The motion of a dendritic spine is used to generate spine energy images, which are used to automatically classify the shape of common dendritic spines such as stubby, mushroom, or thin. By tracking dendritic spines over time and using their intensity profiles, the system can analyze the flux patterns of cofilin and other fluorescently stained proteins. The cofilin flux patterns are found to correlate with the dynamic changes in dendritic spine shapes. Our results also have shown that the activation of cofilin using genetic manipulations leads to immature spines while its inhibition results in an increase in mature spines.

  20. Employment and residential characteristics in relation to automated external defibrillator locations

    PubMed Central

    Griffis, Heather M.; Band, Roger A; Ruther, Matthew; Harhay, Michael; Asch, David A.; Hershey, John C.; Hill, Shawndra; Nadkarni, Lindsay; Kilaru, Austin; Branas, Charles C.; Shofer, Frances; Nichol, Graham; Becker, Lance B.; Merchant, Raina M.

    2015-01-01

    Background: Survival from out-of-hospital cardiac arrest (OHCA) is generally poor and varies by geography. Variability in automated external defibrillator (AED) locations may be a contributing factor. To inform optimal placement of AEDs, we investigated AED access in a major US city relative to demographic and employment characteristics. Methods and Results: This was a retrospective analysis of a Philadelphia AED registry (2,559 total AEDs). The 2010 US Census and the Local Employment Dynamics (LED) database were used at the ZIP code level. AED access was calculated as the weighted areal percentage of each ZIP code covered by a 400 meter radius around each AED. Of 47 ZIP codes, only 9% (4) were high AED service areas. In 26% (12) of ZIP codes, less than 35% of the area was covered by AED service areas. Higher AED access ZIP codes were more likely to have a moderately populated residential area (p=0.032), higher median household income (p=0.006), and higher paying jobs (p=0.008). Conclusions: The locations of AEDs vary across ZIP codes; select residential and employment characteristics explain some of the variation. Further work evaluating OHCA locations, AED use and availability, and OHCA outcomes could inform AED placement policies. Optimizing the placement of AEDs through this work may help to increase survival. PMID:26856232
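
    A minimal sketch of the areal coverage computation, assuming projected coordinates in meters and using shapely; the ZIP polygon and AED points are hypothetical:

```python
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

# Hypothetical projected coordinates (meters): one ZIP polygon, three AEDs.
zip_poly = Polygon([(0, 0), (2000, 0), (2000, 2000), (0, 2000)])
aeds = [Point(500, 500), Point(1500, 500), Point(1000, 1600)]

# Union of 400 m service circles, clipped to the ZIP code.
service = unary_union([p.buffer(400) for p in aeds])
covered = zip_poly.intersection(service).area / zip_poly.area
print(f"AED service coverage: {covered:.1%}")
```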

  1. Characterization of eleutheroside B metabolites derived from an extract of Acanthopanax senticosus Harms by high-resolution liquid chromatography/quadrupole time-of-flight mass spectrometry and automated data analysis.

    PubMed

    Lu, Fang; Sun, Qiang; Bai, Yun; Bao, Shunru; Li, Xuzhao; Yan, Guangli; Liu, Shumin

    2012-10-01

    We elucidated the structure and metabolite profile of eleutheroside B, a component derived from the extract of Acanthopanax senticosus Harms, after oral administration of the extract in rats. Samples of rat plasma were collected and analyzed by a selective high-resolution liquid chromatography/quadrupole time-of-flight mass spectrometry (UPLC/Q-TOF MS) method with automated data analysis. A total of 11 metabolites were detected: four were identified, and three of those four are reported here for the first time. The three new plasma metabolites were identified on the basis of mass fragmentation patterns and literature reports. The major in vivo metabolic processes associated with eleutheroside B in A. senticosus include demethylation, acetylation, oxidation and glucuronidation after deglycosylation. A fairly comprehensive metabolic pathway is proposed for eleutheroside B. Our results provide a meaningful basis for drug discovery, design and clinical applications related to A. senticosus in traditional Chinese medicine.

  2. Automated Metadata Extraction

    DTIC Science & Technology

    2008-06-01

  3. Highly sensitive routine method for urinary 3-hydroxybenzo[a]pyrene quantitation using liquid chromatography-fluorescence detection and automated off-line solid phase extraction.

    PubMed

    Barbeau, Damien; Maître, Anne; Marques, Marie

    2011-03-21

    Many workers and also the general population are exposed to polycyclic aromatic hydrocarbons (PAHs), and benzo[a]pyrene (BaP) was recently classified as carcinogenic for humans (group 1) by the International Agency for Research on Cancer. Biomonitoring of PAHs exposure is usually performed by urinary 1-hydroxypyrene (1-OHP) analysis. 1-OHP is a metabolite of pyrene, a non-carcinogenic PAH. In this work, we developed a very simple but highly sensitive analytical method of quantifying one urinary metabolite of BaP, 3-hydroxybenzo[a]pyrene (3-OHBaP), to evaluate carcinogenic PAHs exposure. After hydrolysis of 10 mL urine for two hours and concentration by automated off-line solid phase extraction, the sample was injected in a column-switching high-performance liquid chromatography fluorescence detection system. The limit of quantification was 0.2 pmol L⁻¹ (0.05 ng L⁻¹) and the limit of detection was estimated at 0.07 pmol L⁻¹ (0.02 ng L⁻¹). Linearity was established for 3-OHBaP concentrations ranging from 0.4 to 74.5 pmol L⁻¹ (0.1 to 20 ng L⁻¹). Relative within-day standard deviation was less than 3% and relative between-day standard deviation was less than 4%. In non-occupationally exposed subjects, median concentrations for smokers compared with non-smokers were 3.5 times higher for 1-OHP (p<0.001) and 2 times higher for 3-OHBaP (p<0.05). The two urinary biomarkers were correlated in smokers (ρ=0.636; p<0.05; n=10) but not in non-smokers (ρ=0.09; p>0.05; n=21).

  4. Determination of Low Concentrations of Acetochlor in Water by Automated Solid-Phase Extraction and Gas Chromatography with Mass-Selective Detection

    USGS Publications Warehouse

    Lindley, C.E.; Stewart, J.T.; Sandstrom, M.W.

    1996-01-01

    A sensitive and reliable gas chromatographic/mass spectrometric (GC/MS) method for determining acetochlor in environmental water samples was developed. The method involves automated extraction of the herbicide from a filtered 1 L water sample through a C18 solid-phase extraction column, elution from the column with hexane-isopropyl alcohol (3 + 1), and concentration of the extract with nitrogen gas. The herbicide is quantitated by capillary-column GC/MS with selected-ion monitoring of 3 characteristic ions. The single-operator method detection limit for reagent water samples is 0.0015 μg/L. Mean recoveries ranged from about 92 to 115% for 3 water matrixes fortified at 0.05 and 0.5 μg/L. Average single-operator precision, over the course of 1 week, was better than 5%.

  5. MG-Digger: An Automated Pipeline to Search for Giant Virus-Related Sequences in Metagenomes

    PubMed Central

    Verneau, Jonathan; Levasseur, Anthony; Raoult, Didier; La Scola, Bernard; Colson, Philippe

    2016-01-01

    The number of metagenomic studies conducted each year is growing dramatically. Storage and analysis of such big data is difficult and time-consuming. Interestingly, analysis shows that environmental and human metagenomes include a significant amount of non-annotated sequences, representing a 'dark matter.' We established a bioinformatics pipeline that automatically detects metagenome reads matching query sequences from a given set and applied this tool to the detection of sequences matching large and giant DNA viral members of the proposed order Megavirales or virophages. A total of 1,045 environmental and human metagenomes (≈ 1 terabase) were collected, processed, and stored on our bioinformatics server. In addition, nucleotide and protein sequences from 93 Megavirales representatives, including 19 giant viruses of amoeba, and 5 virophages, were collected. The pipeline was generated by scripts written in the Python language and entitled MG-Digger. Metagenomes previously found to contain megavirus-like sequences were tested as controls. MG-Digger was able to annotate hundreds of metagenome sequences as best matching those of giant viruses. These sequences were most often found to be similar to phycodnavirus or mimivirus sequences, but included reads related to the recently available pandoraviruses, Pithovirus sibericum, and faustoviruses. Compared to other tools, MG-Digger combined stand-alone use on Linux or Windows operating systems through a user-friendly interface, implementation of ready-to-use customized metagenome databases and query sequence databases, adjustable parameters for BLAST searches, and creation of output files containing selected reads with best-match identification. Compared to Metavir 2, a reference tool in viral metagenome analysis, MG-Digger detected 8% more true-positive Megavirales-related reads in a control metagenome. The present work shows that massive, automated and recurrent analyses of metagenomes are effective in improving such knowledge.
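
    Not MG-Digger itself, but a minimal sketch of the core filtering step such a pipeline performs on BLAST tabular output; the thresholds and file name are assumptions:

```python
import csv

# Assumed thresholds for flagging a read as Megavirales-like.
MIN_IDENTITY = 50.0   # percent identity
MAX_EVALUE   = 1e-3

def megavirales_hits(blast_tsv):
    """Yield (read_id, subject_id) for hits passing the filters.

    Expects BLAST tabular output (-outfmt 6): qseqid sseqid pident length
    mismatch gapopen qstart qend sstart send evalue bitscore.
    """
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            pident, evalue = float(row[2]), float(row[10])
            if pident >= MIN_IDENTITY and evalue <= MAX_EVALUE:
                yield row[0], row[1]

# for read, subject in megavirales_hits("metagenome_vs_megavirales.tsv"):
#     print(read, subject)
```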

  7. An automated method to analyze language use in patients with schizophrenia and their first-degree relatives

    PubMed Central

    Elvevåg, Brita; Foltz, Peter W.; Rosenstein, Mark; DeLisi, Lynn E.

    2009-01-01

    Communication disturbances are prevalent in schizophrenia, and since it is a heritable illness these are likely present (albeit in a muted form) in the relatives of patients. Given the time-consuming and often subjective nature of discourse analysis, these deviances are frequently not assayed in large-scale studies. Recent work in computational linguistics and statistically based semantic analysis has shown the potential and power of automated analysis of communication. We present an automated and objective approach to modeling discourse that detects very subtle deviations between probands, their first-degree relatives and unrelated healthy controls. Although these findings should be regarded as preliminary due to the limitations of the data at our disposal, we present a brief analysis of the models that best differentiate these groups in order to illustrate the utility of the method for future explorations of how language components are differentially affected by familial and illness-related issues. PMID:20383310
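
    A minimal sketch in the spirit of statistical semantic analysis: utterances are embedded in a reduced TF-IDF space (latent semantic analysis style) and discourse coherence is proxied by the similarity of consecutive utterances; the toy sentences are illustrative, not patient data:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

utterances = [
    "I went to the store to buy bread.",
    "The bakery near my house sells fresh loaves.",
    "Satellites orbit the earth at high speed.",
]

# LSA-style pipeline: TF-IDF term space reduced to latent semantic dimensions.
X = TfidfVectorizer().fit_transform(utterances)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Coherence proxy: semantic similarity between consecutive utterances.
for i in range(len(Z) - 1):
    sim = cosine_similarity(Z[i:i+1], Z[i+1:i+2])[0, 0]
    print(i, i + 1, round(float(sim), 2))
```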

  8. A Logic-Based Approach to Relation Extraction from Texts

    NASA Astrophysics Data System (ADS)

    Horváth, Tamás; Paass, Gerhard; Reichartz, Frank; Wrobel, Stefan

    In recent years, text mining has moved far beyond the classical problem of text classification with an increased interest in more sophisticated processing of large text corpora, such as, for example, evaluations of complex queries. This and several other tasks are based on the essential step of relation extraction. This problem becomes a typical application of learning logic programs by considering the dependency trees of sentences as relational structures and examples of the target relation as ground atoms of a target predicate. In this way, each example is represented by a definite first-order Horn-clause. We show that an adaptation of Plotkin's least general generalization (LGG) operator can effectively be applied to such clauses and propose a simple and effective divide-and-conquer algorithm for listing a certain set of LGGs. We use these LGGs to generate binary features and compute the hypothesis by applying SVM to the feature vectors obtained. Empirical results on the ACE-2003 benchmark dataset indicate that the performance of our approach is comparable to state-of-the-art kernel methods.
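
    A minimal sketch of Plotkin's LGG (anti-unification) on ground atoms represented as nested tuples; the full method applies this operator to definite Horn clauses built from dependency trees:

```python
# Terms are nested tuples ('functor', arg1, ...); anything else is a constant.

def lgg(t1, t2, table=None):
    """Least general generalization (anti-unification) of two terms."""
    if table is None:
        table = {}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise.
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    if t1 == t2:
        return t1
    # Distinct subterms map to the same fresh variable on every recurrence.
    key = (t1, t2)
    if key not in table:
        table[key] = f"X{len(table)}"
    return table[key]

print(lgg(('rel', 'john', 'mary'), ('rel', 'peter', 'mary')))
# -> ('rel', 'X0', 'mary')
```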

  9. Fully automated diagnosis of papilledema through robust extraction of vascular patterns and ocular pathology from fundus photographs

    PubMed Central

    Fatima, Khush Naseeb; Hassan, Taimur; Akram, M. Usman; Akhtar, Mahmood; Butt, Wasi Haider

    2017-01-01

    Rapid development in the field of ophthalmology has increased the demand for computer-aided diagnosis of various eye diseases. Papilledema is an eye disease in which the optic disc of the eye is swollen due to an increase in intracranial pressure. This increased pressure can cause severe encephalic complications like abscess, tumors, meningitis or encephalitis, which may lead to a patient's death. Although there have been several papilledema case studies reported from a medical point of view, only a few researchers have presented automated algorithms for this problem. This paper presents a novel computer-aided system that aims to automatically detect papilledema from fundus images. Firstly, the fundus images are preprocessed by optic disc detection and vessel segmentation. After preprocessing, a total of 26 different features are extracted to capture possible changes in the optic disc due to papilledema. These features are divided into four categories based upon their color, textural, vascular and disc margin obscuration properties. The best features are then selected and combined to form a feature matrix that is used to distinguish between normal images and images with papilledema using a supervised support vector machine (SVM) classifier. The proposed method is tested on 160 fundus images obtained from two different data sets, i.e. structured analysis of the retina (STARE), which is a publicly available data set, and our local data set acquired from the Armed Forces Institute of Ophthalmology (AFIO). The STARE data set contained 90 and our local data set 70 fundus images, annotated with the help of two ophthalmologists. We report detection accuracies of 95.6% for STARE, 87.4% for the local data set, and 85.9% for the combined STARE and local data sets. The proposed system is fast and robust in detecting papilledema from fundus images, with promising results.
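
    A minimal sketch of the final classification step: a feature matrix fed to an SVM with cross-validation. The random features and labels below are placeholders for the 26 extracted descriptors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in: 160 fundus images x 26 features (color, texture,
# vascular, disc-margin), with binary labels (1 = papilledema).
rng = np.random.default_rng(0)
X = rng.random((160, 26))
y = rng.integers(0, 2, 160)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy: {scores.mean():.1%} +/- {scores.std():.1%}")
```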

  10. Effects of a psychophysiological system for adaptive automation on performance, workload, and the event-related potential P300 component

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J 3rd; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    The present study examined the effects of an electroencephalographic- (EEG-) based system for adaptive automation on tracking performance and workload. In addition, event-related potentials (ERPs) to a secondary task were derived to determine whether they would provide an additional degree of workload specificity. Participants were run in an adaptive automation condition, in which the system switched between manual and automatic task modes based on the value of each individual's own EEG engagement index; a yoked control condition; or another control group, in which task mode switches followed a random pattern. Adaptive automation improved performance and resulted in lower levels of workload. Further, the P300 component of the ERP paralleled the sensitivity to task demands of the performance and subjective measures across conditions. These results indicate that it is possible to improve performance with a psychophysiological adaptive automation system and that ERPs may provide an alternative means for distinguishing among levels of cognitive task demand in such systems. Actual or potential applications of this research include improved methods for assessing operator workload and performance.
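
    A minimal sketch of an engagement index of the kind used in such systems, computed as beta / (alpha + theta) from spectral band powers; the band edges, threshold, and switching direction are assumptions for illustration, not the study's parameters:

```python
import numpy as np
from scipy.signal import welch

def engagement_index(eeg, fs=256):
    """beta / (alpha + theta) from one EEG channel (band edges assumed)."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)
    band = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum()
    theta, alpha, beta = band(4, 8), band(8, 13), band(13, 22)
    return beta / (alpha + theta)

def choose_mode(index_history, threshold=0.4, window=5):
    """Illustrative switching rule: low engagement hands control back to the
    operator (manual) to re-engage; high engagement lets automation run."""
    recent = np.mean(index_history[-window:])
    return "manual" if recent < threshold else "automatic"

eeg = np.random.default_rng(0).standard_normal(256 * 10)  # 10 s synthetic EEG
print(round(engagement_index(eeg), 2))
```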

  12. [Determination of isothiocyanates and related compounds in mustard extract and horseradish extract used as natural food additives].

    PubMed

    Uematsu, Yoko; Hirata, Keiko; Suzuki, Kumi; Iida, Kenji; Ueta, Tadahiko; Kamata, Kunihiro

    2002-02-01

    The amounts of isothiocyanates and related compounds in a mustard extract and a horseradish extract for food additive use were determined by GC, after confirmation of the identity of the GC peaks by GC/MS. The amounts of allyl isothiocyanate (including allyl thiocyanate, since most of the allyl thiocyanate detected in the samples was assumed to have formed from allyl isothiocyanate during GC analysis) were 97.6% in the mustard extract and 85.4% in the horseradish extract. Total amounts of the identified isothiocyanates in the mustard extract and the horseradish extract were 98.5% and 95.4%, respectively. Allyl cyanide, a degradation product of allyl isothiocyanate, was found in the mustard extract and the horseradish extract at levels of 0.57% and 1.73%, respectively. beta-Phenylethyl cyanide, a possible degradation product of beta-phenylethyl isothiocyanate, and allyl sulfides were found in the horseradish extract at levels of 0.13% and 0.46%, respectively. Allylamine, another degradation product of allyl isothiocyanate, was determined after acetylation and was found in the mustard extract and the horseradish extract at levels of 8 μg/g and 67 μg/g, respectively.

  13. Simultaneous determination of dextromethorphan, dextrorphan, and guaifenesin in human plasma using semi-automated liquid/liquid extraction and gradient liquid chromatography tandem mass spectrometry.

    PubMed

    Eichhold, Thomas H; McCauley-Myers, David L; Khambe, Deepa A; Thompson, Gary A; Hoke, Steven H

    2007-01-17

    A method for the simultaneous determination of dextromethorphan (DEX), dextrorphan (DET), and guaifenesin (GG) in human plasma was developed, validated, and applied to determine plasma concentrations of these compounds in samples from six clinical pharmacokinetic (PK) studies. Semi-automated liquid handling systems were used to perform the majority of the sample manipulation, including liquid/liquid extraction (LLE) of the analytes from human plasma. Stable-isotope-labeled analogues were utilized as internal standards (ISTDs) for each analyte to facilitate accurate and precise quantification. Extracts were analyzed using gradient liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Use of semi-automated LLE with LC-MS/MS proved to be a very rugged and reliable approach for analysis of more than 6200 clinical study samples. The lower limit of quantification was validated at 0.010, 0.010, and 1.0 ng/mL of plasma for DEX, DET, and GG, respectively. Accuracy and precision of quality control (QC) samples for all three analytes met FDA Guidance criteria of ±15% for average QC accuracy with coefficients of variation less than 15%. Data from the thorough evaluation of the method during development, validation, and application are presented to characterize selectivity, linearity, over-range sample analysis, accuracy, precision, autosampler carry-over, ruggedness, extraction efficiency, ionization suppression, and stability. Pharmacokinetic data are also provided to illustrate improvements in systemic drug and metabolite concentration-time profiles that were achieved by formulation optimization.

  14. Time-resolved Characterization of Particle Associated Polycyclic Aromatic Hydrocarbons using a newly-developed Sequential Spot Sampler with Automated Extraction and Analysis.

    PubMed

    Eiguren Fernandez, Arantzazu; Lewis, Gregory S; Spielman, Steven R; Hering, Susanne V

    2014-10-01

    A versatile and compact sampling system, the Sequential Spot Sampler (S3), has been developed for pre-concentrated, time-resolved, dry collection of fine and ultrafine particles. Using a temperature-moderated laminar flow water condensation method, ambient particles as small as 6 nm are deposited within a dry, 1-mm diameter spot. Sequential samples are collected on a multiwell plate. Chemical analyses are laboratory-based, but automated. The sample preparation, extraction and chemical analysis steps are all handled through a commercially-available, needle-based autosampler coupled to a liquid chromatography system. This automation is enabled by the small deposition area of the collection. The entire sample is extracted into a 50-100 μL volume of solvent, providing quantifiable samples with small collected air volumes. A pair of S3 units was deployed in Stockton (CA) from November 2011 to February 2012. PM2.5 samples were collected every 12 h, and analyzed for polycyclic aromatic hydrocarbons (PAHs). In parallel, conventional filter samples were collected for 48 h and used to assess the new system's performance. An automated sample preparation and extraction procedure was developed for samples collected using the S3. Collocated data from the two sequential spot samplers were highly correlated for all measured compounds, with a regression slope of 1.1 and r² = 0.9 for all measured concentrations. S3/filter ratios for the mean concentration of each individual PAH vary between 0.82 and 1.33, with the larger variability observed for the semivolatile components. The ratio for total PAH concentrations was 1.08. Total PAH concentrations showed a temporal trend similar to that of ambient PM2.5 concentrations. Source apportionment analysis estimated a significant contribution of biomass burning to ambient PAH concentrations during winter.

  17. Determination of phthalates in bottled water by automated on-line solid phase extraction coupled to liquid chromatography with UV detection.

    PubMed

    Salazar-Beltrán, Daniel; Hinojosa-Reyes, Laura; Ruiz-Ruiz, Edgar; Hernández-Ramírez, Aracely; Luis Guzmán-Mar, Jorge

    2017-06-01

    An on-line solid phase extraction coupled to liquid chromatography with UV detection (SPE/LC-UV) method was automated by means of a multisyringe flow-injection analysis (MSFIA) system for the determination of three phthalic acid esters (PAEs). The PAEs determined in drinking water stored in polyethylene terephthalate (PET) bottles of ten commercial brands were dimethyl phthalate (DMP), diethyl phthalate (DEP) and dibutyl phthalate (DBP). A C18-bonded silica membrane was used for isolation and enrichment of the PAEs in water samples. The calibration range of the SPE/LC-UV method was 2.5-100 μg L(-1) for DMP and DEP and 10-100 μg L(-1) for DBP, with correlation coefficients (r) ranging from 0.9970 to 0.9975. Limits of detection (LODs) were between 0.7 and 2.4 μg L(-1). Inter-day reproducibility at two concentration levels (10 and 100 μg L(-1)), expressed as relative standard deviation (%RSD), was in the range of 0.9-4.0%. The solvent volume was reduced to 18 mL, with a total analysis time of 48 min per sample. The major species detected in bottled water samples was DBP, reaching concentrations between 20.5 and 82.8 μg L(-1). The recovery percentages for the three analytes in drinking water were 80-115%. The migration test showed large variation in the sum of migrated PAEs (10.2-50.6 μg L(-1)) among the PET bottle brands analyzed, indicating that the presence of these contaminants in the plastic containers may depend on the raw materials and the conditions used during their production process.
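    The calibration figures of merit quoted above (correlation coefficient, LOD) can be reproduced from raw calibration data as sketched below. The peak areas are invented, and the 3.3·s/slope rule is one common LOD estimator, not necessarily the one the authors used.

```python
import numpy as np
from scipy import stats

# Invented calibration points for one phthalate: concentration (ug/L) vs. peak area.
conc = np.array([2.5, 5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([410.0, 805.0, 1630.0, 4020.0, 8150.0, 16300.0])

fit = stats.linregress(conc, area)
print(f"r = {fit.rvalue:.4f}")  # compare with the reported r of 0.9970-0.9975

# LOD estimated as 3.3 * (residual standard deviation) / slope.
resid_sd = np.std(area - (fit.intercept + fit.slope * conc), ddof=2)
print(f"LOD ~ {3.3 * resid_sd / fit.slope:.2f} ug/L")
```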

  18. Automated mini-column solid-phase extraction cleanup for high-throughput analysis of chemical contaminants in foods by low-pressure gas chromatography – tandem mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography – tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. ...

  19. A filter paper-based microdevice for low-cost, rapid, and automated DNA extraction and amplification from diverse sample types.

    PubMed

    Gan, Wupeng; Zhuang, Bin; Zhang, Pengfei; Han, Junping; Li, Cai-Xia; Liu, Peng

    2014-10-07

    A plastic microfluidic device that integrates a filter disc as a DNA capture phase was successfully developed for low-cost, rapid and automated DNA extraction and PCR amplification from various raw samples. The microdevice was constructed by sandwiching a piece of Fusion 5 filter, as well as a PDMS (polydimethylsiloxane) membrane, between two PMMA (poly(methyl methacrylate)) layers. An automated DNA extraction from 1 μL of human whole blood can be finished on the chip in 7 minutes by sequentially aspirating NaOH, HCl, and water through the filter. The filter disc containing extracted DNA was then taken out directly for PCR. On-chip DNA purification from 0.25-1 μL of human whole blood yielded 8.1-21.8 ng of DNA, higher than the yields obtained using QIAamp® DNA Micro kits. To realize DNA extraction from raw samples, an additional sample loading chamber containing a filter net with an 80 μm mesh size was designed in front of the extraction chamber to accommodate sample materials. Real-world samples, including whole blood, dried blood stains on Whatman® 903 paper, dried blood stains on FTA™ cards, buccal swabs, saliva, and cigarette butts, can all be processed in the system in 8 minutes. In addition, multiplex amplification of 15 STR (short tandem repeat) loci and Sanger-based DNA sequencing of the 520 bp GJB2 gene were accomplished from the filters that contained extracted DNA from blood. To further prove the feasibility of integrating this extraction method with downstream analyses, "in situ" PCR amplifications were successfully performed in the DNA extraction chamber following DNA purification from blood and blood stains, without DNA elution. Using a modified protocol to bond the PDMS and PMMA, our plastic PDMS devices withstood the PCR process without any leakage. This study represents a significant step towards the practical application of on-chip DNA extraction methods, as well as the development of fully integrated genetic analytical systems.

  20. On the Relation between Automated Essay Scoring and Modern Views of the Writing Construct

    ERIC Educational Resources Information Center

    Deane, Paul

    2013-01-01

    This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, they primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…

  1. Method for extracting copper, silver and related metals

    DOEpatents

    Moyer, Bruce A.; McDowell, W. J.

    1990-01-01

    A process for selectively extracting precious metals such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  2. Method for extracting copper, silver and related metals

    DOEpatents

    Moyer, B.A.; McDowell, W.J.

    1987-10-23

    A process for selectively extracting precious metals such as silver and gold concurrent with copper extraction from aqueous solutions containing the same. The process utilizes tetrathiamacrocycles and high molecular weight organic acids that exhibit a synergistic relationship when complexing with certain metal ions thereby removing them from ore leach solutions.

  3. Determination of 74 new psychoactive substances in serum using automated in-line solid-phase extraction-liquid chromatography-tandem mass spectrometry.

    PubMed

    Lehmann, Sabrina; Kieliba, Tobias; Beike, Justus; Thevis, Mario; Mercer-Chalmers-Bender, Katja

    2017-10-01

    A detailed description is given of the development and validation of a fully automated in-line solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE-LC-MS/MS) method capable of detecting 90 central-stimulating new psychoactive substances (NPS) and 5 conventional amphetamine-type stimulants (amphetamine, 3,4-methylenedioxy-methamphetamine (MDMA), 3,4-methylenedioxy-amphetamine (MDA), 3,4-methylenedioxy-N-ethyl-amphetamine (MDEA), methamphetamine) in serum. The aim was to apply the validated method to forensic samples. The preparation of 150 μL of serum was performed by an Instrument Top Sample Preparation (ITSP)-SPE with mixed-mode cation exchanger cartridges. The extracts were directly injected into an LC-MS/MS system, using a biphenyl column and gradient elution with 2 mM ammonium formate/0.1% formic acid and acetonitrile/0.1% formic acid as mobile phases. The chromatographic run time amounts to 9.3 min (including re-equilibration). The total cycle time is 11 min, due to the interlacing between sample preparation and analysis. The method was fully validated using 69 NPS and five conventional amphetamine-type stimulants, according to the guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh). The guidelines were fully met for 62 analytes (with a limit of detection (LOD) between 0.2 and 4 μg/L), whilst full validation was not feasible for the remaining 12 analytes. For the fully validated analytes, the method achieved linearity in the 5 μg/L (lower limit of quantification, LLOQ) to 250 μg/L range (coefficients of determination > 0.99). Recoveries for 69 of these compounds were greater than 50%, with relative standard deviations ≤ 15%. The validated method was then tested for its capability in detecting a further 21 NPS, thus totalling 95 tested substances. An LOD between 0.4 and 1.6 μg/L was obtained for these 21 additional, qualitatively measured substances. The method was subsequently successfully applied to 28 specimens from

  4. Toward automated classification of consumers' cancer-related questions with a new taxonomy of expected answer types.

    PubMed

    McRoy, Susan; Jones, Sean; Kurmally, Adam

    2016-09-01

    This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches of feature selection and merging categories achieved only small improvements in classifier accuracy. Best performance was achieved using weighted classification and resampling methods, the latter yielding an F1 score of 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if the natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions.
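    A minimal sketch of the class-weighting idea on toy data follows; the questions, labels, and pipeline are illustrative assumptions, not the authors' corpus, taxonomy, or feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy cancer-related questions labeled with expected answer types.
questions = [
    "what are the early symptoms of lung cancer",
    "how is stage II melanoma usually treated",
    "what causes childhood leukemia",
    "how long does chemotherapy for breast cancer last",
]
labels = ["symptom", "treatment", "cause", "treatment"]

# class_weight='balanced' up-weights rare classes, a simple analogue of the
# weighted classification the authors found necessary on skewed distributions.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
clf.fit(questions, labels)
print(clf.predict(["what is the treatment for prostate cancer"]))
```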

  5. Automated procedure for determination of ammonia in concrete with headspace single-drop micro-extraction by stepwise injection spectrophotometric analysis.

    PubMed

    Timofeeva, Irina; Khubaibullin, Ilnur; Kamencev, Mihail; Moskvin, Aleksey; Bulatov, Andrey

    2015-02-01

    A novel automatic stepwise injection headspace single-drop micro-extraction system is proposed as a versatile approach for the automated determination of volatile compounds. Its application is demonstrated for ammonia determination in concrete samples. Ammonia gas was produced from ammonium ions and extracted on-line into 5 μL of 0.1 M H3PO4 to eliminate the interfering effect of concrete species on the ammonia stepwise injection spectrophotometric determination. The linear range was 0.1-1 mg kg(-1), with an LOD of 30 µg kg(-1). The sample throughput was 4 h(-1). The system has been successfully applied to the determination of ammonia in concrete.

  6. 3D printed device including disk-based solid-phase extraction for the automated speciation of iron using the multisyringe flow injection analysis technique.

    PubMed

    Calderilla, Carlos; Maya, Fernando; Cerdà, Víctor; Leal, Luz O

    2017-12-01

    The development of advanced manufacturing techniques is crucial for the design of novel analytical tools with unprecedented features. Advanced manufacturing, also known as 3D printing, has been explored for the first time to fabricate modular devices with integrated features for disk-based automated solid-phase extraction (SPE). A modular device integrating analyte oxidation, disk-based SPE and analyte complexation was fabricated using stereolithographic 3D printing. The 3D printed device connects directly to flow-based analytical instrumentation, replacing typical flow networks based on discrete elements. As proof of concept, the 3D printed device was implemented in a multisyringe flow injection analysis (MSFIA) system and applied to the fully automated speciation, SPE and spectrophotometric quantification of Fe in water samples. The limit of detection for total Fe determination was 7 ng, with a dynamic linear range from 22 ng to 2400 ng Fe (3 mL sample). An intra-day RSD of 4% (n = 12) and an inter-day RSD of 4.3% (n = 5, 3 mL sample, different days with different disks) were obtained. Incorporating integrated 3D printed devices into automated flow-based techniques improved sensitivity (an 85% increase in measured peak height for the determination of total Fe) in comparison with analogous flow manifolds built from conventional tubing and connectors. Our work represents a step towards improved reproducibility in the fabrication of manifolds for flow-based automated methods of analysis, which is especially relevant for interlaboratory analyses.

  7. Automated flow-based anion-exchange method for high-throughput isolation and real-time monitoring of RuBisCO in plant extracts.

    PubMed

    Suárez, Ruth; Miró, Manuel; Cerdà, Víctor; Perdomo, Juan Alejandro; Galmés, Jeroni

    2011-06-15

    In this work, a miniaturized, completely enclosed multisyringe-flow system is proposed for high-throughput purification of RuBisCO from Triticum aestivum extracts. The automated method capitalizes on uptake of the target protein at 4°C onto Q-Sepharose Fast Flow strong anion-exchanger packed in a cylindrical microcolumn (105 × 4 mm), followed by stepwise ionic-strength gradient elution (0-0.8 mol/L NaCl) to eliminate concomitant extract components and retrieve highly purified RuBisCO. The manifold is furnished downstream with a flow-through diode-array UV/vis spectrophotometer for real-time monitoring of the column effluent at the protein-specific wavelength of 280 nm to detect the elution of RuBisCO. Quantitation of RuBisCO and total soluble proteins in the eluate fractions was undertaken using polyacrylamide gel electrophoresis (PAGE) and the spectrophotometric Bradford assay, respectively. A comprehensive investigation was carried out of the effect of distinct concentration gradients on the isolation of RuBisCO, and of the experimental conditions (namely, type of resin, column dimensions and mobile-phase flow rate) upon column capacity and analyte breakthrough. The assembled set-up was used to critically ascertain the efficiency of preliminary batchwise pre-treatments of crude plant extracts (viz., polyethylene glycol (PEG) precipitation, ammonium sulphate precipitation and sucrose gradient centrifugation) in terms of RuBisCO purification and absolute recovery prior to automated anion-exchange column separation. Under the optimum physical and chemical conditions, the flow-through column system is able to accept crude plant extracts and gives RuBisCO purification yields better than 75%, which could be increased to 96 ± 9% with a prior PEG fractionation followed by a sucrose gradient step.

  8. A fully automated method for simultaneous determination of aflatoxins and ochratoxin A in dried fruits by pressurized liquid extraction and online solid-phase extraction cleanup coupled to ultra-high-pressure liquid chromatography-tandem mass spectrometry.

    PubMed

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca

    2015-04-01

    According to current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of highly toxic and widespread mycotoxins, aflatoxins (AFs) and ochratoxin A (OTA), in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE) of the slurried dried fruit with aqueous methanol (30%) at 110 °C, followed by online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AFs and OTA maximum levels imposed by EU regulation for dried fruit intended for direct human consumption. Also, recoveries (83-103%) and repeatability (RSD < 8%, n = 3) meet the performance criteria required by EU regulation for the determination of mycotoxin levels in foodstuffs. The main advantage of the proposed method is the full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results.

  9. Extraction of a group-pair relation: problem-solving relation from web-board documents.

    PubMed

    Pechsiri, Chaveevan; Piriyakul, Rapepun

    2016-01-01

    This paper aims to extract a group-pair relation, a Problem-Solving relation such as a DiseaseSymptom-Treatment relation or a CarProblem-Repair relation, between two event-explanation groups: a problem-concept group (a symptom/CarProblem-concept group) and a solving-concept group (a treatment-concept/repair-concept group), from hospital web-board and car-repair-guru web-board documents. The Problem-Solving relation (particularly the Symptom-Treatment relation), together with its graphical representation, benefits non-professionals by supporting basic problem-solving knowledge. The research addresses three problems: how to identify an EDU (an Elementary Discourse Unit, i.e. a simple sentence) with the event concept of either a problem or a solution; how to determine a problem-concept EDU boundary and a solving-concept EDU boundary as two event-explanation groups; and how to determine the Problem-Solving relation between these two event-explanation groups. We apply word co-occurrence to identify problem-concept and solving-concept EDUs, and machine-learning techniques to determine the problem-concept and solving-concept EDU boundaries. We propose using k-means and Naïve Bayes with clustering features to determine the Problem-Solving relation between the two event-explanation groups. In contrast to previous works, the proposed approach enables group-pair relation extraction with high accuracy.
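    To make the EDU-classification step concrete, here is a minimal Naïve Bayes sketch over toy English EDUs; the actual system works on web-board text with concept dictionaries and co-occurrence features, so this is only a schematic analogue.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy EDUs labeled as problem-concept or solving-concept.
edus = [
    "my car makes a grinding noise when braking",
    "replace the worn brake pads",
    "the engine overheats in heavy traffic",
    "top up the coolant and check the thermostat",
]
labels = ["problem", "solution", "problem", "solution"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(edus, labels)
print(clf.predict(["the radiator fan does not spin"]))  # classify a new EDU
```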

  10. Office automation.

    PubMed

    Arenson, R L

    1986-03-01

    By now, the term "office automation" should have more meaning for those readers who are not intimately familiar with the subject. Not all of the preceding material pertains to every department or practice, but certainly, word processing and simple telephone management are key items. The size and complexity of the organization will dictate the usefulness of electronic mail and calendar management, and the individual radiologist's personal needs and habits will determine the usefulness of the home computer. Perhaps the most important ingredient for success in the office automation arena relates to the ability to integrate information from various systems in a simple and flexible manner. Unfortunately, this is perhaps the one area that most office automation systems have ignored or handled poorly. In the personal computer world, there has been much emphasis recently on integration of packages such as spreadsheet, database management, word processing, graphics, time management, and communications. This same philosophy of integration has been applied to a few office automation systems, but these are generally vendor-specific and do not allow for a mixture of foreign subsystems. During the next few years, it is likely that a few vendors will emerge as dominant in this integrated office automation field and will stress simplicity and flexibility as major components.

  11. Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks.

    PubMed

    Burlina, Philippe M; Joshi, Neil; Pekala, Michael; Pacheco, Katia D; Freund, David E; Bressler, Neil M

    2017-09-28

    Age-related macular degeneration (AMD) affects millions of people throughout the world. The intermediate stage may go undetected, as it typically is asymptomatic. However, the preferred practice patterns for AMD recommend identifying individuals with this stage of the disease to educate how to monitor for the early detection of the choroidal neovascular stage before substantial vision loss has occurred and to consider dietary supplements that might reduce the risk of the disease progressing from the intermediate to the advanced stage. Identification, though, can be time-intensive and requires expertly trained individuals. To develop methods for automatically detecting AMD from fundus images using a novel application of deep learning methods to the automated assessment of these images and to leverage artificial intelligence advances. Deep convolutional neural networks that are explicitly trained for performing automated AMD grading were compared with an alternate deep learning method that used transfer learning and universal features and with a trained clinical grader. Age-related macular degeneration automated detection was applied to a 2-class classification problem in which the task was to distinguish the disease-free/early stages from the referable intermediate/advanced stages. Using several experiments that entailed different data partitioning, the performance of the machine algorithms and human graders in evaluating more than 130 000 images that were deidentified with respect to age, sex, and race/ethnicity from 4613 patients against a gold standard included in the National Institutes of Health Age-Related Eye Disease Study data set was evaluated. Accuracy, receiver operating characteristics and area under the curve, and κ score. The deep convolutional neural network method yielded accuracy that ranged between 88.4% (SD, 0.5%) and 91.6% (SD, 0.1%), the area under the receiver operating characteristic curve was between 0.94 and 0.96, and κ (SD) between 0
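    For readers unfamiliar with the setup, a skeletal PyTorch sketch of the 2-class fundus classification is given below. It is not the authors' network (they used much deeper architectures and transfer learning); it only illustrates the shape of the approach.

```python
import torch
import torch.nn as nn

class FundusCNN(nn.Module):
    """Tiny stand-in CNN mapping an RGB fundus image to 2 classes
    (disease-free/early AMD vs. referable intermediate/advanced AMD)."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = FundusCNN()
logits = model(torch.randn(4, 3, 224, 224))  # a batch of 4 dummy images
print(logits.shape)  # torch.Size([4, 2])
```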

  12. Automated position control of a surface array relative to a liquid microjunction surface sampler

    DOEpatents

    Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James

    2007-11-13

    A system and method utilizes an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables a hands-free formation of the liquid microjunction used to sample solution composition from the surface and for re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.

  13. Three Experiments Examining the Use of Electroencephalogram, Event-Related Potentials, and Heart-Rate Variability for Real-Time Human-Centered Adaptive Automation Design

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Parasuraman, Raja; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    Adaptive automation represents an advanced form of human-centered automation design. This approach provides for real-time, model-based assessment of human-automation interaction, determines whether the human has entered a hazardous state of awareness, and then modulates the task environment to keep the operator in the loop while maintaining an optimal state of task engagement and mental alertness. Because adaptive automation has not matured, numerous challenges remain, including what the criteria are for determining when adaptive aiding and adaptive function allocation should take place. Human factors experts in the area have suggested a number of measures, including the use of psychophysiology. This NASA Technical Paper reports on three experiments that examined the psychophysiological measures of event-related potentials, electroencephalogram, and heart-rate variability for real-time adaptive automation. The results of the experiments confirm the efficacy of these measures for use in both developmental and operational roles in adaptive automation design. The implications of these results and future directions for psychophysiology and human-centered automation design are discussed.

  14. Demonstration and validation of automated agricultural field extraction from multi-temporal Landsat data for the majority of United States harvested cropland

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2014-12-01

    The spatial distribution of agricultural fields is a fundamental description of rural landscapes. The location and extent of fields are important for establishing the area of land utilized for agricultural yield prediction, resource allocation, and economic planning, and may be indicative of the degree of agricultural capital investment, mechanization, and labor intensity. To date, field objects have not been extracted from satellite data over large areas because of computational constraints, the complexity of the extraction task, and because consistently processed data of appropriate resolution have not been available or affordable. A recently published automated methodology to extract agricultural crop fields from weekly 30 m Web Enabled Landsat Data (WELD) time series was refined and applied to 14 states that cover 70% of harvested U.S. cropland (USDA 2012 Census). The methodology was applied to 2010 combined weekly Landsat 5 and 7 WELD data. The field extraction and quantitative validation results are presented for the following 14 states: Iowa, North Dakota, Illinois, Kansas, Minnesota, Nebraska, Texas, South Dakota, Missouri, Indiana, Ohio, Wisconsin, Oklahoma and Michigan (sorted by area of harvested cropland). These states include the top 11 U.S. states by harvested cropland area. Implications and recommendations for systematic application to global-coverage Landsat data are discussed.

  15. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds.

    PubMed

    Dorninger, Peter; Pfeifer, Norbert

    2008-11-17

    Three-dimensional city models are necessary to support numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry, on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly as a composition of a set of planar faces. Hence, it relies on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.

  16. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three-dimensional city models are necessary to support numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry, on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly as a composition of a set of planar faces. Hence, it relies on a reliable 3D segmentation algorithm that detects planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
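    The central assumption above (buildings as compositions of planar faces) rests on reliable plane detection in the point cloud. The authors' segmentation algorithm is not reproduced here; the generic RANSAC plane detector below is a minimal sketch of that building block.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.05, seed=None):
    """Find the dominant plane in an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane normal.x + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    best = (None, None)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:            # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal @ p0
        mask = np.abs(points @ normal + d) < tol
        if mask.sum() > best_mask.sum():
            best_mask, best = mask, (normal, d)
    return best[0], best[1], best_mask

# Repeatedly extracting the dominant plane and removing its inliers gives a
# rough segmentation into the planar faces used for roof modeling.
```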

  17. Automated age-related macular degeneration classification in OCT using unsupervised feature learning

    NASA Astrophysics Data System (ADS)

    Venhuizen, Freerk G.; van Ginneken, Bram; Bloemen, Bart; van Grinsven, Mark J. J. P.; Philipsen, Rick; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2015-03-01

    Age-related macular degeneration (AMD) is a common eye disorder with high prevalence in elderly people. The disease mainly affects the central part of the retina and can ultimately lead to permanent vision loss. Optical coherence tomography (OCT) is becoming the standard imaging modality in the diagnosis of AMD and the assessment of its progression. However, evaluation of the obtained volumetric scan is time-consuming and expensive, and the signs of early AMD are easy to miss. In this paper we propose a classification method to automatically distinguish AMD patients from healthy subjects with high accuracy. The method is based on an unsupervised feature learning approach and processes the complete image without the need for an accurate pre-segmentation of the retina. The method can be divided into two steps: an unsupervised clustering stage that extracts a set of small descriptive image patches from the training data, and a supervised training stage that uses these patches to create a patch-occurrence histogram for every image, on which a random forest classifier is trained. Experiments using 384 volume scans show that the proposed method is capable of identifying AMD patients with high accuracy, obtaining an area under the receiver operating characteristic curve of 0.984. Our method allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
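    A compact sklearn sketch of the two-stage pipeline described above (unsupervised patch clustering, then a random forest on patch-occurrence histograms); the image sizes, patch counts, and data are all placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.image import extract_patches_2d

def sample_patches(img, size=(9, 9), n=100, seed=0):
    return extract_patches_2d(img, size, max_patches=n,
                              random_state=seed).reshape(n, -1)

rng = np.random.default_rng(0)
train_imgs = rng.random((20, 64, 64))   # dummy OCT slices
train_y = rng.integers(0, 2, 20)        # dummy AMD/healthy labels

# Stage 1: learn a dictionary of descriptive patches by k-means clustering.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
kmeans.fit(np.vstack([sample_patches(im) for im in train_imgs]))

# Stage 2: represent each image as a patch-occurrence histogram and train
# a random forest classifier on those histograms.
def histogram(img):
    words = kmeans.predict(sample_patches(img))
    return np.bincount(words, minlength=kmeans.n_clusters) / len(words)

X = np.array([histogram(im) for im in train_imgs])
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_y)
```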

  18. Simultaneous analysis of organochlorinated pesticides (OCPs) and polychlorinated biphenyls (PCBs) from marine samples using automated pressurized liquid extraction (PLE) and Power Prep™ clean-up.

    PubMed

    Helaleh, Murad I H; Al-Rashdan, Amal; Ibtisam, A

    2012-05-30

    An automated pressurized liquid extraction (PLE) method followed by Power Prep™ clean-up was developed for organochlorinated pesticide (OCP) and polychlorinated biphenyl (PCB) analysis in environmental marine samples of fish, squid, bivalves, shells, octopus and shrimp. OCPs and PCBs were simultaneously determined in a single chromatographic run using gas chromatography-mass spectrometry-negative chemical ionization (GC-MS-NCI). About 5 g of each biological marine sample was mixed with anhydrous sodium sulphate and placed in the extraction cell of the PLE system. PLE is controlled by means of a PC using DMS 6000 software. Purification of the extract was accomplished using automated Power Prep™ clean-up with a pre-packed disposable silica column (6 g) supplied by Fluid Management Systems (FMS). All OCPs and PCBs were eluted from the silica column using two types of solvent: 80 mL of hexane and a 50 mL mixture of hexane and dichloromethane (1:1). A wide variety of fish and shellfish were collected from the fish market and analyzed using this method. The total PCB concentrations were 2.53, 0.25, 0.24, 0.24, 0.17 and 1.38 ng g(-1) (w/w) for fish, squid, bivalves, shells, octopus and shrimp, respectively, and the corresponding total OCP concentrations were 30.47, 2.86, 0.92, 10.72, 5.13 and 18.39 ng g(-1) (w/w). Lipids were removed using an SX-3 Bio-Beads gel permeation chromatography (GPC) column. Analytical criteria such as recovery, reproducibility and repeatability were evaluated through a range of biological matrices.

  19. Automated Building Extraction from High-Resolution Satellite Imagery in Urban Areas Using Structural, Contextual, and Spectral Information

    NASA Astrophysics Data System (ADS)

    Jin, Xiaoying; Davis, Curt H.

    2005-12-01

    High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. First, a series of geodesic opening and closing operations are used to build a differential morphological profile (DMP) that provides image structural information. Building hypotheses are generated and verified through shape analysis applied to the DMP. Second, shadows are extracted using the DMP to provide reliable contextual information for hypothesizing the position and size of adjacent buildings. Seed building rectangles are verified and grown on a finely segmented image. Next, bright buildings are extracted using spectral information. The extraction results from the different information sources are combined after independent extraction. A performance evaluation of the building extraction on an urban test site using IKONOS satellite imagery of the City of Columbia, Missouri, is reported. With the combination of structural, contextual, and spectral information, … of the building areas are extracted with a quality percentage of …
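    The differential morphological profile at the core of this approach can be sketched with scikit-image's reconstruction operators, as below; the radii and grayscale input are placeholder assumptions.

```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def opening_by_reconstruction(img, radius):
    # geodesic opening: erode, then reconstruct under the original image
    return reconstruction(erosion(img, disk(radius)), img, method='dilation')

def closing_by_reconstruction(img, radius):
    # geodesic closing: dilate, then reconstruct above the original image
    return reconstruction(dilation(img, disk(radius)), img, method='erosion')

def dmp(img, radii=(2, 4, 8, 16)):
    """Differences between successive openings/closings by reconstruction."""
    img = img.astype(float)
    opens = [img] + [opening_by_reconstruction(img, r) for r in radii]
    closes = [img] + [closing_by_reconstruction(img, r) for r in radii]
    return (np.stack([abs(a - b) for a, b in zip(opens, opens[1:])]),
            np.stack([abs(a - b) for a, b in zip(closes, closes[1:])]))

dmp_open, dmp_close = dmp(np.random.rand(64, 64))  # dummy panchromatic tile
```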

  20. Extraction of gene-disease relations from Medline using domain dictionaries and machine learning.

    PubMed

    Chun, Hong-Woo; Tsuruoka, Yoshimasa; Kim, Jin-Dong; Shiba, Rie; Nagata, Naoki; Hishiki, Teruyoshi; Tsujii, Jun'ichi

    2006-01-01

    We describe a system that extracts disease-gene relations from Medline. We constructed a dictionary for disease and gene names from six public databases and extracted relation candidates by dictionary matching. Since dictionary matching produces a large number of false positives, we developed a method of machine learning-based named entity recognition (NER) to filter out false recognitions of disease/gene names. We found that the performance of relation extraction is heavily dependent upon the performance of NER filtering and that the filtering improves the precision of relation extraction by 26.7% at the cost of a small reduction in recall.
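    A schematic of the dictionary-matching stage that generates relation candidates is shown below (the ML-based NER filter that removes false name recognitions is not shown); the two-entry lexicons are toy stand-ins for the dictionaries compiled from six databases.

```python
import re

gene_dict = {"BRCA1", "TP53"}                             # toy gene lexicon
disease_dict = {"breast cancer", "Li-Fraumeni syndrome"}  # toy disease lexicon

def mentions(sentence, lexicon):
    return [t for t in lexicon
            if re.search(r"\b" + re.escape(t) + r"\b", sentence, re.I)]

def candidate_relations(sentence):
    # Every gene-disease co-occurrence within a sentence becomes a candidate;
    # an NER filter would then discard false gene/disease recognitions.
    return [(g, d) for g in mentions(sentence, gene_dict)
            for d in mentions(sentence, disease_dict)]

print(candidate_relations("Germline TP53 mutations cause Li-Fraumeni syndrome."))
```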

  1. Picogram per liter level determination of estrogens in natural waters and waterworks by a fully automated on-line solid-phase extraction-liquid chromatography-electrospray tandem mass spectrometry method.

    PubMed

    Rodriguez-Mozaz, Sara; Lopez de Alda, Maria J; Barceló, Damià

    2004-12-01

    The present work describes a novel, fully automated method, based on on-line solid-phase extraction-liquid chromatography-electrospray tandem mass spectrometry (SPE-LC-ESI-MS-MS), which allows the unequivocal identification and quantification of the most environmentally relevant estrogens (estradiol, estrone, estriol, estradiol-17-glucuronide, estradiol-17-acetate, estrone-3-sulfate, ethynyl estradiol, diethylstilbestrol) in natural and treated waters at levels well below those of concern (limits of quantification between 0.02 and 1.02 ng/L). The method is highly precise, with relative standard deviations varying between 1.43 and 3.89%, and accurate (recovery percentages >74%). This method was used to track the presence and fate of the target compounds in a waterworks and to evaluate the removal efficiency of the treatment processes applied. Only estrone and estrone-3-sulfate were detected in the river water used as source (at 0.68 and 0.33 ng/L, respectively). After progressive removal through the various treatment steps, none of them were detected in the finished drinking water. In addition to selectivity, sensitivity, repeatability, and automation (up to 15 samples plus 6 calibration solutions and 1 blank can be analyzed unattended), this technique offers fairly high throughput (analysis time per sample is 60 min), low time and solvent consumption, and ease of use.

  2. Comparison of automated multiplexed bead-based ANA screening assay with ELISA for detecting five common anti-extractable nuclear antigens and anti-dsDNA in systemic rheumatic diseases.

    PubMed

    Kim, Yoonjung; Park, Yongjung; Lee, Eun Young; Kim, Hyon-Suk

    2012-01-18

    A newly developed and fully automated Luminex-based assay, the BioPlex™ 2200 system, is able to detect various autoantibodies simultaneously from a single sample. We compared the BioPlex™ 2200 system with ELISA for the detection of six autoantibodies. A total of 127 serum samples from patients with systemic rheumatic diseases were collected and assayed with the BioPlex™ 2200 system (Bio-Rad, USA) and conventional ELISA (INOVA Diagnostics, USA) for 5 anti-extractable nuclear antigens. Additionally, the relative sensitivity of the BioPlex™ 2200 system for detecting anti-dsDNA was evaluated with 79 specimens from SLE patients that were positive for anti-dsDNA by ELISA. The concordance rates between ELISA and the BioPlex ranged from 88.1% for anti-RNP to 95.2% for anti-Scl-70, and the kappa coefficients between the results of the two assays were from 0.48 to 0.67. Among the 79 anti-dsDNA-positive specimens by ELISA, 78 (98.7%) showed positive results for anti-dsDNA by the BioPlex. The BioPlex™ 2200 system showed results comparable to those of conventional ELISA for detecting autoantibodies, and this automated assay can measure multiple autoantibodies concurrently in a single sample. It could be effectively used in clinical laboratories for screening autoimmune diseases.
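    Concordance rates and kappa coefficients of the kind reported above are straightforward to compute from paired qualitative results; the eight toy result pairs below are invented for illustration.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Invented paired positive/negative calls for one autoantibody.
elisa   = ["pos", "neg", "neg", "pos", "neg", "pos", "neg", "neg"]
bioplex = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg"]

print(f"concordance = {accuracy_score(elisa, bioplex):.1%}")
print(f"kappa       = {cohen_kappa_score(elisa, bioplex):.2f}")
```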

  3. Variation in antioxidant and antimicrobial activities in Lantana camara L. flowers in relation to extraction methods.

    PubMed

    Manzoor, Madiha; Anwar, Farooq; Sultana, Bushra; Mushtaq, Muhammad

    2013-01-01

    The present work was designed to appraise how different extraction solvents and techniques affect the extractability of antioxidant and antimicrobial components from Lantana camara (L. camara) flowers. Four extraction solvents (100% methanol, 80% methanol, 100% ethanol and 80% ethanol), coupled with three extraction techniques (stirring, microwave-assisted stirring and ultrasonic-assisted stirring), were employed to isolate extractable components from the flowers of L. camara. The extracts produced were evaluated for their antioxidant and antimicrobial attributes. The yield of extractable components varied over a wide range (4.87-30.00%) in relation to extraction solvent and technique. The extracts contained considerable amounts of total phenolics (8.28-52.34 mg GAE/100 g DW) and total flavonoids (1.24-7.88 mg CE/100 g DW). Furthermore, promising antioxidant activity, in terms of DPPH radical scavenging, inhibition of linoleic acid peroxidation and reducing power, as well as antimicrobial potential of the extracts against the selected bacterial and fungal strains, was recorded. It was concluded that both the extraction solvent and the technique employed affected the antioxidant and antimicrobial attributes of the extracts from L. camara flowers. With few exceptions, methanolic extracts produced by ultrasonic-assisted stirring offered superior activities, followed by microwave-assisted stirring and then stirring. The results advocate the use of appropriate extraction strategies to recover potent antioxidant and antimicrobial agents from the flowers of L. camara for nutraceutical and therapeutic applications.

  4. An Automated Approach to Agricultural Tile Drain Detection and Extraction Utilizing High Resolution Aerial Imagery and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Johansen, Richard A.

    Subsurface drainage from agricultural fields in the Maumee River watershed is suspected to adversely impact water quality and contribute to the formation of harmful algal blooms (HABs) in Lake Erie. In early August of 2014, a HAB developed in the western Lake Erie basin that left over 400,000 people unable to drink their tap water due to the presence of a toxin from the bloom. HAB development in Lake Erie is aided by excess nutrients from agricultural fields, which are transported through subsurface tile and enter the watershed. Compounding the issue, the trend within the Maumee watershed has been toward increasing installation of tile drains in both total extent and density. Due to the immense area of drained fields, there is a need to establish an accurate and effective technique to monitor subsurface farmland tile installations and their associated impacts. This thesis aimed to develop an automated method for identifying subsurface tile locations from high-resolution aerial imagery by applying an object-based image analysis (OBIA) approach in eCognition. This was accomplished through a set of algorithms and image filters that segment and classify image objects by their spectral and geometric characteristics. The algorithms were based on the relative location of image objects and pixels, in order to maximize the robustness and transferability of the final rule-set. They were coupled with convolution and histogram image filters to generate results for a 10 km2 study area located within Clay Township in Ottawa County, Ohio. The eCognition results were compared to tile locations previously collected in an associated project that applied heads-up digitizing of aerial photography to map field tile; the heads-up digitized locations were used as the baseline for the accuracy assessment. The accuracy assessment generated a range of agreement values from 67.20% to 71.20%, and an average

  5. Shoe-String Automation

    SciTech Connect

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  6. Pathology report data extraction from relational database using R, with extraction from reports on melanoma of skin as an example.

    PubMed

    Ye, Jay J

    2016-01-01

    Different methods have been described for data extraction from pathology reports, with varying degrees of success. Here, a technique for extracting data directly from a relational database is described. Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language, extended with the RODBC package, was used to query the pathology information system database. Reports containing the melanoma of skin synoptic report from the past four and a half years were retrieved, and individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to extract the lymph node staging information from subsequent reports on the same patients. In all, 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and the data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When new pN staging information was present in the subsequent reports, 82% (77/94) was retrieved precisely (pN0, pN1, pN2 and pN3); an additional 15% (14/94) was retrieved with some ambiguity (positive, or knowing there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific, multi-report data extraction and analysis. R extended with the RODBC package is a simple and versatile approach well suited to the above tasks. The success or failure of retrieval and extraction depended largely on whether the reports were consistently formatted and whether the contents of the data elements were consistently phrased. This approach can easily be modified and adopted for other pathology information systems that use a relational database for data management.
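    The approach (querying the laboratory database, then pulling data elements out of report text) translates directly to other environments. Below is an analogous Python/pyodbc sketch of the R/RODBC workflow; the DSN, table, and column names are all hypothetical.

```python
import re
import pyodbc  # Python analogue of the R + RODBC connection used in the paper

# Hypothetical connection and schema; real pathology systems differ by vendor.
conn = pyodbc.connect("DSN=PathologyLIS;UID=reader;PWD=secret")
cursor = conn.cursor()
cursor.execute(
    "SELECT case_id, report_text FROM surgical_reports "
    "WHERE report_text LIKE ?",
    "%Melanoma of Skin%Synoptic%",
)

# Pull one data element (Breslow depth) out of each synoptic report.
breslow = re.compile(r"Breslow depth[^0-9]*([\d.]+)\s*mm", re.IGNORECASE)
for case_id, text in cursor.fetchall():
    match = breslow.search(text)
    if match:
        print(case_id, float(match.group(1)))
```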

  7. Extraction of Relations between Entities from Texts by Learning Methods

    DTIC Science & Technology

    2006-12-01

    …use of a large training corpus, necessary to efficiently weight documents and patterns. Prométhée (Morin 1999) incrementally learns a set of… Co-texte et calcul du sens, Claude Guimier (ed.). Frigière J. (2004), Information extraction by learning method, Thales report. Harris Z. S. (1968)… Superiority, IST-055 Specialists Meeting on "Information Fusion for Command Support", Netherlands. Morin E. (1999), Using Lexico-Syntactic Patterns to…

  8. Therapeutic drug monitoring of haloperidol, perphenazine, and zuclopenthixol in serum by a fully automated sequential solid phase extraction followed by high-performance liquid chromatography.

    PubMed

    Angelo, H R; Petersen, A

    2001-04-01

    In Denmark, haloperidol, perphenazine, and zuclopenthixol are among the antipsychotics most frequently requested for therapeutic drug monitoring. Given the number of requests made at the authors' laboratory, the only rational analysis is one that can measure all three drugs simultaneously. The authors therefore developed an automated high-performance liquid chromatography (HPLC) method. Two milliliters of serum, 2.0 mL of 10 mmol/L sodium phosphate buffer (pH 5.5), and 150 μL of internal standard (trifluoperazine) solution were pipetted into HPLC vials and extracted on an ASPEC XL equipped with 1 mL (50 mg) Isolute C2 (EC) extraction columns, with acetonitrile-methanol-ammonium acetate buffer (60:34:6) as the extracting solution. Three hundred fifty microliters was analyzed by HPLC, using a 150 × 4.6-mm S5CN Spherisorb column, a mobile phase of 10 mmol/L ammonium acetate buffer-methanol (1:9), a flow rate of 0.6-1.7 mL/min, and ultraviolet detection at 256 and 245 nm. Reproducibility was 5-12%, and the lower limits of quantitation were 10, 1, and 5 nmol/L (4, 0.4, and 2 ng/mL) for haloperidol, perphenazine, and zuclopenthixol, respectively. The method was found to be sufficiently selective and robust for routine analysis.

  9. Simultaneous analysis of thebaine, 6-MAM and six abused opiates in postmortem fluids and tissues using Zymark automated solid-phase extraction and gas chromatography-mass spectrometry.

    PubMed

    Lewis, R J; Johnson, R D; Hattrup, R A

    2005-08-05

    Opiates are some of the most widely prescribed drugs in America and are often abused. Demonstrating the presence or absence of opiate compounds in postmortem fluids and/or tissues derived from fatal civil aviation accidents can have serious legal consequences and may help determine the cause of impairment and/or death. However, the consumption of poppy seed products can result in a positive opiate drug test. We have developed a simple method for the simultaneous determination of eight opiate compounds from a single extraction. These compounds are hydrocodone, dihydrocodeine, codeine, oxycodone, hydromorphone, 6-monoacetylmorphine, morphine, and thebaine. The inclusion of thebaine is notable, as it is an indicator of poppy seed consumption and may help explain morphine/codeine positives in cases where no opiate use was indicated. The method incorporates a Zymark RapidTrace™ automated solid-phase extraction system, gas chromatography/mass spectrometry, and trimethylsilyl (TMS) and oxime-TMS derivatives. The limits of detection ranged from 0.78 to 12.5 ng/mL. The linear dynamic range for most analytes was 6.25-1600 ng/mL. The extraction efficiencies ranged from 70 to 103%. We applied this method to eight separate aviation fatalities in which opiate compounds had previously been detected.

  10. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors.
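    As a schematic of descriptor-based detection of sparse events, the sketch below scores sliding windows of a grayscale video with a simple motion-energy descriptor and trains a classifier on them; the descriptor and all data are stand-ins, not the four descriptors evaluated in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def motion_energy(window):
    """Stand-in descriptor: statistics of frame-to-frame differences."""
    d = np.abs(np.diff(window.astype(float), axis=0))
    return [d.mean(), d.std(), d.max()]

def window_features(video, size=16, step=8):
    return np.array([motion_energy(video[s:s + size])
                     for s in range(0, len(video) - size, step)])

rng = np.random.default_rng(0)
video = rng.random((400, 32, 32))   # dummy footage: (frames, height, width)
X = window_features(video)
y = rng.integers(0, 2, len(X))      # dummy strike/no-strike window labels

clf = SVC().fit(X, y)               # flags candidate windows for expert review
```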

  11. Automated diagnosis of congestive heart failure using dual tree complex wavelet transform and statistical features extracted from 2 s of ECG signals.

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San

    2017-04-01

    Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual tree complex wavelet transform (DTCWT)-based methodology is proposed for automated identification of ECG signals exhibiting CHF versus normal. In the experiment, we performed a DTCWT on ECG segments of 2 s duration, up to six levels, to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver operating characteristic (ROC), Wilcoxon, t-test and reliefF methods. The ranked features are fed to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method is able to detect CHF patients accurately using only 2 s of ECG signal, thus providing sufficient time for clinicians to further investigate the severity of CHF and its treatment.
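    A minimal sketch of the feature pipeline (DTCWT subband statistics feeding a KNN classifier) is given below, using the open-source dtcwt package; the 256 Hz sampling rate, the choice of three statistics per subband, and the random data are assumptions for illustration only.

```python
import numpy as np
import dtcwt  # open-source DTCWT implementation (pip install dtcwt)
from sklearn.neighbors import KNeighborsClassifier

FS = 256  # assumed sampling rate, so a 2 s segment has 512 samples

def dtcwt_features(segment, nlevels=6):
    """Simple statistics of DTCWT subband magnitudes for one ECG segment."""
    pyramid = dtcwt.Transform1d().forward(segment.reshape(-1, 1), nlevels=nlevels)
    feats = []
    for band in pyramid.highpasses:  # complex coefficients, one array per level
        mag = np.abs(band)
        feats += [mag.mean(), mag.std(), mag.max()]
    return feats

rng = np.random.default_rng(0)
X = np.array([dtcwt_features(rng.standard_normal(2 * FS)) for _ in range(40)])
y = rng.integers(0, 2, 40)  # dummy CHF/normal labels
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```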

  12. Development of an Automated Column Solid-Phase Extraction Cleanup of QuEChERS Extracts, Using a Zirconia-Based Sorbent, for Pesticide Residue Analyses by LC-MS/MS.

    PubMed

    Morris, Bruce D; Schriner, Richard B

    2015-06-03

    A new, automated, high-throughput, mini-column solid-phase extraction (c-SPE) cleanup method for QuEChERS extracts was developed, using a robotic X-Y-Z instrument autosampler, for the analysis of pesticide residues in fruits and vegetables by LC-MS/MS. Removal of avocado matrix and recoveries of 263 pesticides and metabolites were studied using various stationary-phase mixtures, including zirconia-based sorbents, with elution by acetonitrile. These experiments led to the selection of a sorbent mixture consisting of zirconia, C18, and carbon-coated silica that effectively retained avocado matrix but also retained 53 pesticides with <70% recoveries. Addition of MeOH to the elution solvent improved pesticide recoveries from zirconia, as did citrate ions in CEN QuEChERS extracts. Finally, formate buffer in acetonitrile/MeOH (1:1) was required to give >70% recoveries of all 263 pesticides. Analysis of avocado extracts by LC-Q-Orbitrap-MS showed that the developed method removed >90% of di- and triacylglycerols. The method was validated for 269 pesticides (including homologues and metabolites) in avocado and citrus. Spike recoveries were within 70-120%, with ≤20% RSD, for 243 of these analytes in avocado and 254 in citrus when calibrated against solvent-only standards, indicating effective matrix removal and minimal electrospray ionization suppression.

  13. Use of Magnetic Bead Resin and Automated Liquid Handler Extraction Methods to Robotically Isolate Nucleic Acids of Biological Agent Simulates

    DTIC Science & Technology

    2003-09-01

    allows the ABATS to use a CORE system to link the Applied Biosystems ABI Prism® 7900HT thermocycler and the ORIGEN® M8 Analyzer (IGEN, Gaithersburg, MD) to… measures pertaining to nucleic acid extraction of two accepted biological agent simulants in an effort to reduce labor and speed analysis of unknown… sites. As part of the labor reduction effort, we have developed a hybrid protocol for extraction of nucleic acids from environmental samples that can

  14. A simple micro-extraction plate assay for automated LC-MS/MS analysis of human serum 25-hydroxyvitamin D levels.

    PubMed

    Geib, Timon; Meier, Florian; Schorr, Pascal; Lammert, Frank; Stokes, Caroline S; Volmer, Dietrich A

    2015-01-01

    This short application note describes a simple, automated assay for the determination of 25-hydroxyvitamin D (25(OH)D) levels in very small volumes of human serum. It utilizes commercial 96-well micro-extraction plates with commercial 25(OH)D isotope calibration and quality control kits. Separation was achieved using a pentafluorophenyl liquid chromatography column, followed by multiple reaction monitoring-based quantification on an electrospray triple quadrupole mass spectrometer. Emphasis was placed on providing a simple assay that can be rapidly established in non-specialized laboratories within days, without the need for laborious and time-consuming sample preparation steps or advanced calibration and data acquisition routines. The analytical figures of merit obtained from this assay compared well with those of established assays. To demonstrate its applicability, the assay was applied to the analysis of serum samples from patients with chronic liver diseases, and the results were compared to those from a routine clinical immunoassay.

  15. Fully automated online solid phase extraction coupled directly to liquid chromatography-tandem mass spectrometry. Quantification of sulfonamide antibiotics, neutral and acidic pesticides at low concentrations in surface waters.

    PubMed

    Stoob, Krispin; Singer, Heinz P; Goetz, Christian W; Ruff, Matthias; Mueller, Stephan R

    2005-12-02

    A fully automated online solid phase extraction-liquid chromatography-tandem mass spectrometry (SPE-LC-MS/MS) instrumental setup has been developed for the quantification of sulfonamide antibiotics and pesticides in natural water. The direct coupling of an online solid phase extraction cartridge (Oasis HLB) to LC-MS/MS was accomplished using column switching techniques. High sensitivity in the low ng/L range was achieved by large volume injections of 18 mL with a combination of a tri-directional auto-sampler and a dispenser system. This setup allowed high sample throughput with minimal investment costs. Special emphasis was placed on low cross contamination. The chosen approach is suitable for research as well as for monitoring applications. The flexible instrumental setup was successfully optimised for different important groups of bioactive chemicals, resulting in three trace analytical methods for quantification of (i) sulfonamide antibiotics and their acetyl metabolites; (ii) neutral pesticides (triazines, phenylureas, amides, chloracetanilides) and (iii) acidic pesticides (phenoxyacetic acids and triketones). Absolute extraction recoveries from 85 to 112% were obtained for the different analytes. More than 500 samples could be analyzed with one extraction cartridge. The inter-day precision of the method was excellent, as indicated by relative standard deviations between 1 and 6%. High accuracy was achieved by the developed methods, with maximum deviations relative to the spiked amount of 8-15% for the different analytes. Detection limits for various environmental samples were between 0.5 and 5 ng/L. Matrix-induced ion suppression was in general smaller than 25%. The performance of the online methods was demonstrated with measurements of the concentration dynamics of sulfonamide antibiotics and pesticides in a small creek during rainfall events.

  16. Fully automated analysis of four tobacco-specific N-nitrosamines in mainstream cigarette smoke using two-dimensional online solid phase extraction combined with liquid chromatography-tandem mass spectrometry.

    PubMed

    Zhang, Jie; Bai, Ruoshi; Yi, Xiaoli; Yang, Zhendong; Liu, Xingyu; Zhou, Jun; Liang, Wei

    2016-01-01

    A fully automated method for the detection of four tobacco-specific nitrosamines (TSNAs) in mainstream cigarette smoke (MSS) has been developed. The method is based on two-dimensional online solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE/LC-MS/MS). The two-dimensional SPE utilizes two cartridges with different extraction mechanisms to clean up interferences of different polarities and thereby minimize sample matrix effects on each analyte. Chromatographic separation was achieved using a UPLC C18 reversed phase analytical column. Under the optimum online SPE/LC-MS/MS conditions, N'-nitrosonornicotine (NNN), N'-nitrosoanatabine (NAT), N'-nitrosoanabasine (NAB), and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) were baseline separated with good peak shapes. This method appears to be the most sensitive yet reported for the determination of TSNAs in mainstream cigarette smoke. The limits of quantification for NNN, NNK, NAT and NAB reached 6.0, 1.0, 3.0 and 0.6 pg/cig, respectively, well below the lowest levels of TSNAs in the MSS of current commercial cigarettes. The accuracy of the measurement of the four TSNAs was from 92.8 to 107.3%. The relative standard deviations of intra- and inter-day analysis were less than 5.4% and 7.5%, respectively. The main advantages of the method are fairly high sensitivity, selectivity and accuracy, minimal sample pre-treatment, full automation, and high throughput. As part of the validation procedure, the developed method was applied to evaluate TSNA yields for 27 top-selling commercial cigarettes in China.

  17. Automation and robotics and related technology issues for Space Station customer servicing

    NASA Technical Reports Server (NTRS)

    Cline, Helmut P.

    1987-01-01

    Several flight servicing support elements are discussed within the context of the Space Station. Particular attention is given to the servicing facility, the mobile servicing center, and the flight telerobotic servicer (FTS). The role that automation and robotics can play in the design and operation of each of these elements is discussed. It is noted that the FTS, which is currently being developed by NASA, will evolve to increasing levels of autonomy to allow for the virtual elimination of routine EVA. Some of the features of the FTS will probably be: dual manipulator arms having reach and dexterity roughly equivalent to that of an EVA-suited astronaut, force reflection capability allowing efficient teleoperation, and capability of operating from a variety of support systems.

  19. Automated suppression of sample-related artifacts in Fluorescence Correlation Spectroscopy.

    PubMed

    Ries, Jonas; Bayer, Mathias; Csúcs, Gábor; Dirkx, Ronald; Solimena, Michele; Ewers, Helge; Schwille, Petra

    2010-05-24

    Fluorescence Correlation Spectroscopy (FCS) in cells often suffers from artifacts caused by bright aggregates or vesicles, depletion of fluorophores or bleaching of a fluorescent background. The common practice of manually discarding distorted curves is time consuming and subjective. Here we demonstrate the feasibility of automated FCS data analysis with efficient rejection of corrupted parts of the signal. As test systems we use a solution of fluorescent molecules, contaminated with bright fluorescent beads, as well as cells expressing a fluorescent protein (ICA512-EGFP), which partitions into bright secretory granules. This approach improves the accuracy of FCS measurements in biological samples, extends its applicability to especially challenging systems and greatly simplifies and accelerates the data analysis.
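
    A minimal sketch of the general idea, assuming a simple segment-and-reject scheme: the intensity trace is split into short segments, segments whose mean intensity deviates strongly from the median (for example, a bright aggregate crossing the focus) are discarded, and the correlation curves of the surviving segments are averaged. The segment count and rejection threshold are illustrative, not the published settings.

      import numpy as np

      def autocorr(x):
          """Normalized autocorrelation of a 1-D intensity trace."""
          x = x - x.mean()
          acf = np.correlate(x, x, mode="full")[len(x) - 1:]
          return acf / acf[0]

      def robust_fcs(trace, n_seg=50, k=3.0):
          """Reject outlier segments by a median/MAD rule, then average
          the autocorrelation curves of the surviving segments."""
          seg_len = len(trace) // n_seg
          segs = trace[: seg_len * n_seg].reshape(n_seg, seg_len)
          means = segs.mean(axis=1)
          med = np.median(means)
          mad = np.median(np.abs(means - med)) + 1e-12
          keep = segs[np.abs(means - med) / mad < k]
          return np.mean([autocorr(s) for s in keep], axis=0)

      rng = np.random.default_rng(0)
      trace = rng.poisson(100, 100_000).astype(float)
      trace[40_000:40_500] += 500.0     # simulated bright aggregate
      curve = robust_fcs(trace)         # the contaminated segment is dropped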

  20. Automated tissue m-FISH analysis workstation for identification of clonally related cells

    NASA Astrophysics Data System (ADS)

    Dubrowski, Piotr; Lam, Wan; Ling, Victor; Lam, Stephen; MacAulay, Calum

    2008-02-01

    We have developed an automated high-throughput multi-colour fluorescence in-situ hybridization (FISH) scanning system for examining 5-10 μm thick Non-Small Cell Lung Cancer (NSCLC) tissue specimens and analyzing their FISH spot signals, first at the individual cell level and then as clonal populations using cell-cell architecture (spatial distributions). Using FISH probes targeting genomic areas deemed significant to chemotherapy resistance, we aim to identify clonal subpopulations of cells in tissue samples likely to be resistant to cis-platinum/vinorelbine chemotherapy. The scanning system performs automatic image acquisition, cell nuclei segmentation, and spot counting, and measures the spatial distribution and connectivity of cells with specific genetic profiles across the entire section, using architectural tools to provide the scoring system.

  1. Differential genetic regulation of motor activity and anxiety-related behaviors in mice using an automated home cage task.

    PubMed

    Kas, Martien J H; de Mooij-van Malsen, Annetrude J G; Olivier, Berend; Spruijt, Berry M; van Ree, Jan M

    2008-08-01

    Traditional behavioral tests, such as the open field test, measure an animal's responsiveness to a novel environment. However, it is generally difficult to assess whether the behavioral response obtained from these tests relates to the expression level of motor activity and/or to avoidance of anxiogenic areas. Here, an automated home cage environment for mice was designed to obtain independent measures of motor activity levels and of sheltered feeding preference during three consecutive days. Chronic treatment with the anxiolytic drug chlordiazepoxide (5 and 10 mg/kg/day) in C57BL/6J mice reduced sheltered feeding preference without altering motor activity levels. Furthermore, two distinct chromosome substitution strains, derived from C57BL/6J (host strain) and A/J (donor strain) inbred strains, expressed either increased sheltering preference in females (chromosome 15) or reduced motor activity levels in females and males (chromosome 1) when compared to C57BL/6J. Longitudinal behavioral monitoring revealed that these phenotypic differences were maintained after adaptation to the home cage. Thus, by using new automated behavioral phenotyping approaches, behavior can be dissociated into distinct behavioral domains (e.g., anxiety-related and motor activity domains) with different underlying genetic origins and pharmacological responsiveness.

  2. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes

    PubMed Central

    Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no “gold standard” for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study. PMID:27104353

  3. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes.

    PubMed

    Yamagishi, Junya; Sato, Yukuto; Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotics method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then expanded to 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of 16S rRNA following species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Despite this, differences in the efficiency of DNA extraction for various bacterial species were observed among the three methods. Based on these findings, there is no "gold standard" for DNA extraction. In future, we suggest that the DNA extraction method should be selected on a case-by-case basis considering the aims and specimens of the study.

  4. Sieve-based relation extraction of gene regulatory networks from biological literature.

    PubMed

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To capture them in an explicit, computer readable format, these relations were at first curated manually into databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for the extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network of the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract a different relationship type. Following the shared task, we conducted additional analysis using different system settings, reducing the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher extraction accuracy. Analysis of distances between different mention types in the text shows that our choice of transforming the data into skip-mention sequences is appropriate.
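
    The slot error rate used above can be computed directly from the predicted and reference relation sets. The sketch below treats a substituted relation as one missed reference slot plus one spurious prediction, a simplification of the full metric; the example triples are invented.

      def slot_error_rate(predicted, reference):
          """(deletions + insertions) / |reference|, over relation triples."""
          predicted, reference = set(predicted), set(reference)
          deletions = len(reference - predicted)    # missed reference slots
          insertions = len(predicted - reference)   # spurious predictions
          return (deletions + insertions) / len(reference)

      ref = {("sigF", "activates", "spoIIR"), ("sigE", "activates", "spoIIID")}
      pred = {("sigF", "activates", "spoIIR"), ("sigE", "represses", "spoIIID")}
      print(slot_error_rate(pred, ref))  # 1.0: one miss plus one spurious triple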

  5. Sieve-based relation extraction of gene regulatory networks from biological literature

    PubMed Central

    2015-01-01

    Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To capture them in an explicit, computer readable format, these relations were at first curated manually into databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for the extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network of the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract a different relationship type. Following the shared task, we conducted additional analysis using different system settings, reducing the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher extraction accuracy. Analysis of distances between different mention types in the text shows that our choice of transforming the data into skip-mention sequences is appropriate.

  6. Comparative evaluation of in-house manual, and commercial semi-automated and automated DNA extraction platforms in the sample preparation of human stool specimens for a Salmonella enterica 5'-nuclease assay.

    PubMed

    Schuurman, Tim; de Boer, Richard; Patty, Rachèl; Kooistra-Smid, Mirjam; van Zwet, Anton

    2007-12-01

    In the present study, three methods (NucliSens miniMAG [bioMérieux], MagNA Pure DNA Isolation Kit III Bacteria/Fungi [Roche], and a silica-guanidiniumthiocyanate [Si-GuSCN-F] procedure) for extracting DNA from stool specimens were compared with regard to analytical performance (relative DNA recovery and downstream real-time PCR amplification of Salmonella enterica DNA), stability of the extracted DNA, hands-on time (HOT), total processing time (TPT), and costs. The Si-GuSCN-F procedure showed the highest analytical performance (relative recovery of 99%, S. enterica real-time PCR sensitivity of 91%) at the lowest associated costs per extraction (€4.28). However, this method did require the longest HOT (144 min) and subsequent TPT (176 min) when processing 24 extractions. Both miniMAG and MagNA Pure extraction showed similar performances at first (relative recoveries of 57% and 52%, S. enterica real-time PCR sensitivity of 85%). However, when differences in the observed Ct values after real-time PCR were taken into account, MagNA Pure resulted in a significant increase in Ct value compared to both miniMAG and Si-GuSCN-F (on average +1.26 and +1.43 cycles). With regard to inhibition, all methods showed relatively low inhibition rates (<4%), with miniMAG providing the lowest rate (0.7%). Extracted DNA was stable for at least 1 year for all methods. HOT was lowest for MagNA Pure (60 min) and TPT was shortest for miniMAG (121 min). Costs, finally, were €4.28 for Si-GuSCN-F, €6.69 for MagNA Pure and €9.57 for miniMAG.

  7. A METHOD FOR AUTOMATED ANALYSIS OF 10 ML WATER SAMPLES CONTAINING ACIDIC, BASIC, AND NEUTRAL SEMIVOLATILE COMPOUNDS LISTED IN USEPA METHOD 8270 BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GAS CHROMATOGRAPHY/MASS SPECTROMETRY

    EPA Science Inventory

    Data is presented showing the progress made towards the development of a new automated system combining solid phase extraction (SPE) with gas chromatography/mass spectrometry for the single run analysis of water samples containing a broad range of acid, base and neutral compounds...

  9. Screening of drugs in equine plasma using automated on-line solid-phase extraction coupled with liquid chromatography-tandem mass spectrometry.

    PubMed

    Kwok, W H; Leung, David K K; Leung, Gary N W; Wan, Terence S M; Wong, Colton H F; Wong, Jenny K Y

    2010-05-07

    A rapid liquid chromatography-tandem mass spectrometry (LC-MS-MS) method was developed for the simultaneous screening of 19 drugs of different classes in equine plasma using automated on-line solid-phase extraction (SPE) coupled with a triple quadrupole mass spectrometer. Plasma samples were first protein precipitated using acetonitrile. After centrifugation, the supernatant was directly injected into the on-line SPE system and analysed by triple quadrupole LC-MS-MS in positive electrospray ionisation (+ESI) mode with the selected reaction monitoring (SRM) scan function. On-line extraction and chromatographic separation of the targeted drugs were performed using, respectively, a polymeric extraction column (2 cm L x 2.1 mm ID, 25 µm particle size) and a reversed-phase C18 LC column (3 cm L x 2.1 mm ID, 3 µm particle size) with gradient elution to provide fast analysis times. The overall instrument turnaround time was 9.5 min, inclusive of post-run and equilibration time. Plasma samples fortified with the 19 targeted drugs, including narcotic analgesics, local anaesthetics, antipsychotics, bronchodilators, mucolytics, corticosteroids, sedatives and tranquillisers, at sub-parts per billion (ppb) to low parts per trillion (ppt) levels could be consistently detected. No significant matrix interference was observed at the expected retention times of the targeted ion transitions. Over 70% of the drugs studied gave detection limits at or below 100 pg/mL, with some detection limits reaching down to 19 pg/mL. The method was validated for extraction recovery, precision and sensitivity, and a blockage study was also carried out. This method is used regularly in the authors' laboratory to screen for the presence of targeted drugs in pre-race plasma samples from racehorses.

  10. MICAS: a fully automated web server for microsatellite extraction and analysis from prokaryote and viral genomic sequences.

    PubMed

    Sreenu, Vattipally B; Ranjitkumar, Gundu; Swaminathan, Sugavanam; Priya, Sasidharan; Bose, Buddhaditta; Pavan, Mogili N; Thanu, Geeta; Nagaraju, Javaregowda; Nagarajaram, Hampapathalu A

    2003-01-01

    MICAS is a web server for extracting microsatellite information from completely sequenced prokaryote and viral genomes, or user-submitted sequences. This server provides an integrated platform for MICdb (database of prokaryote and viral microsatellites), W-SSRF (simple sequence repeat finding program) and Autoprimer (primer design software). MICAS, through dynamic HTML page generation, helps in the systematic extraction of microsatellite information from selected genomes hosted on MICdb or from user-submitted sequences. Further, it assists in the design of primers with the help of Autoprimer, for sequences containing selected microsatellite tracts.
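
    A toy repeat finder in the spirit of the server's SSR search: scan for short motifs (1-6 bp) tandemly repeated past a minimum tract length. This regex approach is a compact illustration, not the W-SSRF algorithm; it also reports compound motifs at several unit sizes, which a real tool would collapse.

      import re

      def find_ssrs(seq, min_unit=1, max_unit=6, min_len=12):
          """Return (start, motif, repeat_count) for simple tandem repeats."""
          hits = []
          for u in range(min_unit, max_unit + 1):
              for m in re.finditer(r"([ACGT]{%d})\1+" % u, seq):
                  if len(m.group(0)) >= min_len:
                      hits.append((m.start(), m.group(1), len(m.group(0)) // u))
          return sorted(hits)

      print(find_ssrs("GGATCACACACACACACAGGTTTAAGCAGCAGCAGCAGCAT"))
      # reports a (CA)n tract at position 4 and an (AGC)n tract near the end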

  11. Comparison of automated nucleic acid extraction methods for the detection of cytomegalovirus DNA in fluids and tissues

    PubMed Central

    Waggoner, Jesse J.

    2014-01-01

    Testing for cytomegalovirus (CMV) DNA is increasingly being used for specimen types other than plasma or whole blood. However, few studies have investigated the performance of different nucleic acid extraction protocols in such specimens. In this study, the Cell-free 1000 and Pathogen Complex 400 extraction protocols on the QIAsymphony Sample Processing (SP) system were compared using bronchoalveolar lavage fluid (BAL), tissue samples, and urine. The QIAsymphony Assay Set-up (AS) system was used to assemble reactions using artus CMV PCR reagents, and amplification was carried out on the Rotor-Gene Q. Samples from 93 patients previously tested for CMV DNA and negative samples spiked with CMV AD-169 were used to evaluate assay performance. The Pathogen Complex 400 protocol yielded the following results: BAL, sensitivity 100% (33/33), specificity 87% (20/23); tissue, sensitivity 100% (25/25), specificity 100% (20/20); urine, sensitivity 100% (21/21), specificity 100% (20/20). Cell-free 1000 extraction gave comparable results for BAL and tissue; however, for urine, the sensitivity was 86% (18/21) and specimen quantitation was inaccurate. Comparative studies of different extraction protocols and DNA detection methods in body fluids and tissues are needed, as assays optimized for blood or plasma will not necessarily perform well on other specimen types. PMID:24765569

  12. A Customized Attention-Based Long Short-Term Memory Network for Distant Supervised Relation Extraction.

    PubMed

    He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai

    2017-04-14

    Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled training corpus may contain many false-positive data, which hurt the performance of relation extraction. Moreover, traditional feature-based distant supervised approaches rely on manually designed features produced with natural language processing tools, which can also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representations for relation extraction without manually designed features, and it utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
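
    A minimal numpy sketch of the word-level attention step, under the common parameterisation score = v·tanh(Wh): each recurrent hidden state receives a scalar score, the scores are softmax-normalised, and the sentence representation is the weighted sum. All dimensions and weights below are random stand-ins; the paper's exact parameterisation may differ.

      import numpy as np

      rng = np.random.default_rng(1)
      T, H = 8, 16                       # sentence length, hidden size
      hidden = rng.normal(size=(T, H))   # stand-in for LSTM outputs
      W = rng.normal(size=(H, H))
      v = rng.normal(size=H)

      scores = np.tanh(hidden @ W) @ v   # one scalar score per word
      alpha = np.exp(scores - scores.max())
      alpha /= alpha.sum()               # softmax attention weights
      sentence_vec = alpha @ hidden      # attention-weighted representation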

  13. Simultaneous determination of the inhibitory potency of herbal extracts on the activity of six major cytochrome P450 enzymes using liquid chromatography/mass spectrometry and automated online extraction.

    PubMed

    Unger, Matthias; Frank, Andreas

    2004-01-01

    Here we describe a liquid chromatography/mass spectrometry (LC/MS) method with automated online extraction (LC/LC/MS) to simultaneously determine the in vitro inhibitory potency of herbal extracts on six major human drug-metabolising cytochrome P450 enzymes. Substrates were incubated with a commercially available mixture of CYP1A2/2C8/2C9/2C19/2D6 and 3A4 from baculovirus-infected insect cells and the resulting metabolites were quantified with LC/LC/MS using electrospray ionisation in the selected ion monitoring mode. Consistent inhibitory activities were obtained for known inhibitors and plant extracts using the enzyme/substrate cocktail and the individual enzymes/substrates. Popular herbal remedies including devil's claw root (Harpagophytum procumbens), feverfew herb (Tanacetum parthenium), fo-ti root (Polygonum multiflorum), kava-kava root (Piper methysticum), peppermint oil (Mentha piperita), eucalyptus oil (Eucalyptus globulus), red clover blossom (Trifolium pratense) and grapefruit juice (GJ; Citrus paradisi) could be identified as inhibitors of the applied CYP enzymes with IC(50) values between 20 and 1000 microg/mL. Copyright 2004 John Wiley & Sons, Ltd.

  14. Fully automated determination of 74 pharmaceuticals in environmental and waste waters by online solid phase extraction-liquid chromatography-electrospray-tandem mass spectrometry.

    PubMed

    López-Serna, Rebeca; Pérez, Sandra; Ginebreda, Antoni; Petrović, Mira; Barceló, Damià

    2010-12-15

    The present work describes the development of a fully automated method, based on on-line solid-phase extraction (SPE)-liquid chromatography-electrospray-tandem mass spectrometry (LC-MS-MS), for the determination of 74 pharmaceuticals in environmental waters (surface water and groundwater) as well as sewage waters. On-line SPE is performed by passing 2.5 mL of the water sample through a HySphere Resin GP cartridge. For unequivocal identification and confirmation, two selected reaction monitoring (SRM) transitions are monitored per compound, thus achieving four identification points. Quantification is performed by the internal standard approach, indispensable to correct for losses during the solid phase extraction as well as for matrix effects. The main advantages of the method developed are high sensitivity (limits of detection in the low ng L(-1) range), selectivity due to the use of tandem mass spectrometry, and reliability due to the use of 51 surrogates and minimum sample manipulation. As a part of the validation procedure, the method developed has been applied to the analysis of various environmental and sewage samples from a Spanish river and a sewage treatment plant. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Automated evaluation of electronic discharge notes to assess quality of care for cardiovascular diseases using Medical Language Extraction and Encoding System (MedLEE)

    PubMed Central

    Lin, Jou-Wei; Yang, Chen-Wei

    2010-01-01

    The objective of this study was to develop and validate an automated acquisition system to assess quality of care (QC) measures for cardiovascular diseases. This system, combining searching and retrieval algorithms, was designed to extract QC measures from electronic discharge notes and to estimate the attainment rates relative to the current standards of care. It was developed on patients with ST-segment elevation myocardial infarction and tested on patients with unstable angina/non-ST-segment elevation myocardial infarction, two diseases sharing almost the same QC measures. The system was able to reach a reasonable agreement (κ value) with medical experts, from 0.65 (early reperfusion rate) to 0.97 (β-blockers and lipid-lowering agents before discharge), for different QC measures in the test set, and was then applied to evaluate QC in the patients who underwent coronary artery bypass grafting surgery. The result has validated a new tool to reliably extract QC measures for cardiovascular diseases. PMID:20442141
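
    Agreement figures like the κ values above come from a simple observed-versus-chance calculation. A small sketch with made-up 2x2 counts (system versus expert, per QC measure):

      def cohens_kappa(both_yes, sys_only, expert_only, both_no):
          """Cohen's kappa from a 2x2 agreement table of chart counts."""
          n = both_yes + sys_only + expert_only + both_no
          po = (both_yes + both_no) / n                    # observed agreement
          p_yes = ((both_yes + sys_only) / n) * ((both_yes + expert_only) / n)
          p_no = ((expert_only + both_no) / n) * ((sys_only + both_no) / n)
          pe = p_yes + p_no                                # chance agreement
          return (po - pe) / (1 - pe)

      print(round(cohens_kappa(90, 5, 5, 100), 2))  # 0.9, near the upper range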

  16. High-throughput, automated extraction of DNA and RNA from clinical samples using TruTip technology on common liquid handling robots.

    PubMed

    Holmberg, Rebecca C; Gindlesperger, Alissa; Stokes, Tinsley; Brady, Dane; Thakore, Nitu; Belgrader, Philip; Cooney, Christopher G; Chandler, Darrell P

    2013-06-11

    TruTip is a simple nucleic acid extraction technology whereby a porous, monolithic binding matrix is inserted into a pipette tip. The geometry of the monolith can be adapted for specific pipette tips ranging in volume from 1.0 to 5.0 ml. The large porosity of the monolith enables viscous or complex samples to readily pass through it with minimal fluidic backpressure. Bi-directional flow maximizes residence time between the monolith and sample, and enables large sample volumes to be processed within a single TruTip. The fundamental steps, irrespective of sample volume or TruTip geometry, include cell lysis, nucleic acid binding to the inner pores of the TruTip monolith, washing away unbound sample components and lysis buffers, and eluting purified and concentrated nucleic acids into an appropriate buffer. The attributes and adaptability of TruTip are demonstrated in three automated clinical sample processing protocols using an Eppendorf epMotion 5070, Hamilton STAR and STARplus liquid handling robots, including RNA isolation from nasopharyngeal aspirate, genomic DNA isolation from whole blood, and fetal DNA extraction and enrichment from large volumes of maternal plasma (respectively).

  17. High-throughput, Automated Extraction of DNA and RNA from Clinical Samples using TruTip Technology on Common Liquid Handling Robots

    PubMed Central

    Holmberg, Rebecca C.; Gindlesperger, Alissa; Stokes, Tinsley; Brady, Dane; Thakore, Nitu; Belgrader, Philip; Cooney, Christopher G.; Chandler, Darrell P.

    2013-01-01

    TruTip is a simple nucleic acid extraction technology whereby a porous, monolithic binding matrix is inserted into a pipette tip. The geometry of the monolith can be adapted for specific pipette tips ranging in volume from 1.0 to 5.0 ml. The large porosity of the monolith enables viscous or complex samples to readily pass through it with minimal fluidic backpressure. Bi-directional flow maximizes residence time between the monolith and sample, and enables large sample volumes to be processed within a single TruTip. The fundamental steps, irrespective of sample volume or TruTip geometry, include cell lysis, nucleic acid binding to the inner pores of the TruTip monolith, washing away unbound sample components and lysis buffers, and eluting purified and concentrated nucleic acids into an appropriate buffer. The attributes and adaptability of TruTip are demonstrated in three automated clinical sample processing protocols using an Eppendorf epMotion 5070, Hamilton STAR and STARplus liquid handling robots, including RNA isolation from nasopharyngeal aspirate, genomic DNA isolation from whole blood, and fetal DNA extraction and enrichment from large volumes of maternal plasma (respectively). PMID:23793016

  18. Automated in-syringe single-drop head-space micro-extraction applied to the determination of ethanol in wine samples.

    PubMed

    Srámková, Ivana; Horstkotte, Burkhard; Solich, Petr; Sklenářová, Hana

    2014-05-30

    A novel approach of head-space single-drop micro-extraction applied to the determination of ethanol in wine is presented. For the first time, the syringe of an automated syringe pump was used as an extraction chamber of adaptable size for a volatile analyte. This approach made it possible to apply negative pressure during the enrichment step, which favored evaporation of the analyte. By placing a slowly spinning magnetic stirring bar inside the syringe, effective syringe cleaning as well as mixing of the sample with buffer solution to suppress the interference of acetic acid was achieved. Ethanol determination was based on the reduction of a single drop of 3 mmol L(-1) potassium dichromate dissolved in 8 mol L(-1) sulfuric acid. The drop was positioned in the syringe inlet in the head-space above the sample, with posterior spectrophotometric quantification. The entire procedure was carried out automatically using a simple sequential injection analyzer system. One analysis required less than 5 min including the washing step. A limit of detection of 0.025% (v/v) of ethanol and an average repeatability of less than 5.0% RSD were achieved. The consumption of dichromate reagent, buffer, and sample per analysis was only 20 μL, 200 μL, and 1 mL, respectively. The results of real sample analysis did not differ significantly from those obtained with the reference gas chromatography method. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Semi-automated solid phase extraction method for the mass spectrometric quantification of 12 specific metabolites of organophosphorus pesticides, synthetic pyrethroids, and select herbicides in human urine.

    PubMed

    Davis, Mark D; Wade, Erin L; Restrepo, Paula R; Roman-Esteva, William; Bravo, Roberto; Kuklenyik, Peter; Calafat, Antonia M

    2013-06-15

    Organophosphate and pyrethroid insecticides and phenoxyacetic acid herbicides represent important classes of pesticides applied in commercial and residential settings. Interest in assessing the extent of human exposure to these pesticides exists because of their widespread use and their potential adverse health effects. An analytical method for measuring 12 biomarkers of several of these pesticides in urine has been developed. The target analytes were extracted from one milliliter of urine by a semi-automated solid phase extraction technique, separated from each other and from other urinary biomolecules by reversed-phase high performance liquid chromatography, and detected using tandem mass spectrometry with isotope dilution quantitation. This method can be used to measure all the target analytes in one injection, with repeatability and detection limits similar to those of previous methods that required more than one injection. Each step of the procedure was optimized to produce a robust, reproducible, accurate, precise and efficient method. The required selectivity and sensitivity for trace-level analysis (e.g., limits of detection below 0.5 ng/mL) was achieved using a narrow diameter analytical column, higher than unit mass resolution for certain analytes, and stable isotope labeled internal standards. The method was applied to the analysis of 55 samples collected from adult anonymous donors with no known exposure to the target pesticides. This efficient and cost-effective method is adequate to handle the large number of samples required for national biomonitoring surveys. Published by Elsevier B.V.

  20. Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention

    NASA Astrophysics Data System (ADS)

    Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo

    Approaches for extracting related words (terms) by co-occurrence sometimes work poorly. Two words frequently co-occurring in the same documents are considered related; however, they may not be related at all, sharing neither common meanings nor similar semantics. We address this problem by considering the page designer's intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually make correlative hyperlinks appear in a close zone on the browser. We developed a browser-based crawler to collect “geographically” near hyperlinks, and then, by clustering these hyperlinks based on their pixel coordinates, we extract related words which well reflect the designer's intention. Experimental results show that our method can represent the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method can obtain related words with high average precision.
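
    A compact sketch of the core idea, assuming a simple pixel-distance threshold in place of the paper's clustering procedure: hyperlinks rendered close together on the page are grouped, and each group's anchor texts become candidate related words.

      def cluster_anchors(anchors, max_dist=40):
          """anchors: list of (text, x, y) rendered pixel coordinates."""
          clusters = []
          for text, x, y in sorted(anchors, key=lambda a: (a[2], a[1])):
              for c in clusters:
                  # join a cluster if any member is within the pixel threshold
                  if any(abs(x - cx) + abs(y - cy) <= max_dist for _, cx, cy in c):
                      c.append((text, x, y))
                      break
              else:
                  clusters.append([(text, x, y)])
          return [[t for t, _, _ in c] for c in clusters]

      nav = [("Home", 10, 10), ("About", 80, 10)]
      news = [("typhoon", 15, 300), ("earthquake", 15, 330), ("flood", 15, 360)]
      print(cluster_anchors(nav + news))
      # [['Home'], ['About'], ['typhoon', 'earthquake', 'flood']]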

  1. Extracting Topological Relations Between Indoor Spaces from Point Clouds

    NASA Astrophysics Data System (ADS)

    Tran, H.; Khoshelham, K.; Kealy, A.; Díaz-Vilariño, L.

    2017-09-01

    3D models of indoor environments are essential for many application domains such as navigation guidance, emergency management and a range of indoor location-based services. The principal components defined in different BIM standards contain not only building elements, such as floors, walls and doors, but also navigable spaces and their topological relations, which are essential for path planning and navigation. We present an approach to automatically reconstruct topological relations between navigable spaces from point clouds. Three types of topological relations, namely containment, adjacency and connectivity of the spaces are modelled. The results of initial experiments demonstrate the potential of the method in supporting indoor navigation.
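
    As a toy illustration of the three relation types, consider axis-aligned 2D footprints (xmin, ymin, xmax, ymax); real spaces are volumetric point-cloud segments, and the door-based connectivity test below is a simplifying assumption for the sketch.

      def contains(a, b):
          """a fully contains b."""
          return a[0] <= b[0] and a[1] <= b[1] and b[2] <= a[2] and b[3] <= a[3]

      def gap(a, b, axis):
          """Separation along one axis; negative values mean overlap."""
          lo, hi = axis, axis + 2
          return max(a[lo], b[lo]) - min(a[hi], b[hi])

      def adjacent(a, b, tol=0.05):
          """Boundaries touch (within tol) without one containing the other."""
          return gap(a, b, 0) <= tol and gap(a, b, 1) <= tol and not contains(a, b)

      def connected(a, b, doors, tol=0.05):
          """Adjacent spaces that also share a door opening."""
          return adjacent(a, b, tol) and any(
              gap(a, d, 0) <= tol and gap(b, d, 0) <= tol and
              gap(a, d, 1) <= tol and gap(b, d, 1) <= tol for d in doors)

      room1, room2 = (0, 0, 5, 4), (5, 0, 9, 4)
      door = (4.9, 1.5, 5.1, 2.5)
      print(adjacent(room1, room2), connected(room1, room2, [door]))  # True True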

  2. Automated extraction of lysergic acid diethylamide (LSD) and N-demethyl-LSD from blood, serum, plasma, and urine samples using the Zymark RapidTrace with LC/MS/MS confirmation.

    PubMed

    de Kanel, J; Vickery, W E; Waldner, B; Monahan, R M; Diamond, F X

    1998-05-01

    A forensic procedure for the quantitative confirmation of lysergic acid diethylamide (LSD) and the qualitative confirmation of its metabolite, N-demethyl-LSD, in blood, serum, plasma, and urine samples is presented. The Zymark RapidTrace was used to perform fully automated solid-phase extractions of all specimen types. After extract evaporation, confirmations were performed using liquid chromatography (LC) followed by positive electrospray ionization (ESI+) mass spectrometry/mass spectrometry (MS/MS) without derivatization. Quantitation of LSD was accomplished using LSD-d3 as an internal standard. The limit of quantitation (LOQ) for LSD was 0.05 ng/mL. The limit of detection (LOD) for both LSD and N-demethyl-LSD was 0.025 ng/mL. The recovery of LSD was greater than 95% at levels of 0.1 ng/mL and 2.0 ng/mL. For LSD at 1.0 ng/mL, the within-run and between-run (different day) relative standard deviation (RSD) was 2.2% and 4.4%, respectively.

  3. Automated and sensitive determination of four anabolic androgenic steroids in urine by online turbulent flow solid-phase extraction coupled with liquid chromatography-tandem mass spectrometry: a novel approach for clinical monitoring and doping control.

    PubMed

    Guo, Feng; Shao, Jing; Liu, Qian; Shi, Jian-Bo; Jiang, Gui-Bin

    2014-07-01

    A novel method for automated and sensitive analysis of testosterone, androstenedione, methyltestosterone and methenolone in urine samples by online turbulent flow solid-phase extraction coupled with high performance liquid chromatography-tandem mass spectrometry was developed. The optimization and validation of the method are discussed in detail. The Turboflow C18-P SPE column showed the best extraction efficiency for all the analytes. Nanogram-per-liter (ng/L) levels of these anabolic androgenic steroids (AAS) could be determined directly, and the limits of quantification (LOQs) were 0.01 ng/mL, much lower than the concentrations normally of concern for these typical AAS (0.1 ng/mL). The linearity range was from the LOQ to 100 ng/mL for each compound, with coefficients of determination (r(2)) ranging from 0.9990 to 0.9999. The intraday and interday relative standard deviations (RSDs) ranged from 1.1% to 14.5% (n=5). The proposed method was successfully applied to the analysis of urine samples collected from 24 male athletes and 15 patients with prostate cancer. The proposed method provides a practical alternative for rapidly determining AAS in urine samples, especially for clinical monitoring and doping control.

  4. Bacterial flora in relation to cataract extraction. II. Peroperative flora.

    PubMed

    Fahmy, J A; Moller, S; Bentzon, M W

    1975-06-01

    The peroperative flora of 499 patients undergoing cataract extraction was studied with local bacterial cultures taken at the beginning and end of surgery and compared with the preoperative flora examined previously (Fahmy et al. 1975 b) on admission the day prior to surgery. The local application of a single dose of oxytetracycline - polymyxin B, approximately 18 hours before surgery, significantly reduced the incidence of bacteria at the time of surgery. However, 92% of the conjunctivas examined immediately before operation proved to harbour one or more kinds of microorganisms. Furthermore, 61% of the wound sites were found to be contaminated with bacteria at the conclusion of surgery. The reasons are discussed. The origin of Staphylococcus aureus isolated peroperatively from the conjunctiva and wound site was studied. The great majority of strains could be traced to the patient's own conjunctiva preoperatively. In a few cases S. aureus was traced to the patient's own nose, skin of face or to the surgeon's nose. The air of the wards and operating theatre as well as the hands and gloves of surgeons and assistant nurses apparently did not play any role as a source of S. aureus infection.

  5. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which covers a road cross section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which shows the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
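
    A rough sketch of the per-scan-line processing, assuming a moving-window minimum for ground filtering and a height-jump test for curb candidates; the window size and thresholds are illustrative, not the paper's values.

      import numpy as np

      def classify_scanline(xs, zs, win=25, ground_tol=0.05,
                            curb_jump=(0.08, 0.30)):
          """Label ground points and curb candidates along one cross section.

          xs, zs: horizontal position and height of the points in one
          "scanning line"; curb_jump brackets a typical curb step in metres.
          """
          order = np.argsort(xs)
          xs, zs = xs[order], zs[order]
          ground = np.zeros(len(xs), dtype=bool)
          for i in range(len(xs)):
              lo, hi = max(0, i - win), min(len(xs), i + win + 1)
              ground[i] = zs[i] - zs[lo:hi].min() < ground_tol
          dz = np.abs(np.diff(zs))
          curbs = np.where((dz > curb_jump[0]) & (dz < curb_jump[1]))[0]
          return ground, curbs

      # A flat road with a 12 cm step: the step is flagged as a curb candidate.
      xs = np.linspace(-5, 5, 200)
      zs = np.where(xs < 3, 0.0, 0.12) + \
           0.005 * np.random.default_rng(3).normal(size=200)
      ground, curbs = classify_scanline(xs, zs)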

  6. Pathology report data extraction from relational database using R, with extraction from reports on melanoma of skin as an example

    PubMed Central

    Ye, Jay J.

    2016-01-01

    Background: Different methods have been described for data extraction from pathology reports with varying degrees of success. Here a technique for directly extracting data from a relational database is described. Methods: Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language extended with the RODBC package was used to query the pathology information system database. Reports containing the melanoma of skin synoptic report from the past 4 and a half years were retrieved and individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to retrieve/extract the lymph node staging information in the subsequent reports from the same patients. Results: 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When new pN staging information was present in the subsequent reports, 82% (77/94) was precisely retrieved (pN0, pN1, pN2 and pN3). An additional 15% (14/94) was retrieved with some ambiguity (positive, or knowing there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis. Conclusions: R extended with the RODBC package is a simple and versatile approach well-suited for the above tasks. The success or failure of the retrieval and extraction depended largely on whether the reports were formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adopted for other pathology information systems that use a relational database.
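
    The paper scripts this in R with RODBC; the same flow in Python with pyodbc would look roughly as below. The data source name, table, and column names are hypothetical, and the regex assumes the synoptic element is consistently phrased.

      import re
      import pyodbc  # requires an ODBC driver configured for the pathology DB

      conn = pyodbc.connect("DSN=pathology_db")         # hypothetical DSN
      rows = conn.cursor().execute(
          "SELECT case_id, report_text FROM reports "   # hypothetical schema
          "WHERE report_text LIKE ?",
          "%MELANOMA OF SKIN SYNOPTIC REPORT%").fetchall()

      breslow = {}
      for case_id, text in rows:
          m = re.search(r"Breslow depth[:\s]+([\d.]+)\s*mm", text, re.I)
          if m:
              breslow[case_id] = float(m.group(1))      # depth in mm per case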

  7. Discovery of Predicate-Oriented Relations among Named Entities Extracted from Thai Texts

    NASA Astrophysics Data System (ADS)

    Tongtep, Nattapong; Theeramunkong, Thanaruk

    Extracting named entities (NEs) and their relations is more difficult in Thai than in other languages due to several Thai-specific characteristics, including no explicit boundaries for words, phrases and sentences; few case markers and modifier clues; high ambiguity in compound words and serial verbs; and flexible word orders. Unlike most previous works, which focused on NE relations of specific actions, such as work_for, live_in, located_in, and kill, this paper proposes more general types of NE relations, called predicate-oriented relations (PoR), where an extracted action part (verb) is used as a core component to associate related named entities extracted from Thai texts. Lacking a practical parser for the Thai language, we present three types of surface features, i.e. punctuation marks (such as token spaces), entity types and the number of entities, and then apply five alternative commonly used learning schemes to investigate their performance on predicate-oriented relation extraction. The experimental results show that our approach achieves F-measures of 97.76%, 99.19%, 95.00% and 93.50% on four different types of predicate-oriented relation (action-location, location-action, action-person and person-action) in crime-related news documents using a data set of 1,736 entity pairs. The effects of NE extraction techniques, feature sets and class imbalance on the performance of relation extraction are explored.

  8. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; hide

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  9. Automated on-fiber derivatization with headspace SPME-GC-MS-MS for the determination of primary amines in sewage sludge using pressurized hot water extraction.

    PubMed

    Llop, Anna; Pocurull, Eva; Borrull, Francesc

    2011-07-01

    An automated, environmentally friendly, simple, selective, and sensitive method was developed for the determination of ten primary aliphatic amines in sewage sludge at μg/kg dry weight (d.w.) levels. The procedure involves pressurized hot water extraction (PHWE) of the analytes from the solid matrix, followed by a fully automated on-fiber derivatization with 2,3,4,5-pentafluorobenzaldehyde (PFBAY) and headspace solid-phase microextraction (HS-SPME), and subsequent gas chromatography ion-trap tandem mass spectrometry (GC-IT-MS-MS) analysis. The limits of detection (LODs) of the method were between 0.5 and 45 μg/kg (d.w.) for all compounds except ethyl-, isopropyl-, and amylamine, whose LODs were 70, 109, and 116 μg/kg (d.w.), respectively. The limits of quantification (LOQs) were between 10 and 350 μg/kg (d.w.). Repeatability and intermediate precision, expressed as RSD(%) (n=3), were lower than 18% and 21%, respectively. The method enabled the determination of primary aliphatic amines in sludge from various urban and industrial sewage treatment plants as well as from a potable water treatment plant. Most of the primary aliphatic amines were found in the sewage sludge samples analyzed, with the maximum concentrations in the samples from the urban plant: for instance, isobutylamine and methylamine were found at 7728 and 12,536 μg/kg (d.w.), respectively. Amylamine was detected in only a few samples and always at concentrations lower than its LOQ.

  10. A neural joint model for entity and relation extraction from biomedical text.

    PubMed

    Li, Fei; Zhang, Meishan; Fu, Guohong; Ji, Donghong

    2017-03-31

    Extracting biomedical entities and their relations from text has important applications in biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Much effort must be devoted to feature engineering when feature-based models are employed. Moreover, pipeline models may suffer from error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, which can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performance with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate research on biomedical text mining.

  11. Temporal Relation Extraction in Outcome Variances of Clinical Pathways.

    PubMed

    Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio

    2015-01-01

    Recently, the clinical pathway has progressed with digitalization and the analysis of activity. There are many previous studies on clinical pathways, but few feed directly into medical practice. We constructed a mind map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization.

  12. Recommendations relative to the scientific missions of a Mars Automated Roving Vehicle (MARV)

    NASA Technical Reports Server (NTRS)

    Spencer, R. L. (Editor)

    1973-01-01

    Scientific objectives of the MARV mission are outlined and specific science systems requirements and experimental payloads defined. All aspects of the Martian surface relative to biotic and geologic elements and those relating to geophysical and geochemical properties are explored.

  13. CD-REST: a system for extracting chemical-induced disease relation in literature.

    PubMed

    Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua

    2016-01-01

    Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST requests. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html.
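
    Since the record only states that CD-REST accepts HTTP POST requests, the payload field name below ("text") is an assumption for illustration; the service's real request schema may differ.

      import requests

      abstract = ("Carbamazepine-induced cardiac failure was observed in a "
                  "patient treated for epilepsy.")
      # Hypothetical payload field; consult the service documentation for
      # the actual request schema.
      resp = requests.post("http://clinicalnlptool.com/cdr",
                           data={"text": abstract})
      print(resp.status_code, resp.text[:200])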

  14. Extraction of Children's Friendship Relation from Activity Level

    NASA Astrophysics Data System (ADS)

    Kono, Aki; Shintani, Kimio; Katsuki, Takuya; Kihara, Shin'ya; Ueda, Mari; Kaneda, Shigeo; Haga, Hirohide

    Children learn to fit into society through living in a group, and this is greatly influenced by their friendship relations. Although preschool teachers need to observe children to assist their social growth and support the development of each child's personality, only experienced teachers can watch over children while providing high-quality guidance. To address this problem, this paper proposes a mathematical, objective method that assists teachers with observation. It uses numerical activity-level data recorded by pedometers, from which we build a tree diagram called a dendrogram based on hierarchical clustering of the recorded activity levels. We also calculate the ``breadth'' and ``depth'' of children's friendship relations by using more than one dendrogram. We recorded children's activity levels in a kindergarten for two months and evaluated the proposed method; the results usually coincide with teachers' remarks about the children.
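
    The dendrogram step can be illustrated with SciPy's hierarchical clustering. A sketch with synthetic pedometer counts (the data shape, linkage method, and distance metric are assumptions):

```python
# Sketch: hierarchically cluster children by the similarity of their
# pedometer activity-level time series and draw the dendrogram.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

rng = np.random.default_rng(0)
activity = rng.poisson(lam=50, size=(6, 30))   # 6 children x 30 time slots
Z = linkage(activity, method="average", metric="euclidean")
dendrogram(Z, labels=[f"child{i}" for i in range(6)])
plt.show()
```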

  15. Background Knowledge in Learning-Based Relation Extraction

    DTIC Science & Technology

    2012-01-01

    Four types of taxonomic relations are covered, with a balanced number of examples in all data sets; some examples are shown in Table 3.1.

  16. Automation of Silica Bead-based Nucleic Acid Extraction on a Centrifugal Lab-on-a-Disc Platform

    NASA Astrophysics Data System (ADS)

    Kinahan, David J.; Mangwanya, Faith; Garvey, Robert; Chung, Danielle WY; Lipinski, Artur; Julius, Lourdes AN; King, Damien; Mohammadi, Mehdi; Mishra, Rohit; Al-Ofi, May; Miyazaki, Celina; Ducrée, Jens

    2016-10-01

    We describe a centrifugal microfluidic ‘Lab-on-a-Disc’ (LoaD) technology for DNA purification towards eventual integration into a Sample-to-Answer platform for detection of the pathogen Escherichia coli O157:H7 from food samples. For this application, we use a novel microfluidic architecture which combines ‘event-triggered’ dissolvable film (DF) valves with a reaction chamber gated by a centrifugo-pneumatic siphon valve (CPSV). This architecture permits comprehensive flow control by simple changes in the speed of the platform's innate spindle motor. Even before method optimisation, characterisation by DNA fluorescence reveals an extraction efficiency of 58%, which is close to that of commercial spin columns.

  17. MDL constrained 3-D grayscale skeletonization algorithm for automated extraction of dendrites and spines from fluorescence confocal images.

    PubMed

    Yuan, Xiaosong; Trachtenberg, Joshua T; Potter, Steve M; Roysam, Badrinath

    2009-12-01

    This paper presents a method for improved automatic delineation of dendrites and spines from three-dimensional (3-D) images of neurons acquired by confocal or multi-photon fluorescence microscopy. The core advance presented here is a direct grayscale skeletonization algorithm that is constrained by a structural complexity penalty using the minimum description length (MDL) principle, and additional neuroanatomy-specific constraints. The 3-D skeleton is extracted directly from the grayscale image data, avoiding errors introduced by image binarization. The MDL method achieves a practical tradeoff between the complexity of the skeleton and its coverage of the fluorescence signal. Additional advances include the use of 3-D spline smoothing of dendrites to improve spine detection, and graph-theoretic algorithms to explore and extract the dendritic structure from the grayscale skeleton using an intensity-weighted minimum spanning tree (IW-MST) algorithm. This algorithm was evaluated on 30 datasets organized in 8 groups from multiple laboratories. Spines were detected with false negative rates less than 10% on most datasets (the average is 7.1%), and the average false positive rate was 11.8%. The software is available in open source form.
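
    The graph-theoretic step can be sketched on a toy voxel graph: an intensity-weighted minimum spanning tree, where edges between bright voxels are cheap so the tree follows the fluorescence signal (the weighting formula below is illustrative, not the paper's exact formulation):

```python
# Sketch: intensity-weighted MST over candidate skeleton edges.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy skeleton: 5 voxels with grayscale intensities and candidate edges.
intensity = np.array([200.0, 180.0, 150.0, 90.0, 220.0])
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]

n = len(intensity)
w = np.zeros((n, n))
for i, j in edges:
    # Edge cost inversely related to the mean intensity of its endpoints,
    # so high-signal paths are preferred by the spanning tree.
    w[i, j] = 1.0 / (0.5 * (intensity[i] + intensity[j]))

mst = minimum_spanning_tree(csr_matrix(w))
print(mst.toarray())   # nonzero entries are the retained skeleton edges
```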

  18. Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language.

    PubMed

    2008-02-01

    A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly-used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., Negativity factor, wherein most loaded items were negatively valenced adjectives); others were dimensional in that semantically opposite words clustered together (e.g., Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded in the same direction). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions associated with physical appearance, school, relationships, etc. in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy that determines the dimensions along which people think about themselves.
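
    The core computation is a factor analysis over adjective-usage counts. A compact sketch with synthetic data (the matrix dimensions, counts, and number of factors are made up):

```python
# Sketch of the meaning extraction idea: factor-analyze how often the most
# common adjectives appear across self-descriptive texts.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# Rows: narratives; columns: counts of the most common adjectives.
adjective_counts = rng.poisson(lam=1.0, size=(100, 12))

fa = FactorAnalysis(n_components=3, random_state=1)
fa.fit(adjective_counts)
# Loadings show which adjectives cluster on each latent dimension.
print(fa.components_.round(2))
```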

  19. Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language

    PubMed Central

    2008-01-01

    A new method for extracting common themes from written text is introduced and applied to 1,165 open-ended self-descriptive narratives. Drawing on a lexical approach to personality, the most commonly-used adjectives within narratives written by college students were identified using computerized text analytic tools. A factor analysis on the use of these adjectives in the self-descriptions produced a 7-factor solution consisting of psychologically meaningful dimensions. Some dimensions were unipolar (e.g., Negativity factor, wherein most loaded items were negatively valenced adjectives); others were dimensional in that semantically opposite words clustered together (e.g., Sociability factor, wherein terms such as shy, outgoing, reserved, and loud all loaded in the same direction). The factors exhibited modest reliability across different types of writing samples and were correlated with self-reports and behaviors consistent with the dimensions. Similar analyses with additional content words (adjectives, adverbs, nouns, and verbs) yielded additional psychological dimensions associated with physical appearance, school, relationships, etc. in which people contextualize their self-concepts. The results suggest that the meaning extraction method is a promising strategy that determines the dimensions along which people think about themselves. PMID:18802499

  20. Flat-relative optimal extraction. A quick and efficient algorithm for stabilised spectrographs

    NASA Astrophysics Data System (ADS)

    Zechmeister, M.; Anglada-Escudé, G.; Reiners, A.

    2014-01-01

    Context. Optimal extraction is a key step in processing the raw images of spectra as registered by two-dimensional detector arrays to a one-dimensional format. Previously reported algorithms reconstruct models for a mean one-dimensional spatial profile to assist a properly weighted extraction. Aims: We outline a simple optimal extraction algorithm (including error propagation), which is very suitable for stabilised, fibre-fed spectrographs and does not model the spatial profile shape. Methods: A master-flat image with a high signal-to-noise ratio serves as the reference image and is used directly as an extraction profile mask. Each extracted spectral value is the scaling factor relative to the cross-section of the unnormalised master flat, which contains all information about the spatial profile as well as pixel-to-pixel variations, fringing, and blaze. The extracted spectrum is measured relative to the flat spectrum. Results: Using echelle spectra from the HARPS spectrograph, we demonstrate competitive extraction performance in terms of signal-to-noise ratio and show that the extracted spectra can be used for high-precision radial velocity measurements. Conclusions: Pre- or post-flat-fielding of the data is not necessary, since all spectrograph inefficiencies inherent to the extraction mask are automatically accounted for. Nor is model reconstruction of the mean spatial profile needed, which reduces the number of operations required to extract spectra. Flat-relative optimal extraction is a simple, efficient, and robust method that can be applied easily to stabilised, fibre-fed spectrographs. Based on data obtained from the ESO Science Archive Facility under request number ZECHMEISTER73978. Based on observations made with the HARPS instrument on the ESO 3.6 m telescope under programme ID 074.D-0380.
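
    Per detector column, the extracted value reduces to a weighted least-squares scale factor of the data cross-section against the unnormalised master-flat cross-section. A sketch under simplified assumptions (constant variance, no cosmic-ray masking; the toy profile and spectrum are made up):

```python
# Sketch of the flat-relative extraction step.
import numpy as np

def flat_relative_extract(data, flat, var):
    """data, flat, var: 2-D arrays (cross-dispersion x dispersion)."""
    w = 1.0 / var                    # inverse-variance weights
    s = np.sum(w * flat * data, axis=0) / np.sum(w * flat**2, axis=0)
    s_var = 1.0 / np.sum(w * flat**2, axis=0)   # error propagation
    return s, s_var

# Toy order: a Gaussian spatial profile scaled by a smooth spectrum.
y = np.arange(11)[:, None]
flat = 1000.0 * np.exp(-0.5 * ((y - 5) / 1.5) ** 2)
spectrum = 1.0 + 0.1 * np.sin(np.linspace(0, 3, 200))
data = flat * spectrum

s, s_var = flat_relative_extract(data, flat, np.full(data.shape, 1.0))
print(np.allclose(s, spectrum))   # True: recovers the flat-relative spectrum
```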

  1. Human Factors In Aircraft Automation

    NASA Technical Reports Server (NTRS)

    Billings, Charles

    1995-01-01

    Report presents a survey of the state of the art in human factors in the automation of aircraft operation. Examines aircraft automation and its effects on flight crews in relation to human error and aircraft accidents.

  2. Investigation of dynamic thiol-disulphide homoeostasis in age-related cataract patients with a novel and automated assay.

    PubMed

    Sagdik, Haci Murat; Ucar, Fatma; Tetikoglu, Mehmet; Aktas, Serdar; Ozcura, Fatih; Kocak, Havva; Neselioglu, Salim; Eren, Funda

    2017-04-10

    The aim of this study was to determine plasma thiol-disulphide homoeostasis in patients with age-related cataract (ARC) and compare the results of the patients with healthy subjects. Plasma malondialdehyde (MDA) levels and catalase (CAT) activity were also investigated. The study included 53 cataract patients and 52 healthy volunteers. Native thiol-disulphide exchanges were determined using a novel and automated method. CAT activity was determined using the method described by Aebi, and MDA levels were calculated using the thiobarbituric acid method. Native thiol and total thiol levels were significantly lower in the cataract patients compared with the controls (p < 0.001, p = 0.002, respectively). The disulphide levels of the cataract patients were significantly higher than the controls (p = 0.002). The ratios of disulphide/native thiol and disulphide/total thiol were statistically higher in the cataract patients compared with the control group (p < 0.001, p < 0.001, respectively). Furthermore, CAT activity was significantly lower in the cataract patient group compared with the control group (p < 0.001), and MDA levels were insignificantly higher in the patient group (p = 0.581). Our study showed that dynamic thiol-disulphide homoeostasis has shifted towards disulphide formation, as a result of thiol oxidation in ARC patients. The present study is the first to measure thiol-disulphide homoeostasis in ARC patients with a novel automated assay. This study supports the hypothesis that cataract is an oxidative disorder. Further studies are required in order to examine the relationship between oxidative stress and the development of cataract formation.

  3. An unsupervised text mining method for relation extraction from biomedical literature.

    PubMed

    Quan, Changqin; Wang, Meng; Ren, Fuji

    2014-01-01

    The wealth of interaction information provided in biomedical articles has motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing for biomedical relation extraction. The pattern clustering algorithm is based on the polynomial kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised one by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods (rule-based, SVM-based, and kernel-based), and the proposed semi-supervised approach is superior to existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than the co-occurrence-based method.

  4. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    PubMed

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transference of volatile compounds from the sample to the ITEX trap. For achieving that goal most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee a complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different to 100%, except in the case of acetaldehyde. In such a case it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, such problem was avoided after incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes.

  5. Extracting Concepts Related to Homelessness from the Free Text of VA Electronic Medical Records.

    PubMed

    Gundlapalli, Adi V; Carter, Marjorie E; Divita, Guy; Shen, Shuying; Palmer, Miland; South, Brett; Durgahee, B S Begum; Redd, Andrew; Samore, Matthew

    2014-01-01

    Mining the free text of electronic medical records (EMR) using natural language processing (NLP) is an effective method of extracting information not always captured in administrative data. We sought to determine if concepts related to homelessness, a non-medical condition, were amenable to extraction from the EMR of Veterans Affairs (VA) medical records. As there were no off-the-shelf products, a lexicon of terms related to homelessness was created. A corpus of free text documents from outpatient encounters was reviewed to create the reference standard for NLP training and testing. V3NLP Framework was used to detect instances of lexical terms and was compared to the reference standard. With a positive predictive value of 77% for extracting relevant concepts, this study demonstrates the feasibility of extracting positively asserted concepts related to homelessness from the free text of medical records.
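
    Term detection against a custom lexicon can be sketched in a few lines (the lexicon entries below are hypothetical; the actual study used the V3NLP Framework and additionally handled assertion status, e.g. negation):

```python
# Sketch: lexicon-based concept detection in clinical free text.
import re

lexicon = ["homeless", "homelessness", "living in shelter", "no fixed address"]
pattern = re.compile(r"\b(" + "|".join(map(re.escape, lexicon)) + r")\b",
                     re.IGNORECASE)

note = "Veteran reports being homeless and has no fixed address at this time."
for m in pattern.finditer(note):
    # Real pipelines would also check that the concept is positively asserted.
    print(m.group(0), m.span())
```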

  6. Extracting Concepts Related to Homelessness from the Free Text of VA Electronic Medical Records

    PubMed Central

    Gundlapalli, Adi V.; Carter, Marjorie E.; Divita, Guy; Shen, Shuying; Palmer, Miland; South, Brett; Durgahee, B.S. Begum; Redd, Andrew; Samore, Matthew

    2014-01-01

    Mining the free text of electronic medical records (EMR) using natural language processing (NLP) is an effective method of extracting information not always captured in administrative data. We sought to determine if concepts related to homelessness, a non-medical condition, were amenable to extraction from the EMR of Veterans Affairs (VA) medical records. As there were no off-the-shelf products, a lexicon of terms related to homelessness was created. A corpus of free text documents from outpatient encounters was reviewed to create the reference standard for NLP training and testing. V3NLP Framework was used to detect instances of lexical terms and was compared to the reference standard. With a positive predictive value of 77% for extracting relevant concepts, this study demonstrates the feasibility of extracting positively asserted concepts related to homelessness from the free text of medical records. PMID:25954364

  7. Rapid analysis of trace organic compounds in water by automated online solid-phase extraction coupled to liquid chromatography-tandem mass spectrometry.

    PubMed

    Anumol, Tarun; Snyder, Shane A

    2015-01-01

    A fully automated online solid-phase extraction (SPE) with directly coupled liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for analysis of 34 trace organic compounds in diverse water matrices has been developed. The current method offers several advantages over traditional offline SPE methods, including low sample volume (1.7 mL), decreased solvent use, higher throughput, and increased reproducibility. The method uses simultaneous positive and negative ESI for analysis of all compounds in one injection, which reduces cycle time (extraction + analysis) to <15 min. Method optimization included testing different online SPE cartridges, mobile phase compositions, and flow rates. The method detection limits (MDLs) ranged from 0.1 to 13.1 ng/L, with 80% of the compounds having an MDL <5 ng/L. Matrix spike recoveries in three different water qualities ranged from 61.2% to 145.1%, with 95% of the recoveries between 70% and 130%. As part of the method validation studies, linearity (0.9911-0.9998), intra-day variability (1.0-10.4%), inter-day variability (1.0-11.9%), and matrix effects were also assessed. The use of 26 isotopically-labeled standards increased the reliability of the method, while retention time locking and the use of two transitions for most compounds increased the specificity. The applicability of the method was tested on samples across treatment points from two wastewater plants, a septic tank, surface water and groundwater. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Analysis of trace contamination of phthalate esters in ultrapure water using a modified solid-phase extraction procedure and automated thermal desorption-gas chromatography/mass spectrometry.

    PubMed

    Liu, Hsu-Chuan; Den, Walter; Chan, Shu-Fei; Kin, Kuan Tzu

    2008-04-25

    The present study aimed to develop a procedure, modified from the conventional solid-phase extraction (SPE) method, for the analysis of trace concentrations of phthalate esters in industrial ultrapure water (UPW). In the proposed procedure, a UPW sample is drawn through a sampling tube containing a hydrophobic sorbent (Tenax TA) to concentrate the aqueous phthalate esters. The solid trap was then demoisturized by two-stage gas drying before thermal desorption and analysis by gas chromatography-mass spectrometry. This process removes the solvent extraction step required by the conventional SPE method and permits automation of the analytical procedure for high-volume analyses. Several important parameters, including desorption temperature and duration, packing quantity and demoisturizing procedure, were optimized based on the analytical sensitivity for a standard mixture containing five different phthalate esters. The method detection limits for the five phthalate esters were between 36 ng l(-1) and 95 ng l(-1), with recovery rates between 15% and 101%. Dioctyl phthalate (DOP) was not recovered adequately because the compound was both poorly adsorbed onto and poorly desorbed from Tenax TA sorbents. Furthermore, analyses of material leaching from poly(vinyl chloride) (PVC) tubes as well as of actual water samples showed that di-n-butyl phthalate (DBP) and di(2-ethylhexyl) phthalate (DEHP) were the common contaminants detected in PVC-contaminated UPW and actual UPW, as well as in tap water. The reduction of DEHP in the production processes of actual UPW was clearly observed; however, a DEHP concentration of 0.20 microg l(-1) at the point of use was still being quantified, suggesting that contamination with phthalate esters could present a barrier to future cleanliness requirements for UPW. The work demonstrated that the proposed modified SPE procedure provided an effective method for rapid analysis and contamination

  9. Simultaneous determination of cotinine and trans-3-hydroxycotinine in urine by automated solid-phase extraction using gas chromatography–mass spectrometry

    PubMed Central

    Chiadmi, Fouad; Schlatter, Joël

    2014-01-01

    A gas chromatography–mass spectrometry method was developed and validated for the simultaneous automated solid-phase extraction and quantification of cotinine and trans-3-hydroxycotinine in human urine. Good linearity was observed over the concentration ranges studied (R2 > 0.99). The limit of quantification was 10 ng/mL for both analytes. The limits of detection were 0.06 ng/mL for cotinine (COT) and 0.02 ng/mL for trans-3-hydroxycotinine (OH-COT). Accuracy for COT ranged from 0.98 to 5.28% and the precision ranged from 1.24 to 8.78%. Accuracy for OH-COT ranged from −2.66 to 3.72% and the precision ranged from 3.15 to 7.07%. Mean recoveries for cotinine and trans-3-hydroxycotinine ranged from 77.7 to 89.1%, and from 75.4 to 90.2%, respectively. This analytical method for the simultaneous measurement of cotinine and trans-3-hydroxycotinine in urine will be used to monitor tobacco smoking in pregnant women and will permit the usefulness of trans-3-hydroxycotinine as a specific biomarker of tobacco exposure to be determined. © 2014 The Authors. Biomedical Chromatography published by John Wiley & Sons Ltd. PMID:24616054

  10. Automated determination of total captopril in urine by liquid chromatography with post-column derivatization coupled to on-line solid phase extraction in a sequential injection manifold.

    PubMed

    Karakosta, Theano D; Tzanavaras, Paraskevas D; Themelis, Demetrius G

    2012-01-15

    The present study reports a new liquid chromatographic (HPLC) method for the determination of the anti-hypertension drug captopril (CAP) in human urine. After its separation from the sample matrix in a reversed phase HPLC column, CAP reacts with the thiol-selective reagent ethyl-propiolate (EP) in a post-column configuration and the formed thioacrylate derivative is detected at 285 nm. Automated 4-fold preconcentration of the analyte prior to analysis was achieved by an on-line solid phase extraction (SPE) step using a sequential injection (SI) manifold. The Oasis HLB SPE cartridges offered quantitative recoveries and effective sample cleaning by applying a simple SPE protocol. The limits of detection and quantitation were 10 μg L(-1) and 35 μg L(-1) respectively. The percent recoveries for the analysis of human urine samples ranged between 90 and 96% and 95 and 104% using aqueous and matrix matched calibration curves respectively. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Integrated DNA and RNA extraction and purification on an automated microfluidic cassette from bacterial and viral pathogens causing community-acquired lower respiratory tract infections.

    PubMed

    Van Heirstraeten, Liesbet; Spang, Peter; Schwind, Carmen; Drese, Klaus S; Ritzi-Lehnert, Marion; Nieto, Benjamin; Camps, Marta; Landgraf, Bryan; Guasch, Francesc; Corbera, Antoni Homs; Samitier, Josep; Goossens, Herman; Malhotra-Kumar, Surbhi; Roeser, Tina

    2014-05-07

    In this paper, we describe the development of an automated sample preparation procedure for etiological agents of community-acquired lower respiratory tract infections (CA-LRTI). The consecutive assay steps, including sample re-suspension, pre-treatment, lysis, nucleic acid purification, and concentration, were integrated into a microfluidic lab-on-a-chip (LOC) cassette that is operated hands-free by a demonstrator setup, providing fluidic and valve actuation. The performance of the assay was evaluated on viral and Gram-positive and Gram-negative bacterial broth cultures previously sampled using a nasopharyngeal swab. Sample preparation on the microfluidic cassette resulted in higher or similar concentrations of pure bacterial DNA or viral RNA compared to manual benchtop experiments. The miniaturization and integration of the complete sample preparation procedure, to extract purified nucleic acids from real samples of CA-LRTI pathogens to, and above, lab quality and efficiency, represent important steps towards its application in a point-of-care test (POCT) for rapid diagnosis of CA-LRTI.

  12. High sensitivity measurements of active oxysterols with automated filtration/filter backflush-solid phase extraction-liquid chromatography-mass spectrometry.

    PubMed

    Roberg-Larsen, Hanne; Strand, Martin Frank; Grimsmo, Anders; Olsen, Petter Angell; Dembinski, Jennifer L; Rise, Frode; Lundanes, Elsa; Greibrokk, Tyge; Krauss, Stefan; Wilson, Steven Ray

    2012-09-14

    Oxysterols are important in numerous biological processes, including cell signaling. Here we present an automated filtration/filter backflush-solid phase extraction-liquid chromatography-tandem mass spectrometry (AFFL-SPE-LC-MS/MS) method for determining 24-hydroxysterol and the isomers 25-hydroxycholesterol and 22S-hydroxycholesterol that enables simplified sample preparation, high sensitivity (~25 pg/mL cell lysis sample) and low sample variability. Only one sample transfer step was required for the entire process of cell lysis, derivatization and determination of selected oxysterols. During the procedure, autoxidation of cholesterol, a potential/common problem using standard analytical methods, was found to be negligible. The reversed phase AFFL-SPE-LC-MS/MS method utilizing a 1mm inner diameter column was validated, and used to determine levels of the oxysterol analytes in mouse fibroblast cell lines SSh-LII and NIH-3T3, and human cancer cell lines, BxPC3, HCT-15 and HCT-116. In BxPC3 cells, the AFFL-SPE-LC-MS/MS method was used to detect significant differences in 24S-OHC levels between vimentin+ and vimentin- heterogenous sub-populations. The methodology also allowed monitoring of significant alterations in 24S-OHC levels upon delivery of the Hedgehog (Hh) antagonist MS-0022 in HCT-116 colorectal carcinoma cell lines. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Comparison of turbulent-flow chromatography with automated solid-phase extraction in 96-well plates and liquid-liquid extraction used as plasma sample preparation techniques for liquid chromatography-tandem mass spectrometry.

    PubMed

    Zimmer, D; Pickard, V; Czembor, W; Müller, C

    1999-08-27

    Turbulent flow chromatography (TFC) combined with the high selectivity and sensitivity of tandem mass spectrometry (MS-MS) is a new technique for the fast direct analysis of drugs from crude plasma. TFC in the 96-well plate format significantly reduces the time required for sample clean-up in the laboratory. For example, for 100 samples the workload for a technician is reduced from about 8 h for a manual liquid-liquid extraction (LLE) assay to about 1 h in the case of TFC. Sample clean-up and analysis are performed on-line on the same column. Similar chromatographic performance and validation results were achieved using HTLC Turbo-C18 columns (Cohesive Technologies) and Oasis HLB extraction columns (Waters). One 96-well plate with 96 plasma samples is analyzed within 5.25 h, corresponding to 3.3 min per sample. By comparison, LLE and analysis of 96 samples takes about 16 h. Two structurally different and highly protein-bound compounds, drug A and drug B, were analyzed under identical TFC conditions, and the assays were fully validated for application to toxicokinetic studies (compliant with Good Laboratory Practice, GLP). The limit of quantitation was 1.00 microg/l and the linear working range covered three orders of magnitude for both drugs. In the case of drug A, the quality of analysis by TFC was similar to the reference LLE assay and slightly better than automated solid-phase extraction in 96-well plates. The accuracy was -3.1 to 6.7% and the precision was 3.1 to 6.8% for drug A determined in dog plasma by TFC-MS-MS. For drug B the accuracy was -3.7 to 3.5% and the precision was 1.6 to 5.4% in rat plasma, which is even slightly better than what was achieved with the validated protein precipitation assay.

  14. Extraction and colorimetric determination of azadirachtin-related limonoids in neem seed kernel.

    PubMed

    Dai, J; Yaylayan, V A; Raghavan, G S; Parè, J R

    1999-09-01

    A colorimetric method was developed for the determination of total azadirachtin-related limonoids (AZRL) in neem seed kernel extracts. The method employed acidified vanillin solution in methanol for the colorization of the standard azadirachtin or neem seed kernel extracts in dichloromethane. Through the investigation of various factors influencing the sensitivity of detection, such as the concentration of vanillin, acid, and the time required for the formation of color, optimum conditions were selected to perform the assay. Under the optimum conditions, a good linearity was found between the absorbance at 577 nm and the concentration of standard azadirachtin solution in the range of 0.01-0.10 mg/mL. In addition, different extraction procedures were evaluated using the vanillin assay. The HPLC analysis of the extracts indicated that if the extractions were performed in methanol followed by partitioning in dichloromethane, approximately 50% of the value determined by the vanillin assay represents azadirachtin content.

  15. Development and validation of an automated liquid-liquid extraction GC/MS method for the determination of THC, 11-OH-THC, and free THC-carboxylic acid (THC-COOH) from blood serum.

    PubMed

    Purschke, Kirsten; Heinl, Sonja; Lerch, Oliver; Erdmann, Freidoon; Veit, Florian

    2016-06-01

    The analysis of Δ(9)-tetrahydrocannabinol (THC) and its metabolites 11-hydroxy-Δ(9)-tetrahydrocannabinol (11-OH-THC), and 11-nor-9-carboxy-Δ(9)-tetrahydrocannabinol (THC-COOH) from blood serum is a routine task in forensic toxicology laboratories. For examination of consumption habits, the concentration of the phase I metabolite THC-COOH is used. Recommendations for interpretation of analysis values in medical-psychological assessments (regranting of driver's licenses, Germany) include threshold values for the free, unconjugated THC-COOH. Using a fully automated two-step liquid-liquid extraction, THC, 11-OH-THC, and free, unconjugated THC-COOH were extracted from blood serum, silylated with N-methyl-N-(trimethylsilyl) trifluoroacetamide (MSTFA), and analyzed by GC/MS. The automation was carried out by an x-y-z sample robot equipped with modules for shaking, centrifugation, and solvent evaporation. This method was based on a previously developed manual sample preparation method. Validation guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh) were fulfilled for both methods, at which the focus of this article is the automated one. Limits of detection and quantification for THC were 0.3 and 0.6 μg/L, for 11-OH-THC were 0.1 and 0.8 μg/L, and for THC-COOH were 0.3 and 1.1 μg/L, when extracting only 0.5 mL of blood serum. Therefore, the required limit of quantification for THC of 1 μg/L in driving under the influence of cannabis cases in Germany (and other countries) can be reached and the method can be employed in that context. Real and external control samples were analyzed, and a round robin test was passed successfully. To date, the method is employed in the Institute of Legal Medicine in Giessen, Germany, in daily routine. Automation helps in avoiding errors during sample preparation and reduces the workload of the laboratory personnel. Due to its flexibility, the analysis system can be employed for other liquid-liquid extractions as

  16. Developing an Automated Technique for Translating a Relational Database into an Equivalent Network One.

    DTIC Science & Technology

    1984-12-01

    The relational model is considered to be superior in terms of the logical description of databases, whereas the network model is more efficient in space and time. The high-level operators are those of the relational algebra and equivalent languages.

  17. Systematic review automation technologies.

    PubMed

    Tsafnat, Guy; Glasziou, Paul; Choong, Miew Keen; Dunn, Adam; Galgani, Filippo; Coiera, Enrico

    2014-07-09

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.

  18. Systematic review automation technologies

    PubMed Central

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  19. Extraction conditions of white rose petals for the inhibition of enzymes related to skin aging.

    PubMed

    Choi, Ehn-Kyoung; Guo, Haiyu; Choi, Jae-Kwon; Jang, Su-Kil; Shin, Kyungha; Cha, Ye-Seul; Choi, Youngjin; Seo, Da-Woom; Lee, Yoon-Bok; Joo, Seong-So; Kim, Yun-Bae

    2015-09-01

    In order to assess inhibitory potentials of white rose petal extracts (WRPE) on the activities of enzymes related to dermal aging according to the extraction conditions, three extraction methods were adopted. WRPE was prepared by extracting dried white rose (Rosa hybrida) petals with 50% ethanol (WRPE-EtOH), Pectinex® SMASH XXL enzyme (WRPE-enzyme) or high temperature-high pressure (WRPE-HTHP). In the inhibition of matrix metalloproteinase-1, although the enzyme activity was fully inhibited by all 3 extracts at 100 µg/mL in 60 min, partial inhibition (50-70%) was achieved only by WRPE-EtOH and WRPE-enzyme at 50 µg/mL. High concentrations (≥250 µg/mL) of all 3 extracts markedly inhibited the elastase activity. However, at low concentrations (15.6-125 µg/mL), only WRPE-EtOH inhibited the enzyme activity. Notably, WRPE-EtOH was superior to WRPE-enzyme and WRPE-HTHP in the inhibition of tyrosinase. WRPE-EtOH significantly inhibited the enzyme activity from 31.2 µM, reaching 80% inhibition at 125 µM. In addition to its strong antioxidative activity, the ethanol extract of white rose petals was confirmed to be effective in inhibiting skin aging-related enzymes. Therefore, it is suggested that WRPE-EtOH could be a good candidate for the improvement of skin aging such as wrinkle formation and pigmentation.

  20. Extraction conditions of white rose petals for the inhibition of enzymes related to skin aging

    PubMed Central

    Choi, Ehn-Kyoung; Guo, Haiyu; Choi, Jae-Kwon; Jang, Su-Kil; Shin, Kyungha; Cha, Ye-Seul; Choi, Youngjin; Seo, Da-Woom; Lee, Yoon-Bok

    2015-01-01

    In order to assess inhibitory potentials of white rose petal extracts (WRPE) on the activities of enzymes related to dermal aging according to the extraction conditions, three extraction methods were adopted. WRPE was prepared by extracting dried white rose (Rosa hybrida) petals with 50% ethanol (WRPE-EtOH), Pectinex® SMASH XXL enzyme (WRPE-enzyme) or high temperature-high pressure (WRPE-HTHP). In the inhibition of matrix metalloproteinase-1, although the enzyme activity was fully inhibited by all 3 extracts at 100 µg/mL in 60 min, partial inhibition (50-70%) was achieved only by WRPE-EtOH and WRPE-enzyme at 50 µg/mL. High concentrations (≥250 µg/mL) of all 3 extracts markedly inhibited the elastase activity. However, at low concentrations (15.6-125 µg/mL), only WRPE-EtOH inhibited the enzyme activity. Notably, WRPE-EtOH was superior to WRPE-enzyme and WRPE-HTHP in the inhibition of tyrosinase. WRPE-EtOH significantly inhibited the enzyme activity from 31.2 µM, reaching 80% inhibition at 125 µM. In addition to its strong antioxidative activity, the ethanol extract of white rose petals was confirmed to be effective in inhibiting skin aging-related enzymes. Therefore, it is suggested that WRPE-EtOH could be a good candidate for the improvement of skin aging such as wrinkle formation and pigmentation. PMID:26472968

  1. Extracting relations from traditional Chinese medicine literature via heterogeneous entity networks.

    PubMed

    Wan, Huaiyu; Moens, Marie-Francine; Luyten, Walter; Zhou, Xuezhong; Mei, Qiaozhu; Liu, Lu; Tang, Jie

    2016-03-01

    Traditional Chinese medicine (TCM) is a unique and complex medical system that has developed over thousands of years. This article studies the problem of automatically extracting meaningful relations of entities from TCM literature, for the purposes of assisting clinical treatment or poly-pharmacology research and promoting the understanding of TCM in Western countries. Instead of separately extracting each relation from a single sentence or document, we propose to collectively and globally extract multiple types of relations (e.g., herb-syndrome, herb-disease, formula-syndrome, formula-disease, and syndrome-disease relations) from the entire corpus of TCM literature, from the perspective of network mining. In our analysis, we first constructed heterogeneous entity networks from the TCM literature, in which each edge is a candidate relation, then used a heterogeneous factor graph model (HFGM) to simultaneously infer the existence of all the edges. We also employed a semi-supervised learning algorithm to estimate the model's parameters. We applied our method to extract relations from a large dataset consisting of more than 100,000 TCM article abstracts. Our results show that the performance of the HFGM at extracting all types of relations from TCM literature was significantly better than that of a traditional support vector machine (SVM) classifier, increasing the average precision by 11.09%, the recall by 13.83%, and the F1-measure by 12.47% across relation types. This study exploits the power of collective inference and proposes an HFGM based on heterogeneous entity networks, which significantly improved our ability to extract relations from TCM literature. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Automated semen analysis: 'zona pellucida preferred' sperm morphometry and straight-line velocity are related to pregnancy rate in subfertile couples.

    PubMed

    Garrett, C; Liu, D Y; Clarke, G N; Rushford, D D; Baker, H W G

    2003-08-01

    Standard semen analysis has low objectivity and reproducibility and is not closely related to fertility. We assess the prognostic value of automated measurements of sperm motility and morphology. During 1997-1999, 1191 infertile couples with no known absolute barrier to conception were assessed by conventional semen analysis, and automated measurements of average straight-line velocity (VSL) and the percentage of sperm with characteristics that conform to those of sperm which bind to the zona pellucida of the human oocyte (%Z). During follow-up to 2001, there were 336 natural pregnancies. Only %Z, VSL and female age were independently significantly related to pregnancy rate by Cox regression analysis. Pregnancy rate was higher with above average %Z and VSL, indicating a continuous rather than a threshold relationship. The likelihood of pregnancy within 12 cycles can be evaluated for specific values of %Z, VSL and female age using the Cox regression model. The automated semen measures of sperm morphometry (%Z) and velocity (VSL) are related to pregnancy rates in subfertile couples and should assist clinicians in counselling subfertile patients about their prognosis for a natural pregnancy. Objective automated methods should replace the traditional manual assessments of semen quality.
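
    The reported analysis is a Cox proportional-hazards regression of time-to-pregnancy on %Z, VSL, and female age. A sketch with synthetic data using the lifelines package (all values and distributions are made up):

```python
# Sketch: Cox regression of pregnancy within follow-up cycles on
# %Z, VSL and female age.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "pct_z": rng.normal(25, 8, n),       # %Z sperm morphometry
    "vsl": rng.normal(45, 10, n),        # straight-line velocity
    "female_age": rng.normal(33, 4, n),
    "cycles": rng.integers(1, 13, n),    # follow-up duration in cycles
    "pregnant": rng.integers(0, 2, n),   # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="cycles", event_col="pregnant")
cph.print_summary()   # hazard ratios for each covariate
```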

  3. Medication incidents related to automated dose dispensing in community pharmacies and hospitals--a reporting system study.

    PubMed

    Cheung, Ka-Chun; van den Bemt, Patricia M L A; Bouvy, Marcel L; Wensing, Michel; De Smet, Peter A G M

    2014-01-01

    Automated dose dispensing (ADD) is being introduced in several countries, and the use of this technology is expected to increase as a growing number of elderly people need to manage their medication at home. ADD aims to improve medication safety and treatment adherence, but it may introduce new safety issues. This descriptive study provides insight into the nature and consequences of medication incidents related to ADD, as reported by healthcare professionals in community pharmacies and hospitals. Medication incidents submitted to the Dutch Central Medication incidents Registration (CMR) reporting system were selected and characterized independently by two researchers with respect to: the person discovering the incident, the phase of the medication process in which the incident occurred, the immediate cause of the incident, the nature of the incident from the healthcare provider's and the patient's perspectives, and the consequent harm to the patient. From January 2012 to February 2013 the CMR received 15,113 incidents: 3,685 (24.4%) from community pharmacies and 11,428 (75.6%) from hospitals. Overall, 1 in 50 reported incidents (268/15,113 = 1.8%) was related to ADD; proportionally more incidents were related to ADD in community pharmacies (227/3,685 = 6.2%) than in hospitals (41/11,428 = 0.4%). The immediate cause of an incident was often a change in the patient's medicine regimen or a relocation. Most reported incidents occurred in two phases: entering the prescription into the pharmacy information system and filling the ADD bag. In summary, a proportion of incidents was related to ADD and such incidents are reported regularly, especially by community pharmacies; most occurred while entering the prescription into the pharmacy information system or filling the ADD bag, and a change in the patient's medicine regimen or a relocation was the most common immediate cause.

  4. Discrete curvatures combined with machine learning for automated extraction of impact craters on 3D topographic meshes

    NASA Astrophysics Data System (ADS)

    Christoff, Nicole; Jorda, Laurent; Viseur, Sophie; Bouley, Sylvain; Manolova, Agata; Mari, Jean-Luc

    2017-04-01

    One of the challenges of Planetary Science is to estimate as accurately as possible the age of the geological units that crop out on the different space objects in the Solar system. This dating relies on counting the impact craters that cover the given outcrop surface. Using this technique, a chronology of the geological events can be determined and their formation and evolution processes understood. Over the last decade, several missions to asteroids and planets, such as Dawn to Vesta and Ceres, Messenger to Mercury, Mars Orbiter and Mars Express, produced a huge amount of images, from which equally huge DEMs have been generated. Planned missions, such as BepiColombo, will produce an even larger set of images. This rapidly growing volume of visible images and DEMs makes it increasingly tedious to identify craters manually, and the growing volume of acquired data will require more accurate planetary surface analysis. Because of the importance of the problem, many Crater Detection Algorithms (CDAs) have been developed and applied to either image data (2D) or DEMs (2.5D), and rarely to full 3D data such as 3D topographic meshes. We propose a new approach based on the detection of crater rims, which form a characteristic round shape. The proposed approach contains two main steps: 1) each vertex is labelled with the values of the mean and minimal curvatures; 2) this curvature map is injected into a Neural Network (NN) to automatically process the region of interest. As an NN approach, it requires a training set of manually detected craters to estimate the optimal weights of the NN. Once trained, the NN can be applied to the regions of interest for automatically extracting all the craters. As a result, it was observed that detecting forms using a two-dimensional map based on the computation of discrete differential estimators on the 3D mesh is more efficient than using a simple elevation map. This approach significantly reduces the
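
    The classification step, feeding per-vertex curvature values to a network that flags rim vertices, can be sketched as follows (toy curvature features standing in for the discrete estimators computed on the real 3-D mesh; the network size is an assumption):

```python
# Sketch: classify mesh vertices as crater-rim vs. non-rim from
# (mean curvature, minimal curvature) feature pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
# Assume rim vertices show a distinctive curvature signature.
X_rim = rng.normal([0.8, -0.5], 0.1, size=(200, 2))
X_flat = rng.normal([0.0, 0.0], 0.1, size=(200, 2))
X = np.vstack([X_rim, X_flat])
y = np.array([1] * 200 + [0] * 200)   # labels from manually detected craters

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=3)
clf.fit(X, y)
print(clf.score(X, y))
```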

  5. The extraction of human urinary kinin (substance z) and its relation to the plasma kinins

    PubMed Central

    Gaddum, J. H.; Horton, E. W.

    1959-01-01

    Human urinary kinin (substance Z) has been extracted by modifications of the methods previously described by Gomes (1955) and Jensen (1958). The separation of two oxytocic fractions from such extracts by paper pulp chromatography (Walaszek, 1957; Jensen, 1958) could not be confirmed. Substance Z could not be distinguished from kallidin, bradykinin or glass-activated kinin by parallel quantitative assays, thus confirming that these four substances are very closely related. PMID:13651588

  6. Characteristics of prickly lettuce seed oil in relation to methods of extraction.

    PubMed

    Ramadan, A A

    1976-01-01

    Samples of seed oil of prickly lettuce (Lactuca Sacriola oleifera), obtained by pressing or by extraction with acetone, ethyl ether, petroleum ether or carbon tetrachloride, were analysed for the following parameters: viscosity, saponification number, iodine number, thiocyanogen value, unsaponifiable matter, free fatty acids, peroxide number and fatty acid composition. Several of the parameters varied considerably in relation to the method of production (pressing or solvent extraction) and to the solvent used. An attempt is made to interpret these relationships.

  7. Automated measurement of parameters related to the deformities of lower limbs based on x-rays images.

    PubMed

    Wojciechowski, Wadim; Molka, Adrian; Tabor, Zbisław

    2016-03-01

    Measurement of the deformation of the lower limbs in current standard full-limb X-ray images presents significant challenges to radiologists and orthopedists. The precision of these measurements is degraded by inexact positioning of the leg during image acquisition, problems with selecting reliable anatomical landmarks in projective X-ray images, and inevitable errors of manual measurement. The influence of the random errors arising from the last two factors can be reduced if an automated measurement method is used instead of a manual one. In this paper, a framework for automated measurement of various metric and angular quantities used to describe lower-extremity deformation in full-limb frontal X-ray images is described. The results of automated measurements are compared with manual measurements. These results demonstrate that an automated method can be a valuable alternative to manual measurement.

  8. CD-REST: a system for extracting chemical-induced disease relation in literature

    PubMed Central

    Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua

    2016-01-01

    Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug–disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST request. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html PMID:27016700

  9. Automated Learning of Subcellular Variation among Punctate Protein Patterns and a Generative Model of Their Relation to Microtubules.

    PubMed

    Johnson, Gregory R; Li, Jieyue; Shariff, Aabid; Rohde, Gustavo K; Murphy, Robert F

    2015-12-01

    Characterizing the spatial distribution of proteins directly from microscopy images is a difficult problem with numerous applications in cell biology (e.g. identifying motor-related proteins) and clinical research (e.g. identification of cancer biomarkers). Here we describe the design of a system that provides automated analysis of punctate protein patterns in microscope images, including quantification of their relationships to microtubules. We constructed the system using confocal immunofluorescence microscopy images from the Human Protein Atlas project for 11 punctate proteins in three cultured cell lines. These proteins have previously been characterized as being primarily located in punctate structures, but their images had all been annotated by visual examination as being simply "vesicular". We were able to show that these patterns could be distinguished from each other with high accuracy, and we were able to assign to one of these subclasses hundreds of proteins whose subcellular localization had not previously been well defined. In addition to providing these novel annotations, we built a generative approach to modeling of punctate distributions that captures the essential characteristics of the distinct patterns. Such models are expected to be valuable for representing and summarizing each pattern and for constructing systems biology simulations of cell behaviors.

  10. Semi-automated relative quantification of cell culture contamination with mycoplasma by Photoshop-based image analysis on immunofluorescence preparations.

    PubMed

    Kumar, Ashok; Yerneni, Lakshmana K

    2009-01-01

    Mycoplasma contamination in cell culture is a serious setback for the cell culturist. Experiments undertaken using contaminated cell cultures are known to yield unreliable or false results due to various morphological, biochemical and genetic effects. Earlier surveys revealed incidences of mycoplasma contamination in cell cultures ranging from 15 to 80%. Out of a vast array of methods for detecting mycoplasma in cell culture, the cytological methods directly demonstrate the contaminating organism present in association with the cultured cells. In this investigation, we report the adoption of a cytological immunofluorescence assay (IFA) to obtain a semi-automated relative quantification of contamination, employing user-friendly Photoshop-based image analysis. The study, performed on 77 cell cultures randomly collected from various laboratories, revealed mycoplasma contamination in 18 cell cultures simultaneously by the IFA and Hoechst DNA fluorochrome staining methods. Photoshop-based image analysis of the IFA-stained slides proved valuable as a sensitive tool, providing a quantitative assessment of the extent of contamination both per se and relative to the cellularity of the cell cultures. The technique could be useful in estimating the efficacy of anti-mycoplasma agents during decontamination measures.

  11. Automated Learning of Subcellular Variation among Punctate Protein Patterns and a Generative Model of Their Relation to Microtubules

    PubMed Central

    Johnson, Gregory R.; Li, Jieyue; Shariff, Aabid; Rohde, Gustavo K.; Murphy, Robert F.

    2015-01-01

    Characterizing the spatial distribution of proteins directly from microscopy images is a difficult problem with numerous applications in cell biology (e.g. identifying motor-related proteins) and clinical research (e.g. identification of cancer biomarkers). Here we describe the design of a system that provides automated analysis of punctate protein patterns in microscope images, including quantification of their relationships to microtubules. We constructed the system using confocal immunofluorescence microscopy images from the Human Protein Atlas project for 11 punctate proteins in three cultured cell lines. These proteins have previously been characterized as being primarily located in punctate structures, but their images had all been annotated by visual examination as being simply “vesicular”. We were able to show that these patterns could be distinguished from each other with high accuracy, and we were able to assign to one of these subclasses hundreds of proteins whose subcellular localization had not previously been well defined. In addition to providing these novel annotations, we built a generative approach to modeling of punctate distributions that captures the essential characteristics of the distinct patterns. Such models are expected to be valuable for representing and summarizing each pattern and for constructing systems biology simulations of cell behaviors. PMID:26624011

  12. Toward Automating HIV Identification: Machine Learning for Rapid Identification of HIV-Related Social Media Data.

    PubMed

    Young, Sean D; Yu, Wenchao; Wang, Wei

    2017-02-01

    "Social big data" from technologies such as social media, wearable devices, and online searches continue to grow and can be used as tools for HIV research. Although researchers can uncover patterns and insights associated with HIV trends and transmission, the review process is time consuming and resource intensive. Machine learning methods derived from computer science might be used to assist HIV domain experts by learning how to rapidly and accurately identify patterns associated with HIV from a large set of social data. Using an existing social media data set that was associated with HIV and coded by an HIV domain expert, we tested whether 4 commonly used machine learning methods could learn the patterns associated with HIV risk behavior. We used the 10-fold cross-validation method to examine the speed and accuracy of these models in applying that knowledge to detect HIV content in social media data. Logistic regression and random forest resulted in the highest accuracy in detecting HIV-related social data (85.3%), whereas the Ridge Regression Classifier resulted in the lowest accuracy. Logistic regression yielded the fastest processing time (16.98 seconds). Machine learning can enable social big data to become a new and important tool in HIV research, helping to create a new field of "digital HIV epidemiology." If a domain expert can identify patterns in social data associated with HIV risk or HIV transmission, machine learning models could quickly and accurately learn those associations and identify potential HIV patterns in large social data sets.

  13. A fully automated system with on-line micro solid-phase extraction combined with capillary liquid chromatography-tandem mass spectrometry for high throughput analysis of microcystins and nodularin-R in tap water and lake water.

    PubMed

    Shan, Yuanhong; Shi, Xianzhe; Dou, Abo; Zou, Cunjie; He, Hongbing; Yang, Qin; Zhao, Sumin; Lu, Xin; Xu, Guowang

    2011-04-01

    Microcystins and nodularins are cyclic peptide hepatotoxins and tumour promoters from cyanobacteria. The present study describes the development, validation and practical application of a fully automated analytical method based on on-line micro solid-phase extraction-capillary liquid chromatography-tandem mass spectrometry for the simultaneous determination of seven microcystins and nodularin-R in tap water and lake water. Aliquots of just 100 μL of water sample are sufficient for the detection and quantification of all eight toxins. Selected reaction monitoring was used to obtain the highest sensitivity. Good linear calibrations were obtained for the microcystins (50-2000 ng/L) and nodularin-R (25-1000 ng/L) in spiked tap water and lake water samples. Excellent interday and intraday repeatability was achieved for the eight toxins, with relative standard deviations below 15.7% at three different concentrations. Acceptable recoveries were achieved at the three concentrations in both the tap water and lake water matrices, and no significant matrix effect was found except for microcystin-RR. The limits of detection (signal-to-noise ratio = 3) of the toxins were below 56.6 ng/L, which is far below the 1 μg/L provisional guideline value defined by the World Health Organization for microcystin-LR. Finally, the method was successfully applied to lake water samples from Tai Lake and proved to be useful for water quality monitoring.
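
    The figures of merit quoted above follow from ordinary calibration arithmetic: fit peak area against spiked concentration, then convert the baseline noise to the concentration giving a signal-to-noise ratio of 3. The numbers below are invented for illustration, not the paper's data:

      # Linear calibration and LOD at S/N = 3; all values are illustrative.
      import numpy as np

      conc = np.array([50, 250, 500, 1000, 2000])            # ng/L, spiked
      area = np.array([1.1e3, 5.4e3, 1.1e4, 2.2e4, 4.4e4])   # peak areas
      slope, intercept = np.polyfit(conc, area, 1)

      noise = 20.0                  # baseline noise, same units as peak area
      lod = 3 * noise / slope       # concentration giving S/N = 3
      print(f"LOD ~ {lod:.1f} ng/L")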

  14. Automation of reverse engineering process in aircraft modeling and related optimization problems

    NASA Technical Reports Server (NTRS)

    Li, W.; Swetits, J.

    1994-01-01

    During 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year went to finding an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress was made during this phase of research and computation time was reduced from 30 min to 2 min for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, in which there is only one control parameter for the fitting process: the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated that substantially improved fitting of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to combine Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for
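
    Dierckx's smoothing-spline fitting survives today in SciPy's FITPACK bindings, where the smoothing factor s plays the role of the single error-tolerance control parameter mentioned above. A sketch on synthetic stand-ins for digitized surface points:

      # Dierckx-style surface fit via SciPy's FITPACK wrapper; s is the
      # error-tolerance control parameter. Data points are synthetic.
      import numpy as np
      from scipy import interpolate

      rng = np.random.default_rng(1)
      x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
      z = np.sin(2 * x) * np.cos(3 * y) + rng.normal(0, 0.01, 200)

      tck = interpolate.bisplrep(x, y, z, s=0.05)
      print(interpolate.bisplev(0.5, 0.5, tck))   # evaluate fitted surface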

  15. Automated determination of mercury and arsenic in extracts from ancient papers by integration of solid-phase extraction and energy dispersive X-ray fluorescence detection using a lab-on-valve system.

    PubMed

    Alcalde-Molina, M; Ruiz-Jiménez, J; Luque de Castro, M D

    2009-10-12

    A method to detect the presence and determine the concentration of mercury and arsenic in extracts of ancient papers, based on an approach that integrates solid-phase preconcentration of the target analytes with their detection, is proposed. Automation of the overall process (viz. swelling and conditioning of the sorbent, sample introduction for analyte retention, drying of the sorbent by air for proper measurement, elution, and conditioning of the sorbent prior to introduction of the next sample) is achieved by on-line connection of a lab-on-valve system to a laboratory-made methacrylate cell for concentration and measurement. After optimization of the influential variables, characterization of the method provided LODs and LOQs of 0.006 and 0.02 μg g(-1) of paper, respectively, for Hg, and 0.007 and 0.027 μg g(-1), respectively, for As, with repeatability of 6.37% for Hg and 5.62% for As, and reproducibility of 8.13% and 7.46% for Hg and As, respectively.

  16. UHPLC/HRMS Analysis of African Mango (Irvingia gabonensis) Seeds, Extract and Related Dietary Supplements

    PubMed Central

    Sun, Jianghao; Chen, Pei

    2012-01-01

    Dietary supplements based on an extract from Irvingia gabonensis (African Mango, AM) seeds are among the popular herbal weight-loss dietary supplements on the US market. The extract is believed to be a natural and healthy way to lose weight and improve overall health. However, the chemical composition of African mango-based dietary supplements (AMDS) has never been reported. In this study, the chemical constituents of African mango seeds, African mango seed extract (AMSE), and different kinds of commercially available African mango-based dietary supplements (AMDS) were investigated using an ultra-high-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS) method. Ellagic acid and mono-, di-, and tri-O-methyl-ellagic acids and their glycosides were found to be the major components of African mango seeds. These compounds may be used for quality control of African mango extract and related dietary supplements. PMID:22880691

  17. Exploiting syntactic and semantics information for chemical–disease relation extraction

    PubMed Central

    Zhou, Huiwei; Deng, Huijie; Chen, Long; Yang, Yunlong; Jia, Chen; Huang, Degen

    2016-01-01

    Identifying chemical–disease relations (CDR) from biomedical literature could improve chemical safety and toxicity studies. This article proposes a novel method for CDR extraction that exploits both syntactic and semantic information. The proposed method consists of a feature-based model, a tree kernel-based model and a neural network model. The feature-based model exploits lexical features, the tree kernel-based model captures syntactic structure features, and the neural network model generates semantic representations. The motivation of our method is to fully exploit the complementary strengths of the three models so as to capture diverse information for CDR extraction. Experiments on the BioCreative V CDR dataset show that all three models are effective for CDR extraction and that their combination further improves extraction performance. Database URL: http://www.biocreative.org/resources/corpora/biocreative-v-cdr-corpus/. PMID:27081156
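
    The article combines three model outputs per candidate pair. One simple combination, assumed here only for illustration (the paper's actual strategy may weight or stack the models differently), is an unweighted average of confidence scores:

      # Hypothetical score fusion across the three models named above.
      def combine_scores(feature_score, tree_kernel_score, neural_score,
                         threshold=0.5):
          mean = (feature_score + tree_kernel_score + neural_score) / 3.0
          return mean >= threshold

      print(combine_scores(0.8, 0.4, 0.7))   # -> True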

  18. Automatic extraction of semantic relations between medical entities: a rule based approach.

    PubMed

    Ben Abacha, Asma; Zweigenbaum, Pierre

    2011-10-06

    Information extraction is a complex task that is necessary for developing high-precision information retrieval tools. In this paper, we present the MeTAE (Medical Texts Annotation and Exploration) platform. MeTAE allows (i) extracting and annotating medical entities and relationships from medical texts and (ii) semantically exploring the produced RDF annotations. Our annotation approach relies on linguistic patterns and domain knowledge and consists of two steps: (i) recognition of medical entities and (ii) identification of the correct semantic relation between each pair of entities. The first step is achieved by an enhanced use of MetaMap, which improves the precision obtained by MetaMap by 19.59% in our evaluation. The second step relies on linguistic patterns that are built semi-automatically from a corpus selected according to semantic criteria. We evaluate our system's ability to identify medical entities of 16 types. We also evaluate the extraction of treatment relations between a treatment (e.g. medication) and a problem (e.g. disease), obtaining 75.72% precision and 60.46% recall. According to our experiments, using an external sentence segmenter and noun phrase chunker may improve the precision of MetaMap-based medical entity recognition. Our pattern-based relation extraction method obtains good precision and recall with respect to related work. A more precise comparison with related approaches remains difficult, however, given the differences in corpora and in the exact nature of the extracted relations. The selection of MEDLINE articles through queries related to known drug-disease pairs enabled us to obtain a more focused corpus of relevant examples of treatment relations than a general MEDLINE query would provide.
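
    To make the second step concrete, a toy version of a treatment-relation pattern over entity-tagged text is sketched below; MeTAE's patterns are built semi-automatically from corpora and are far richer than this invented example:

      # Toy linguistic pattern: TREATMENT ... treats ... PROBLEM.
      import re

      TREATMENT_PATTERN = re.compile(
          r"<TREATMENT>(?P<t>.+?)</TREATMENT>\s+"
          r"(?:treats|is used to treat|relieves)\s+"
          r"<PROBLEM>(?P<p>.+?)</PROBLEM>")

      sentence = ("<TREATMENT>metformin</TREATMENT> is used to treat "
                  "<PROBLEM>type 2 diabetes</PROBLEM>")
      m = TREATMENT_PATTERN.search(sentence)
      if m:
          print("treats(%s, %s)" % (m.group("t"), m.group("p")))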

  19. First experience with a fully automated extraction system for simultaneous on-line direct tandem mass spectrometric analysis of amino acids and (acyl-)carnitines in a newborn screening setting.

    PubMed

    Fingerhut, Ralph; Silva Polanco, Maria Lucia; Silva Arevalo, Gabriel De Jesus; Swiderska, Magdalena A

    2014-04-30

    For Newborn Screening (NBS) programs all over the world, whole blood dried on filter paper, also referred to as dried blood spots (DBS), has been the standard specimen for decades. In recent years DBS have attracted the attention of pharmaceutical companies, mostly owing to the low volume of collected sample and the simplified, and therefore more cost-efficient, transportation requirements. However, the classical NBS workflow did not fully meet the needs of these studies, especially with respect to high-throughput unassisted sample processing for tandem mass spectrometric (MS/MS) analysis. Automated on-line extraction systems for direct analysis have already been tested and proved suitable for these pharmaceutical applications. The suitability of the automated CAMAG DBS-MS 500 interface for simultaneous detection of amino acids and (acyl-)carnitines was tested together with an Acquity TQD tandem mass spectrometer from Waters and MassChrom stable isotope labelled internal standards from Chromsystems. No chromatographic sample treatment was applied; instead, the extract was directly injected into the MS/MS instrument. The feasibility of the instrumental setup for routine newborn screening was tested on original samples from previously diagnosed patients. The performance of the automated extraction technique and its application to preliminary quantitative screening for amino acids and (acyl-)carnitines in NBS showed very promising results. Several samples from patients, each diagnosed with one of four different inborn errors of metabolism (IEM), were tested, and the correlation with the conventional punch-and-elute approach was very good. Although the presented method still needs further optimization, our study clearly shows the possibility of using direct on-line analysis in the NBS setting. Our report on direct on-line analysis of newborn samples is a first approach in the development of a fully automated screening method for NBS analysis. With regard

  20. Study on electrical current variations in electromembrane extraction process: Relation between extraction recovery and magnitude of electrical current.

    PubMed

    Rahmani, Turaj; Rahimi, Atyeh; Nojavan, Saeed

    2016-01-15

    This contribution presents an experimental approach to improving the analytical performance of the electromembrane extraction (EME) procedure, based on scrutiny of the electrical current pattern under different extraction conditions: different organic solvents as the supported liquid membrane, electrical potentials, pH values of the donor and acceptor phases, extraction times, temperatures, stirring rates, hollow fiber lengths, and the addition of salts or organic solvents to the sample matrix. In this study, four basic drugs with different polarities were extracted under different conditions, and the corresponding electrical current patterns were compared against extraction recoveries. The extraction process was demonstrated by EME-HPLC analyses of the selected basic drugs. Comparing the obtained extraction recoveries with the electrical current patterns, most cases exhibited minimum recovery and repeatability at the highest investigated magnitude of electrical current. It was further found that identical current patterns are associated with reproducible extraction efficiencies; in other words, the current pattern must be reproduced for an extraction to be successful. The results showed completely different electrical currents under different extraction conditions, indicating that all of the variable parameters contribute to the electrical current pattern. Finally, the current patterns of extractions from wastewater, plasma and urine samples were examined. The results indicated an increase in the electrical current when extracting from complex matrices, which was seen to decrease the extraction efficiency. Copyright © 2015 Elsevier B.V. All rights reserved.
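
    One way to make "identical current patterns" quantitative (an assumption on our part; the study compared the recorded traces directly) is to correlate the current-time profiles of two runs:

      # Pearson correlation of two synthetic EME current-time traces.
      import numpy as np

      t = np.linspace(0, 15, 300)          # extraction time, min
      run1 = 40 * np.exp(-t / 5) + np.random.normal(0, 0.5, t.size)
      run2 = 40 * np.exp(-t / 5) + np.random.normal(0, 0.5, t.size)

      r = np.corrcoef(run1, run2)[0, 1]
      print(f"current-pattern similarity r = {r:.3f}")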

  1. Semisupervised Learning Based Disease-Symptom and Symptom-Therapeutic Substance Relation Extraction from Biomedical Literature

    PubMed Central

    Li, Yuxia

    2016-01-01

    With the rapid growth of biomedical literature, a large amount of knowledge about diseases, symptoms, and therapeutic substances hidden in the literature can be used for drug discovery and disease therapy. In this paper, we present a method of constructing two models that extract disease-symptom and symptom-therapeutic substance relations, respectively, from biomedical texts. The former judges whether a disease causes a certain physiological phenomenon, while the latter determines whether a substance relieves or eliminates a certain physiological phenomenon. These two kinds of relations can be further combined to extract relations between diseases and therapeutic substances. In our method, two training sets, one for each relation type, are first manually annotated, and then two semisupervised learning algorithms, Co-Training and Tri-Training, are applied to exploit the unlabeled data to boost relation extraction performance. Experimental results show that exploiting the unlabeled data with both the Co-Training and Tri-Training algorithms effectively enhances performance. PMID:27822473
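
    A minimal co-training loop, shown here under invented two-view data (real systems, including the one above, use distinct informative feature views and more careful confidence-based example selection):

      # Each view's model pseudo-labels the unlabeled example it is most
      # confident about, growing the shared labeled pool each round.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X1, X2 = rng.normal(size=(200, 5)), rng.normal(size=(200, 5))
      y = rng.integers(0, 2, 200)           # gold labels (synthetic)
      labeled = np.arange(20)               # small labeled seed set
      unlabeled = np.arange(20, 200)

      for _ in range(5):                    # a few co-training rounds
          m1 = LogisticRegression().fit(X1[labeled], y[labeled])
          m2 = LogisticRegression().fit(X2[labeled], y[labeled])
          for model, X in ((m1, X1), (m2, X2)):
              conf = np.abs(model.decision_function(X[unlabeled]))
              pick = unlabeled[int(conf.argmax())]
              y[pick] = model.predict(X[pick:pick + 1])[0]   # pseudo-label
              labeled = np.append(labeled, pick)
              unlabeled = unlabeled[unlabeled != pick]
      print(len(labeled), "labeled examples after co-training")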

  2. Microwave signal extraction from femtosecond mode-locked lasers with attosecond relative timing drift.

    PubMed

    Kim, Jungwon; Kärtner, Franz X

    2010-06-15

    We present a feedback-control method for suppression of excess phase noise in the optical-to-electronic conversion process involved in the extraction of microwave signals from femtosecond mode-locked lasers. A delay-locked loop based on drift-free phase detection with a differentially biased Sagnac loop is employed to eliminate low-frequency (e.g., <1 kHz) excess phase noise and drift in the regenerated microwave signals. A 10 GHz microwave signal is extracted from a 200 MHz repetition rate mode-locked laser with a relative rms timing jitter of 2.4 fs (integrated from 1 mHz to 1 MHz) and a relative rms timing drift of 0.84 fs (integrated over 8 h with 1 Hz bandwidth) between the optical pulse train and the extracted microwave signal.
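
    Jitter figures like those quoted above come from integrating phase noise over the stated bandwidth and referring it to the carrier; under one common convention, delta_t = sqrt(2 * integral of L(f) df) / (2 * pi * f_c). The noise profile below is invented for illustration:

      # Converting integrated SSB phase noise to rms timing jitter.
      import numpy as np

      f_c = 10e9                            # carrier frequency, Hz
      f = np.logspace(-3, 6, 1000)          # offsets: 1 mHz to 1 MHz
      L = 1e-13 / f                         # toy SSB phase noise, 1/Hz
      phase_var = 2 * np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(f))  # rad^2
      jitter = np.sqrt(phase_var) / (2 * np.pi * f_c)              # seconds
      print(f"rms timing jitter ~ {jitter * 1e15:.3f} fs")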