Sample records for automated extraction method

  1. [DNA extraction from bones and teeth using AutoMate Express forensic DNA extraction system].

    PubMed

    Gao, Lin-Lin; Xu, Nian-Lai; Xie, Wei; Ding, Shao-Cheng; Wang, Dong-Jing; Ma, Li-Qin; Li, You-Ying

    2013-04-01

    To explore a new method for extracting DNA from bones and teeth automatically. Samples of 33 bones and 15 teeth were prepared by the freeze-mill method and the manual grinding method, respectively. DNA was extracted from the triturated samples and quantified with the AutoMate Express forensic DNA extraction system. DNA extraction from bones and teeth was completed in 3 hours using the AutoMate Express system. There was no statistical difference between the two preparation methods in the DNA concentration obtained from bones. Both bones and teeth gave good STR typing results with the freeze-mill method, and the DNA concentration obtained from teeth was higher than that obtained by the manual method. The AutoMate Express forensic DNA extraction system is a new way to extract DNA from bones and teeth that can be applied in forensic practice.

  2. Automated extraction for the analysis of 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid (THCCOOH) in urine using a six-head probe Hamilton Microlab 2200 system and gas chromatography-mass spectrometry.

    PubMed

    Whitter, P D; Cary, P L; Leaton, J I; Johnson, J E

    1999-01-01

    An automated extraction scheme for the analysis of 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid using the Hamilton Microlab 2200, modified for gravity-flow solid-phase extraction, has been evaluated. The Hamilton was fitted with a six-head probe, a modular valve positioner, and a peristaltic pump. The automated method significantly increased sample throughput, improved assay consistency, and reduced the time spent performing the extraction. Extraction recovery for the automated method was > 90%. The limit of detection, limit of quantitation, and upper limit of linearity were equivalent to those of the manual method: 1.5, 3.0, and 300 ng/mL, respectively. Precision at the 15-ng/mL cut-off was as follows: mean = 14.4, standard deviation = 0.5, coefficient of variation = 3.5%. Comparison of 38 patient samples, extracted by the manual and automated extraction methods, demonstrated the following correlation statistics: r = 0.991, slope = 1.029, and y-intercept = -2.895. Carryover was < 0.3% at 1000 ng/mL. Aliquoting/extraction time for the automated method (48 urine samples) was 50 min, whereas the manual procedure required approximately 2.5 h. The automated aliquoting/extraction method on the Hamilton Microlab 2200 and its use in forensic applications are reviewed.
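
    The method-comparison figures quoted above (r, slope, y-intercept, and the CV at the cut-off) are standard calculations. The sketch below reproduces them with numpy/scipy on hypothetical paired results; the actual 38-sample dataset is not given in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical paired results (ng/mL) from manual and automated extraction
manual = np.array([12.0, 25.4, 48.9, 103.2, 151.7, 297.5])
automated = np.array([10.1, 23.8, 47.5, 104.0, 153.9, 301.2])

# Least-squares comparison line: automated = slope * manual + intercept
slope, intercept, r, p, se = stats.linregress(manual, automated)
print(f"r = {r:.3f}, slope = {slope:.3f}, y-intercept = {intercept:.3f}")

# Precision at the cut-off: CV of replicate 15-ng/mL quality controls
controls = np.array([14.1, 14.9, 14.0, 14.6, 14.4])
cv = controls.std(ddof=1) / controls.mean() * 100
print(f"mean = {controls.mean():.1f}, SD = {controls.std(ddof=1):.1f}, CV = {cv:.1f}%")
```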

  3. A COMPARISON OF AUTOMATED AND TRADITIONAL METHODS FOR THE EXTRACTION OF ARSENICALS FROM FISH

    EPA Science Inventory

    An automated extractor employing accelerated solvent extraction (ASE) has been compared with a traditional sonication method of extraction for the extraction of arsenicals from fish tissue. Four different species of fish and a standard reference material, DORM-2, were subjected t...

  4. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    PubMed

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human error and assuring high reproducibility. The step towards automation, however, is not easy: the automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows the samples to be handled automatically with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO® 150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch® Forensic DNA Purification Kit (Invitrogen), the PrepFiler™ Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™ 96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Extracting DNA from FFPE Tissue Biospecimens Using User-Friendly Automated Technology: Is There an Impact on Yield or Quality?

    PubMed

    Mathieson, William; Guljar, Nafia; Sanchez, Ignacio; Sroya, Manveer; Thomas, Gerry A

    2018-05-03

    DNA extracted from formalin-fixed, paraffin-embedded (FFPE) tissue blocks is amenable to analytical techniques, including sequencing. DNA extraction protocols are typically long and complex, often involving an overnight proteinase K digest. Automated platforms that shorten and simplify the process are therefore an attractive proposition for users wanting a faster turn-around or to process large numbers of biospecimens. It is, however, unclear whether automated extraction systems return poorer DNA yields or quality than manual extractions performed by experienced technicians. We extracted DNA from 42 FFPE clinical tissue biospecimens using the QiaCube (Qiagen) and ExScale (ExScale Biospecimen Solutions) automated platforms, comparing DNA yields and integrities with those from manual extractions. The QIAamp DNA FFPE Spin Column Kit was used for manual and QiaCube DNA extractions and the ExScale extractions were performed using two of the manufacturer's magnetic bead kits: one extracting DNA only and the other simultaneously extracting DNA and RNA. In all automated extraction methods, DNA yields and integrities (assayed using DNA Integrity Numbers from a 4200 TapeStation and the qPCR-based Illumina FFPE QC Assay) were poorer than in the manual method, with the QiaCube system performing better than the ExScale system. However, ExScale was fastest, offered the highest reproducibility when extracting DNA only, and required the least intervention or technician experience. Thus, the extraction methods have different strengths and weaknesses, would appeal to different users with different requirements, and therefore, we cannot recommend one method over another.

  6. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. It starts from the spectral data and produces informative and non-redundant features, facilitating subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with applications in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a valid new general-purpose feature extraction method for various tasks in spectral data analysis.
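
    The abstract does not spell out the network, but one established way to obtain analytic, non-iterative layer-wise training is an extreme-learning-machine-style autoencoder stack. The sketch below (numpy only; all sizes and data are hypothetical) illustrates that general idea, not the authors' precise algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def analytic_autoencoder_layer(X, n_hidden, reg=1e-3):
    """One non-iteratively trained layer: a random projection followed by a
    closed-form ridge-regression decoder (extreme-learning-machine style).
    A sketch of the idea, not the paper's exact algorithm."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    H = np.tanh(X @ W)                              # hidden activations
    # Ridge solution reconstructing X from H; no gradient descent involved
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return np.tanh(X @ beta.T)                      # encoded features

# Hypothetical flux matrix: 200 spectra, 1000 wavelength bins
spectra = rng.standard_normal((200, 1000))
features = spectra
for width in (256, 64, 16):                         # deeper = more abstract
    features = analytic_autoencoder_layer(features, width)
print(features.shape)                               # -> (200, 16)
```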

  7. [DNA Extraction from Old Bones by AutoMate Express™ System].

    PubMed

    Li, B; Lü, Z

    2017-08-01

    To establish a method for extracting DNA from old bones with the AutoMate Express™ system. Bones were ground into powder with a freeze-mill. After extraction by AutoMate Express™, DNA was amplified and genotyped with the Identifiler® Plus and MiniFiler™ kits. DNA was extracted within 3 hours by the AutoMate Express™ system from 10 old bone samples that had been kept in different environments, with postmortem intervals of 10 to 20 years. Complete STR typing results were obtained from 8 samples. The AutoMate Express™ system can quickly and efficiently extract DNA from old bones and can be applied in forensic practice. Copyright© by the Editorial Department of Journal of Forensic Medicine

  8. Automated extraction of pleural effusion in three-dimensional thoracic CT images

    NASA Astrophysics Data System (ADS)

    Kido, Shoji; Tsunomori, Akinori

    2009-02-01

    It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level threshold cannot separate pleural effusion from the thoracic wall or mediastinum correctly, because the density of pleural effusion in CT images is similar to that of the thoracic wall and mediastinum. We have therefore developed an automated extraction method for pleural effusion that extracts the lung area together with the pleural effusion. Our method uses a lung template obtained from a normal lung to segment lungs with pleural effusions. The registration process consisted of two steps. The first step was a global matching between the normal and abnormal lungs of organs such as the bronchi, bones (ribs, sternum and vertebrae) and upper surfaces of the livers, which were extracted using a region-growing algorithm. The second step was a local matching between the normal and abnormal lungs, which were deformed by the parameters obtained from the global matching. Finally, we segmented the lung with pleural effusion using the template deformed by the two sets of parameters obtained from the global and local matching. We compared our method with a conventional extraction method using a gray-level threshold and with two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those obtained with the other methods. This automated extraction method, which extracts the lung area together with the pleural effusion, is promising for the diagnosis of pulmonary diseases by providing a quantitative volume of accumulating pleural effusion.

  9. Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis

    NASA Astrophysics Data System (ADS)

    Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku

    2014-03-01

    In this paper, we propose an automated biliary tract extraction method for abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has a linear structure, and its intensity in CT volumes is low. For IHBD candidate region extraction, we use a dark linear structure enhancement (DLSE) filter based on a local intensity structure analysis using the eigenvalues of the Hessian matrix. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
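
    The heart of step (2) is an eigenvalue analysis of the Hessian at each voxel to enhance dark tubular structures. The sketch below shows that generic idea with scipy; the response function and parameters are illustrative stand-ins for the paper's DLSE filter, not its exact formulation.

```python
import numpy as np
from scipy import ndimage

def dark_line_response(vol, sigma=1.5):
    """Generic Hessian-eigenvalue filter for dark tubular structures: inside
    a dark tube, two eigenvalues are strongly positive (dark-to-bright
    curvature across the duct) and one is near zero (along it). A sketch of
    the idea behind a DLSE-type filter, not the authors' exact formulation."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1                   # second derivative along i and j
            H[..., i, j] = ndimage.gaussian_filter(vol, sigma, order=order)
    l1, l2, l3 = np.moveaxis(np.linalg.eigvalsh(H), -1, 0)  # ascending
    line = np.sqrt(np.clip(l2 * l3, 0, None)) * np.exp(-np.abs(l1))
    return np.where((l2 > 0) & (l3 > 0), line, 0.0)

vol = np.random.rand(32, 32, 32)            # stand-in for a CT sub-volume
print(dark_line_response(vol).max())
```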

  10. Optimization-based method for automated road network extraction

    DOT National Transportation Integrated Search

    2001-09-18

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to au...

  11. Automated extraction of DNA from biological stains on fabric from crime cases. A comparison of a manual and three automated methods.

    PubMed

    Stangegaard, Michael; Hjort, Benjamin B; Hansen, Thomas N; Hoflund, Anders; Mogensen, Helle S; Hansen, Anders J; Morling, Niels

    2013-05-01

    The presence of PCR inhibitors in extracted DNA may interfere with the subsequent quantification and short tandem repeat (STR) reactions used in forensic genetic DNA typing. DNA extraction from fabric for forensic genetic purposes may be challenging due to the occasional presence of PCR inhibitors that may be co-extracted with the DNA. Using 120 forensic trace evidence samples consisting of various types of fabric, we compared three automated DNA extraction methods based on magnetic beads (the PrepFiler Express Forensic DNA Extraction Kit on an AutoMate Express, and the QIAsymphony DNA Investigator kit on a QIAsymphony SP either with the sample pre-treatment recommended by Qiagen or with an in-house optimized sample pre-treatment) and one manual method (Chelex), with the aim of reducing the amount of PCR inhibitors in the DNA extracts and increasing the proportion of reportable STR profiles. A total of 480 samples were processed. The highest DNA recovery was obtained with the PrepFiler Express kit on an AutoMate Express, while the lowest DNA recovery was obtained using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen. Extraction using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen resulted in the lowest percentage of PCR inhibition (0%), while extraction using manual Chelex resulted in the highest percentage of PCR inhibition (51%). The largest number of reportable STR profiles was obtained with DNA from samples extracted with the PrepFiler Express kit (75%), while the lowest number was obtained with DNA from samples extracted using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen (41%). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. The BUME method: a new rapid and simple chloroform-free method for total lipid extraction of animal tissue

    NASA Astrophysics Data System (ADS)

    Löfgren, Lars; Forsberg, Gun-Britt; Ståhlman, Marcus

    2016-06-01

    In this study we present a simple and rapid method for tissue lipid extraction. Snap-frozen tissue (15-150 mg) is collected in 2 ml homogenization tubes. 500 μl of BUME mixture (butanol:methanol [3:1]) is added, and automated homogenization of up to 24 frozen samples at a time is performed in less than 60 seconds, followed by a 5-minute single-phase extraction. After the addition of 500 μl heptane:ethyl acetate (3:1) and 500 μl 1% acetic acid, a 5-minute two-phase extraction is performed. Lipids are recovered from the upper phase by automated liquid handling using a standard 96-tip robot. A second two-phase extraction is performed using 500 μl heptane:ethyl acetate (3:1). Validation of the method showed that the extraction recoveries for the investigated lipids, which included sterols, glycerolipids, glycerophospholipids and sphingolipids, were similar to or better than those of the Folch method. We also applied the method to lipid extraction of liver and heart and compared the lipid species profiles with profiles generated after Folch and MTBE extraction. We conclude that the BUME method is superior to the Folch method in terms of simplicity, throughput, automation, solvent consumption, economy, health and environment, while delivering lipid recoveries fully comparable to or better than the Folch method.

  13. An automated method for the analysis of phenolic acids in plasma based on ion-pairing micro-extraction coupled on-line to gas chromatography/mass spectrometry with in-liner derivatisation.

    PubMed

    Peters, Sonja; Kaal, Erwin; Horsting, Iwan; Janssen, Hans-Gerd

    2012-02-24

    A new method is presented for the analysis of phenolic acids in plasma based on ion-pairing 'micro-extraction in packed sorbent' (MEPS) coupled on-line to in-liner derivatisation gas chromatography-mass spectrometry (GC-MS). The ion-pairing reagent served a dual purpose: it was used both to improve extraction yields of the more polar analytes and as the methyl donor in the automated in-liner derivatisation method. In this way, a fully automated procedure for the extraction, derivatisation and injection of a wide range of phenolic acids in plasma samples has been obtained. An extensive optimisation of the extraction and derivatisation procedure has been performed. The entire method showed excellent repeatability (under 10%) and linearity (0.99 or better) for all phenolic acids. The limits of detection of the optimised method were 10 ng/mL or lower for the majority of phenolic acids, with three phenolic acids having less favourable detection limits of around 100 ng/mL. Finally, the newly developed method was applied in a human intervention trial in which the bioavailability of polyphenols from wine and tea was studied. Forty plasma samples could be analysed within 24 h in a fully automated manner, including sample extraction, derivatisation and gas chromatographic analysis. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Comprehensive automation of the solid phase extraction gas chromatographic mass spectrometric analysis (SPE-GC/MS) of opioids, cocaine, and metabolites from serum and other matrices.

    PubMed

    Lerch, Oliver; Temme, Oliver; Daldrup, Thomas

    2014-07-01

    The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps such as protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating the method parameters on the automated system; only marginal optimization of parameters was necessary. The automation, relying on an x-y-z robot after manual protein precipitation, includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices (urine, different tissues, and heart blood) for cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods, proving that the analytical results are equivalent even near the limits of quantification (low ng/mL range). To the best of our knowledge, this application is the first reported in the literature employing this sample preparation system.

  15. Automated Protein Biomarker Analysis: on-line extraction of clinical samples by Molecularly Imprinted Polymers

    NASA Astrophysics Data System (ADS)

    Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A. G.; Sellergren, Börje; Reubsaet, Léon

    2017-03-01

    Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented which provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination, using progastrin-releasing peptide (ProGRP), a highly sensitive biomarker for small cell lung cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization, and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled analysis of patient serum samples with elevated ProGRP levels. The automated extraction permitted particularly low sample volumes within a time-efficient method, demonstrating the potential of such a strategy in a clinical setting.

  16. Comparison of manual and automated nucleic acid extraction methods from clinical specimens for microbial diagnosis purposes.

    PubMed

    Wozniak, Aniela; Geoffroy, Enrique; Miranda, Carolina; Castillo, Claudia; Sanhueza, Francia; García, Patricia

    2016-11-01

    The choice of nucleic acid (NA) extraction method for molecular diagnosis in microbiology is of major importance because of the low microbial load and the varied nature of microorganisms and clinical specimens. The NA yield of different extraction methods has mostly been studied using spiked samples; information from real human clinical specimens is scarce. The purpose of this study was to compare the performance of a manual low-cost extraction method (Qiagen kit or salting-out extraction method) with the automated high-cost MagNA Pure Compact method. According to cycle threshold values for different pathogens, MagNA Pure is as efficient as Qiagen for NA extraction from noncomplex clinical specimens (nasopharyngeal swab, skin swab, plasma, respiratory specimens). In contrast, according to cycle threshold values for RNaseP, the MagNA Pure method may not be appropriate for NA extraction from blood. We believe that the MagNA Pure method's versatility, reduced risk of cross-contamination and reduced hands-on time compensate for its high cost. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Multicenter Comparative Evaluation of Five Commercial Methods for Toxoplasma DNA Extraction from Amniotic Fluid

    PubMed Central

    Yera, H.; Filisetti, D.; Bastien, P.; Ancelle, T.; Thulliez, P.; Delhaes, L.

    2009-01-01

    Over the past few years, a number of new nucleic acid extraction methods and extraction platforms using chemistry combined with magnetic or silica particles have been developed, together with instruments to facilitate the extraction procedure. The objective of the present study was to investigate the suitability of these automated methods for the isolation of Toxoplasma gondii DNA from amniotic fluid (AF). Three automated procedures were therefore compared to two commercial manual extraction methods. The MagNA Pure Compact (Roche), BioRobot EZ1 (Qiagen), and easyMAG (bioMérieux) automated procedures were compared to two manual DNA extraction kits, the QIAamp DNA minikit (Qiagen) and the High Pure PCR template preparation kit (Roche). Evaluation was carried out with two specific Toxoplasma PCRs (targeting the 529-bp repeat element), inhibitor search PCRs, and human beta-globin PCRs. The samples each consisted of 4 ml of AF with or without a calibrated Toxoplasma gondii RH strain suspension (0, 1, 2.5, 5, and 25 tachyzoites/ml). All PCR assays were laboratory-developed real-time PCR assays, using either TaqMan or fluorescence resonance energy transfer probes. A total of 1,178 PCRs were performed, including 978 Toxoplasma PCRs. The automated and manual methods were similar in sensitivity for DNA extraction from T. gondii at the highest concentration (25 Toxoplasma gondii cells/ml). However, our results showed that the DNA extraction procedures varied in their efficacy at isolating low concentrations of tachyzoites in AF samples (<5 Toxoplasma gondii cells/ml), a difference that might have repercussions, since low parasite concentrations in AF do occur and can lead to congenital toxoplasmosis. PMID:19846633

  18. A modified method for determining tannin-protein precipitation capacity using accelerated solvent extraction (ASE) and microplate gel filtration.

    PubMed

    McArt, Scott H; Spalinger, Donald E; Kennish, John M; Collins, William B

    2006-06-01

    The protein precipitation assay of Robbins et al. (1987, Ecology 68:98-107) has been shown to successfully predict the reduction in protein availability to some ruminants due to tannins. The procedure, however, is expensive and laborious, which limits its utility, especially for quantitative ecological or nutritional applications where large numbers of assays may be required. We have modified the method to decrease its cost and increase laboratory efficiency by (1) automating the extraction using accelerated solvent extraction (ASE) and (2) scaling and automating the precipitation reaction, chromatography, and spectrometry with microplate gel filtration and an automated UV-VIS microplate spectrometer. ASE extraction is shown to be as effective at extracting tannins as the hot methanol technique. Additionally, the microplate assay is sensitive and precise. We show that the results from the new technique correspond in a nearly 1:1 relationship to the results of the previous technique. Hence, this method can reliably replace the older method with no loss of relevance to herbivore protein digestion. Moreover, the ASE extraction technique should be applicable to other tannin-protein precipitation assays and possibly other phenolic assays.

  19. Automated sample preparation using membrane microtiter extraction for bioanalytical mass spectrometry.

    PubMed

    Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H

    1997-01-01

    The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision, a novel protocol performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than manual solvent extraction or single-cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.

  20. Automated DNA extraction from genetically modified maize using aminosilane-modified bacterial magnetic particles.

    PubMed

    Ota, Hiroyuki; Lim, Tae-Kyu; Tanaka, Tsuyoshi; Yoshino, Tomoko; Harada, Manabu; Matsunaga, Tadashi

    2006-09-18

    A novel, automated system, PNE-1080, equipped with eight automated pestle units and a spectrophotometer, was developed for genomic DNA extraction from maize using aminosilane-modified bacterial magnetic particles (BMPs). The use of aminosilane-modified BMPs allowed highly accurate DNA recovery. The (A260 - A320):(A280 - A320) ratio of the extracted DNA was 1.9 ± 0.1. The DNA quality was sufficiently pure for PCR analysis. The PNE-1080 offered rapid assay completion (30 min) with high accuracy. Furthermore, the results of real-time PCR confirmed that our proposed method permitted the accurate determination of genetically modified DNA composition and correlated well with results obtained by conventional cetyltrimethylammonium bromide (CTAB)-based methods.
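
    The purity figure quoted above is a background-corrected absorbance ratio; the arithmetic is a one-liner (the example readings below are hypothetical).

```python
def dna_purity_ratio(a260, a280, a320):
    """Background-corrected absorbance ratio; ~1.8-2.0 suggests clean DNA.
    Example readings are hypothetical."""
    return (a260 - a320) / (a280 - a320)

print(round(dna_purity_ratio(a260=0.52, a280=0.28, a320=0.02), 2))  # -> 1.92
```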

  21. Automated anatomical labeling of bronchial branches extracted from CT datasets based on machine learning and combination optimization and its application to bronchoscope guidance.

    PubMed

    Mori, Kensaku; Ota, Shunsuke; Deguchi, Daisuke; Kitasaka, Takayuki; Suenaga, Yasuhito; Iwano, Shingo; Hasegawa, Yoshinori; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi

    2009-01-01

    This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images, based on machine learning and combination optimization, and shows an application of the labeling in a bronchoscopy guidance system. The procedure consists of four steps: (a) extraction of tree structures of the bronchus regions from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches down to the sub-segmental branches. We also overlaid the anatomical names of bronchial branches on real bronchoscopic views for bronchoscopy guidance.
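
    Steps (b)-(d) pair per-branch classifiers with a combinatorial selection of names. The toy sketch below uses scikit-learn's AdaBoostClassifier and, as a simplified stand-in for the paper's combination optimization, a one-to-one assignment solved with scipy; all features and labels are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)

# Synthetic stand-ins: 6 geometric features per branch, 5 anatomical names
X_train = rng.standard_normal((300, 6))
y_train = rng.integers(0, 5, 300)
clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)   # step (b)

# Step (c): candidate-name probabilities for the 5 branches of a new case
X_case = rng.standard_normal((5, 6))
P = clf.predict_proba(X_case)

# Step (d), simplified: the paper optimizes name combinations under tree
# consistency; here a one-to-one assignment maximizing total log-probability
# stands in for that combination optimization.
branch, name = linear_sum_assignment(-np.log(P + 1e-9))
print(dict(zip(branch.tolist(), name.tolist())))
```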

  22. The current role of on-line extraction approaches in clinical and forensic toxicology.

    PubMed

    Mueller, Daniel M

    2014-08-01

    In today's clinical and forensic toxicology laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors, and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, methods using the following on-line extraction techniques have been published in the field of clinical and forensic toxicology: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.

  23. Evaluation of automated cell disruptor methods for oomycetous and ascomycetous model organisms

    USDA-ARS?s Scientific Manuscript database

    Two automated cell disruptor-based methods for RNA extraction, disruption of thawed cells submerged in TRIzol Reagent (method QP) and direct disruption of frozen cells on dry ice (method CP), were optimized for a model oomycete, Phytophthora capsici, and compared with grinding in a mortar and pestl...

  24. A METHOD FOR AUTOMATED ANALYSIS OF 10 ML WATER SAMPLES CONTAINING ACIDIC, BASIC, AND NEUTRAL SEMIVOLATILE COMPOUNDS LISTED IN USEPA METHOD 8270 BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GAS CHROMATOGRAPHY/MASS SPECTROMETRY

    EPA Science Inventory

    Data is presented showing the progress made towards the development of a new automated system combining solid phase extraction (SPE) with gas chromatography/mass spectrometry for the single run analysis of water samples containing a broad range of acid, base and neutral compounds...

  25. [Establishment of Automation System for Detection of Alcohol in Blood].

    PubMed

    Tian, L L; Shen, Lei; Xue, J F; Liu, M M; Liang, L J

    2017-02-01

    To establish an automated system for the detection of alcohol content in blood. The determination was performed with an automated extraction workstation and headspace gas chromatography (HS-GC). Negative-pressure blood collection, the sealing time of the headspace vial, and the sample needle were checked and optimized during establishment of the automated system. Automatic sampling was compared with manual sampling. The quantitative data obtained by the automated extraction HS-GC workstation for alcohol were stable, and the relative differences between two parallel samples were less than 5%. The automated extraction was superior to the manual extraction. A good linear relationship was obtained over the alcohol concentration range of 0.1-3.0 mg/mL (r ≥ 0.999), with good repeatability. The method is simple and quick, with a more standardized procedure and more accurate data. It eliminates operator error and has good repeatability, and it can be applied to the qualitative and quantitative detection of alcohol in blood. Copyright© by the Editorial Department of Journal of Forensic Medicine
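
    The acceptance checks described above, calibration linearity and parallel-sample agreement, come down to a few lines of arithmetic; a sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical HS-GC calibration over the validated 0.1-3.0 mg/mL range
conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 3.0])        # standards, mg/mL
response = np.array([0.041, 0.205, 0.412, 0.615, 0.824, 1.233])

r = np.corrcoef(conc, response)[0, 1]
print(f"calibration r = {r:.4f}")           # acceptance criterion: r >= 0.999

# Parallel-sample agreement: relative difference must stay below 5%
a, b = 0.82, 0.80                           # duplicate results, mg/mL
print(f"relative difference = {abs(a - b) / ((a + b) / 2) * 100:.1f}%")
```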

  26. RNA isolation from mammalian cells using porous polymer monoliths: an approach for high-throughput automation.

    PubMed

    Chatterjee, Anirban; Mirer, Paul L; Zaldivar Santamaria, Elvira; Klapperich, Catherine; Sharon, Andre; Sauer-Budge, Alexis F

    2010-06-01

    The life science and healthcare communities have been redefining the importance of ribonucleic acid (RNA) through the study of small RNA molecules (in RNAi/siRNA technologies), microRNA (in cancer and stem cell research), and mRNA (gene expression analysis for biologic drug targets). Research in this field increasingly requires efficient and high-throughput isolation techniques for RNA. Currently, several commercial kits are available for isolating RNA from cells. Although the quality and quantity of RNA yielded by these kits are sufficiently good for many purposes, limitations exist in terms of extraction efficiency from small cell populations and the ability to automate the extraction process. Traditionally, automating a process decreases the cost and personnel time while simultaneously increasing the throughput and reproducibility. As the RNA field matures, new methods for automating its extraction, especially from low cell numbers and in high throughput, are needed to achieve these improvements. The technology presented in this article is a step toward this goal. The method is based on a solid-phase extraction technology using a porous polymer monolith (PPM). A novel cell lysis approach and a large binding surface throughout the PPM extraction column ensure a high yield from small starting samples, increasing sensitivity and reducing indirect costs in cell culture and sample storage. The method ensures a fast and simple procedure for RNA isolation from eukaryotic cells, with a high yield both in terms of quality and quantity. The technique is amenable to automation and streamlined workflow integration, with possible miniaturization of the sample handling process making it suitable for high-throughput applications.

  27. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    PubMed

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT datasets of 9 severely resorbed extraction sockets were analyzed with two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. The techniques were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test their accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). The automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and the automated method using ImageJ, respectively. The proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.

  28. Gas pressure assisted microliquid-liquid extraction coupled online to direct infusion mass spectrometry: a new automated screening platform for bioanalysis.

    PubMed

    Raterink, Robert-Jan; Witkam, Yoeri; Vreeken, Rob J; Ramautar, Rawi; Hankemeier, Thomas

    2014-10-21

    In the field of bioanalysis, there is an increasing demand for miniaturized, automated, robust sample pretreatment procedures that can easily be connected to direct-infusion mass spectrometry (DI-MS) to allow the high-throughput screening of drugs and/or their metabolites in complex body fluids like plasma. Liquid-liquid extraction (LLE) is a common sample pretreatment technique often used for complex aqueous samples in bioanalysis. Despite significant developments in automated and miniaturized LLE procedures, fully automated LLE techniques allowing high-throughput bioanalytical studies on small-volume samples with direct-infusion mass spectrometry have not yet matured. Here, we introduce a new fully automated micro-LLE technique based on gas-pressure assisted mixing followed by passive phase separation, coupled online to nanoelectrospray DI-MS. Our method was characterized by varying the gas flow and its duration through the solvent mixture. For evaluation of the analytical performance, four drugs were spiked into human plasma, resulting in highly acceptable precision (RSD down to 9%) and linearity (R² ranging from 0.990 to 0.998). We demonstrate that our new method not only allows the reliable extraction of analytes from sample volumes of a few microliters in an automated and high-throughput manner, but also performs comparably to or better than conventional offline LLE, in which the handling of small volumes remains challenging. Finally, we demonstrate the applicability of our method to drug screening on dried blood spots, showing excellent linearity (R² of 0.998) and precision (RSD of 9%). In conclusion, we present the proof of principle of a new high-throughput screening platform for bioanalysis based on a new automated micro-LLE method coupled online to a commercially available nano-ESI-DI-MS.

  29. A simple rapid process for semi-automated brain extraction from magnetic resonance images of the whole mouse head.

    PubMed

    Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L

    2016-01-15

    Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early processing step, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install, requires minimal user interaction, and is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation of brain volumes is preserved. A downloadable software package, not otherwise available, for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.
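
    The Dice and Jaccard indices used for evaluation are simple overlap measures on binary masks; a minimal numpy sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) of two boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy 3D masks standing in for automated vs. reference brain extractions
auto_mask = np.zeros((64, 64, 64), dtype=bool)
auto_mask[10:50, 12:52, 8:48] = True
ref_mask = np.zeros_like(auto_mask)
ref_mask[12:52, 12:52, 10:50] = True
print(f"Dice = {dice(auto_mask, ref_mask):.3f}, "
      f"Jaccard = {jaccard(auto_mask, ref_mask):.3f}")
```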

  30. Development and validation of an automated liquid-liquid extraction GC/MS method for the determination of THC, 11-OH-THC, and free THC-carboxylic acid (THC-COOH) from blood serum.

    PubMed

    Purschke, Kirsten; Heinl, Sonja; Lerch, Oliver; Erdmann, Freidoon; Veit, Florian

    2016-06-01

    The analysis of Δ9-tetrahydrocannabinol (THC) and its metabolites 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THC-COOH) from blood serum is a routine task in forensic toxicology laboratories. For the examination of consumption habits, the concentration of the phase I metabolite THC-COOH is used. Recommendations for the interpretation of analysis values in medical-psychological assessments (regranting of driver's licenses, Germany) include threshold values for free, unconjugated THC-COOH. Using a fully automated two-step liquid-liquid extraction, THC, 11-OH-THC, and free, unconjugated THC-COOH were extracted from blood serum, silylated with N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA), and analyzed by GC/MS. The automation was carried out by an x-y-z sample robot equipped with modules for shaking, centrifugation, and solvent evaporation. This method was based on a previously developed manual sample preparation method. Validation guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh) were fulfilled for both methods; the focus of this article is the automated one. Limits of detection and quantification were 0.3 and 0.6 μg/L for THC, 0.1 and 0.8 μg/L for 11-OH-THC, and 0.3 and 1.1 μg/L for THC-COOH, when extracting only 0.5 mL of blood serum. Therefore, the required limit of quantification for THC of 1 μg/L in driving-under-the-influence-of-cannabis cases in Germany (and other countries) can be reached, and the method can be employed in that context. Real and external control samples were analyzed, and a round robin test was passed successfully. To date, the method is employed in daily routine at the Institute of Legal Medicine in Giessen, Germany. Automation helps avoid errors during sample preparation and reduces the workload of the laboratory personnel. Due to its flexibility, the analysis system can be employed for other liquid-liquid extractions as well. To the best of our knowledge, this is the first publication on a comprehensively automated classical liquid-liquid extraction workflow in the field of forensic toxicological analysis. Graphical abstract: GC/MS with MPS Dual Head at the Institute of Legal Medicine, Giessen, Germany. Modules from left to right: (quick)Mix (for LLE), wash station, tray 1 (vials for extracts), solvent reservoir, (m)VAP (for extract evaporation), Solvent Filling Station (solvent supply), cooled tray 2 (vials for serum samples), and centrifuge (for phase separation).

  31. First Steps to Automated Interior Reconstruction from Semantically Enriched Point Clouds and Imagery

    NASA Astrophysics Data System (ADS)

    Obrock, L. S.; Gülch, E.

    2018-05-01

    The automated generation of a BIM model from sensor data is a huge challenge for the modeling of existing buildings. Currently the measurements and analyses are time-consuming, allow little automation and require expensive equipment. An automated acquisition of the semantic information of objects in a building is still lacking. We present first results of our approach, based on imagery and derived products, aiming at a more automated modeling of interiors for a BIM building model. We examine the building parts and objects visible in the collected images using deep learning methods based on convolutional neural networks. For localization and classification of building parts we apply the FCN8s model for pixel-wise semantic segmentation, so far reaching a pixel accuracy of 77.2% and a mean intersection over union of 44.2%. We then use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We code the extracted object types as colours of the 3D points, and are thus able to uniquely classify the points in three-dimensional space. We also carry out a preliminary investigation of a simple extraction method for the colour and material of building parts. It is shown that the combined images are very well suited to extracting further semantic information for the BIM model. With the presented methods we see a sound basis for further automation of the acquisition and modeling of semantic and geometric information of interior rooms for a BIM model.
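
    Pixel accuracy and mean intersection over union, the two figures reported above, both fall out of a class confusion matrix; a short numpy sketch with synthetic label images:

```python
import numpy as np

def segmentation_scores(pred, truth, n_classes):
    """Pixel accuracy and mean IoU from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)
    pixel_acc = np.trace(cm) / cm.sum()
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    mean_iou = np.nanmean(inter / np.where(union == 0, np.nan, union))
    return pixel_acc, mean_iou

# Hypothetical 4-class label images (e.g. wall, floor, door, window)
rng = np.random.default_rng(2)
truth = rng.integers(0, 4, (128, 128))
pred = np.where(rng.random((128, 128)) < 0.8,      # 80% of pixels correct
                truth, rng.integers(0, 4, (128, 128)))
print(segmentation_scores(pred, truth, 4))
```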

  32. Automated In Vivo Platform for the Discovery of Functional Food Treatments of Hypercholesterolemia

    PubMed Central

    Littleton, Robert M.; Haworth, Kevin J.; Tang, Hong; Setchell, Kenneth D. R.; Nelson, Sandra; Hove, Jay R.

    2013-01-01

    The zebrafish is becoming an increasingly popular model system for both automated drug discovery and investigating hypercholesterolemia. Here we combine these aspects and, for the first time, develop an automated high-content confocal assay for treatments of hypercholesterolemia. We also create two algorithms for automated analysis of cardiodynamic data acquired by high-speed confocal microscopy. The first algorithm computes cardiac parameters solely from the frequency-domain representation of cardiodynamic data, while the second uses both frequency- and time-domain data. The combined approach resulted in smaller differences relative to manual measurements. The methods are implemented to test the ability of a methanolic extract of the hawthorn plant (Crataegus laevigata) to treat hypercholesterolemia and its peripheral cardiovascular effects. Results demonstrate the utility of these methods and suggest the extract has both antihypercholesterolemic and positively inotropic properties. PMID:23349685

  33. Determination of Low Concentrations of Acetochlor in Water by Automated Solid-Phase Extraction and Gas Chromatography with Mass-Selective Detection

    USGS Publications Warehouse

    Lindley, C.E.; Stewart, J.T.; Sandstrom, M.W.

    1996-01-01

    A sensitive and reliable gas chromatographic/mass spectrometric (GC/MS) method for determining acetochlor in environmental water samples was developed. The method involves automated extraction of the herbicide from a filtered 1 L water sample through a C18 solid-phase extraction column, elution from the column with hexane-isopropyl alcohol (3 + 1), and concentration of the extract with nitrogen gas. The herbicide is quantitated by capillary-column GC/MS with selected-ion monitoring of 3 characteristic ions. The single-operator method detection limit for reagent water samples is 0.0015 µg/L. Mean recoveries ranged from about 92 to 115% for 3 water matrixes fortified at 0.05 and 0.5 µg/L. Average single-operator precision, over the course of 1 week, was better than 5%.

  34. Automated solid-phase extraction and liquid chromatography for assay of cyclosporine in whole blood.

    PubMed

    Kabra, P M; Wall, J H; Dimson, P

    1987-12-01

    In this rapid, precise, accurate, cost-effective, automated liquid-chromatographic procedure for determining cyclosporine in whole blood, the cyclosporine is extracted from 0.5 mL of whole blood together with 300 micrograms of cyclosporin D per liter, added as internal standard, using an Advanced Automated Sample Processing unit. The on-line solid-phase extraction is performed on an octasilane sorbent cartridge, which is interfaced with a RP-8 guard column and an octyl analytical column packed with 5-µm packing material. Both columns are eluted with a mobile phase containing acetonitrile/methanol/water (53/20/27 by vol) at a flow rate of 1.5 mL/min and a column temperature of 70 °C. Absolute recovery of cyclosporine exceeded 85%, and the standard curve was linear to 5000 micrograms/L. Within-run and day-to-day CVs were less than 8%. Correlation between the automated and manual Bond-Elut extraction methods was excellent (r = 0.987). None of 18 drugs and four steroids tested interfered.

  35. [Development of an automated processing method to detect coronary motion for coronary magnetic resonance angiography].

    PubMed

    Asou, Hiroya; Imada, N; Sato, T

    2010-06-20

    In coronary MR angiography (CMRA), cardiac motion degrades image quality. To improve image quality, detection of cardiac motion, especially the motion of individual coronary arteries, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images per R-R interval). Chronological changes of the signal were evaluated with a Fourier transformation of each pixel of the images. Noise reduction with subtraction and extraction processes was performed. To extract strongly moving structures such as the coronary arteries, morphological filtering and labeling processes were added. Using these image processing steps, the individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times were calculated with our method. The calculated coronary static time was shorter than that of the ordinary manual method, and the scan time became about 10% longer than that of the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with simple processing.
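
    The central step, a chronological Fourier transform of each pixel to flag moving structures and then locate their quietest frames, can be sketched as follows (synthetic cine data; the threshold is illustrative, not the authors' value):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cine stack: 80 frames per R-R interval, 64x64 pixels, with a
# small patch given sinusoidal "coronary" motion on top of static noise
cine = rng.random((80, 64, 64))
cine[:, 30:34, 40:44] += np.sin(np.linspace(0, 2 * np.pi, 80))[:, None, None]

# Chronological Fourier transform of each pixel; energy outside the DC term
# flags moving pixels
spectrum = np.abs(np.fft.rfft(cine, axis=0))
motion_energy = spectrum[1:].sum(axis=0)
moving = motion_energy > np.percentile(motion_energy, 99)

# Static time: the frame interval where the flagged pixels change the least
frame_change = np.abs(np.diff(cine[:, moving], axis=0)).mean(axis=1)
print("most static frame interval:", int(frame_change.argmin()))
```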

  36. Semi-automated 96-well liquid-liquid extraction for quantitation of drugs in biological fluids.

    PubMed

    Zhang, N; Hoffman, K L; Li, W; Rossi, D T

    2000-02-01

    A semi-automated liquid-liquid extraction (LLE) technique for biological fluid sample preparation was introduced for the quantitation of four drugs in rat plasma. All liquid transferring during the sample preparation was automated using a Tomtec Quadra 96 Model 320 liquid handling robot, which processed up to 96 samples in parallel. The samples were in either 96-deep-well plate or tube-rack format. One plate of samples can be prepared in approximately 1.5 h, and the 96-well plate is directly compatible with the autosampler of an LC/MS system. The selection of organic solvents and the recoveries are discussed. In addition, the precision, relative error, linearity and quantitation of the semi-automated LLE method are estimated for four example drugs using LC/MS/MS with a multiple reaction monitoring (MRM) approach. The applicability of this method and future directions are evaluated.

  37. A new fast and fully automated software based algorithm for extracting respiratory signal from raw PET data and its comparison to other methods.

    PubMed

    Kesner, Adam Leon; Kuntner, Claudia

    2010-10-01

    Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. In the newly proposed method, the respiratory signal is acquired from PET data by rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior of the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, a respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method and compared to the signals derived from several other software-based methods as well as a signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was 0.58/0.68/0.07-0.86. The speed of this method was within the range of real time, while its accuracy surpassed the most accurate of the previously presented algorithms. PET data inherently contain information about patient motion, information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data in potentially real time and in a fully automated manner. This signal correlates well with the hardware-based signal for a large percentage of scans and avoids the efforts and complications associated with hardware. The proposed method can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
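
    The rebin-and-analyze idea, mining the time-activity behavior of coarse sinogram regions for a respiratory-band signal, can be sketched as below; the region grid, frame rate and band limits are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dynamic data: 600 one-second frames rebinned into an 8x8 grid
# of sinogram regions (sizes and frame rate are illustrative)
t = np.arange(600)
resp = np.sin(2 * np.pi * 0.25 * t)             # ~15 breaths/min ground truth
frames = rng.random((600, 8, 8)) + 0.2 * resp[:, None, None] * rng.random((8, 8))

# Time-activity curve of every region; keep regions whose spectral power
# concentrates in the respiratory band and average them into a 1D trace
tac = frames.reshape(600, -1)
tac = (tac - tac.mean(axis=0)) / tac.std(axis=0)
freqs = np.fft.rfftfreq(600, d=1.0)
band = (freqs > 0.1) & (freqs < 0.5)
band_power = np.abs(np.fft.rfft(tac, axis=0))[band].sum(axis=0)
trace = tac[:, band_power > np.percentile(band_power, 75)].mean(axis=1)

print(f"correlation with ground truth: {np.corrcoef(trace, resp)[0, 1]:.2f}")
```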

  38. A fully automated method for simultaneous determination of aflatoxins and ochratoxin A in dried fruits by pressurized liquid extraction and online solid-phase extraction cleanup coupled to ultra-high-pressure liquid chromatography-tandem mass spectrometry.

    PubMed

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Russo, Mariateresa; Valdés, Alberto; Ibáñez, Clara; Rastrelli, Luca

    2015-04-01

    In line with current demands and future perspectives in food safety, this study reports a fast and fully automated analytical method for the simultaneous analysis of the highly toxic and widespread mycotoxins aflatoxins (AFs) and ochratoxin A (OTA) in dried fruits, a high-risk foodstuff. The method is based on pressurized liquid extraction (PLE) of the slurried dried fruit with aqueous methanol (30%) at 110 °C, and online solid-phase extraction (online SPE) cleanup of the PLE extracts with a C18 cartridge. The purified sample was directly analysed by ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for sensitive and selective determination of AFs and OTA. The proposed analytical procedure was validated for different dried fruits (vine fruit, fig and apricot), providing method detection and quantification limits much lower than the AF and OTA maximum levels imposed by EU regulation for dried fruit intended for direct human consumption. Recoveries (83-103%) and repeatability (RSD < 8%, n = 3) also meet the performance criteria required by EU regulation for the determination of mycotoxin levels in foodstuffs. The main advantage of the proposed method is the full automation of the whole analytical procedure, which reduces the time and cost of the analysis, sample manipulation and solvent consumption, enabling high-throughput analysis and highly accurate and precise results.

  20. Automated PCR setup for forensic casework samples using the Normalization Wizard and PCR Setup robotic methods.

    PubMed

    Greenspoon, S A; Sykes, K L V; Ban, J D; Pollard, A; Baisden, M; Farr, M; Graham, N; Collins, B L; Green, M M; Christenson, C C

    2006-12-20

    Human genome, pharmaceutical and research laboratories have long enjoyed the application of robotics to performing repetitive laboratory tasks. However, the utilization of robotics in forensic laboratories for processing casework samples is relatively new and poses particular challenges. Since the quantity and quality (a mixture versus a single source sample, the level of degradation, the presence of PCR inhibitors) of the DNA contained within a casework sample is unknown, particular attention must be paid to procedural susceptibility to contamination, as well as DNA yield, especially as it pertains to samples with little biological material. The Virginia Department of Forensic Science (VDFS) has successfully automated forensic casework DNA extraction utilizing the DNA IQ™ System in conjunction with the Biomek 2000 Automation Workstation. Human DNA quantitation is also performed in a near complete automated fashion utilizing the AluQuant Human DNA Quantitation System and the Biomek 2000 Automation Workstation. Recently, the PCR setup for casework samples has been automated, employing the Biomek 2000 Automation Workstation and Normalization Wizard, Genetic Identity version, which utilizes the quantitation data, imported into the software, to create a customized automated method for DNA dilution, unique to that plate of DNA samples. The PCR Setup software method, used in conjunction with the Normalization Wizard method and written for the Biomek 2000, functions to mix the diluted DNA samples, transfer the PCR master mix, and transfer the diluted DNA samples to PCR amplification tubes. Once the process is complete, the DNA extracts, still on the deck of the robot in PCR amplification strip tubes, are transferred to pre-labeled 1.5 mL tubes for long-term storage using an automated method. The automation of these steps in the process of forensic DNA casework analysis has been accomplished by performing extensive optimization, validation and testing of the software methods.
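
    The normalization such software automates reduces to simple volume arithmetic. A minimal Python sketch follows; the target DNA amount and reaction volume are illustrative placeholders, not the Normalization Wizard's actual parameters.

        # Generic DNA-normalization arithmetic (illustrative values only).
        def normalization_volumes(conc_ng_per_ul, target_ng=1.0, final_ul=10.0):
            """Return (sample_ul, diluent_ul) to deliver target_ng in final_ul."""
            if conc_ng_per_ul <= 0:
                raise ValueError("non-positive concentration")
            sample_ul = min(target_ng / conc_ng_per_ul, final_ul)
            return sample_ul, final_ul - sample_ul

        # Example: a 0.5 ng/uL extract needs 2 uL sample + 8 uL diluent.
        print(normalization_volumes(0.5))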

  1. Automated methods of tree boundary extraction and foliage transparency estimation from digital imagery

    Treesearch

    Sang-Mook Lee; Neil A. Clark; Philip A. Araman

    2003-01-01

    Foliage transparency in trees is an important indicator for forest health assessment. This paper helps advance transparency measurement research by presenting methods of automatic tree boundary extraction and foliage transparency estimation from digital images taken from the ground of open-grown trees. Extraction of proper boundaries of tree crowns is the...

  2. Evaluation Of A Powder-Free DNA Extraction Method For Skeletal Remains.

    PubMed

    Harrel, Michelle; Mayes, Carrie; Gangitano, David; Hughes-Stamm, Sheree

    2018-02-07

    Bones are often recovered in forensic investigations, including missing persons and mass disasters. While traditional DNA extraction methods rely on grinding bone into powder prior to DNA purification, the TBone Ex buffer (DNA Chip Research Inc.) digests bone chips without powdering. In this study, six bones were extracted using the TBone Ex kit in conjunction with the PrepFiler® BTA™ DNA extraction kit (Thermo Fisher Scientific) both manually and via an automated platform. Comparable amounts of DNA were recovered from a 50 mg bone chip using the TBone Ex kit and 50 mg of powdered bone with the PrepFiler® BTA™ kit. However, automated DNA purification decreased DNA yield (p < 0.05). Nevertheless, short tandem repeat (STR) success was comparable across all methods tested. This study demonstrates that digestion of whole bone fragments is an efficient alternative to powdering bones for DNA extraction without compromising downstream STR profile quality. © 2018 American Academy of Forensic Sciences.

  3. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
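
    A minimal scikit-learn sketch of the train-on-a-subset strategy described above; the descriptor and trait arrays are random placeholders standing in for automatically extracted image descriptors and annotated root traits.

        # Train a Random Forest on an annotated subset, predict the full set.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        descriptors = rng.random((500, 40))        # automated image descriptors
        traits = rng.random(500)                   # e.g., total root length

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(descriptors[:100], traits[:100]) # annotated training subset
        predicted = model.predict(descriptors)     # applied to the entire dataset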

  4. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  5. Evaluation of four automated protocols for extraction of DNA from FTA cards.

    PubMed

    Stangegaard, Michael; Børsting, Claus; Ferrero-Miliani, Laura; Frank-Hansen, Rune; Poulsen, Lena; Hansen, Anders J; Morling, Niels

    2013-10-01

    Extraction of DNA using magnetic bead-based techniques on automated DNA extraction instruments provides a fast, reliable, and reproducible method for DNA extraction from various matrices. Here, we have compared the yield and quality of DNA extracted from FTA cards using four automated extraction protocols on three different instruments. The extraction processes were repeated up to six times with the same pieces of FTA cards. The sample material on the FTA cards was either blood or buccal cells. With the QIAamp DNA Investigator and QIAsymphony DNA Investigator kits, it was possible to extract DNA from the FTA cards in all six rounds of extractions in sufficient amount and quality to obtain complete short tandem repeat (STR) profiles on a QIAcube and a QIAsymphony SP. With the PrepFiler Express kit, almost all the extractable DNA was extracted in the first two rounds of extractions. Furthermore, we demonstrated that it was possible to successfully extract sufficient DNA for STR profiling from previously processed FTA card pieces that had been stored at 4 °C for up to 1 year. This showed that rare or precious FTA card samples may be saved for future analyses even though some DNA was already extracted from the FTA cards.

  6. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial

    PubMed Central

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2016-01-01

    Introduction Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Conclusions Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. PMID:23394741

  7. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    The modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities to gather 3D data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large scale objects is proposed. There are two new contributions in the approach: the automated extraction of the vital points (i.e. the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps: First the coordinates of the vital points are automatically extracted from the computer aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of the points, one after another. However, due to the deviations between the designed and the actual location of the points, the aiming needs to be adjusted. An automated dynamic image-based look-and-move type servoing architecture is proposed for this task. After a successful fine aiming, the actual coordinates of the point in question can be automatically measured by using the measuring functionalities of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to the results obtained with manual control, and the reliability of the image processing step of the method was found to be high in the laboratory experiments.

  8. Automated Mini-Column Solid-Phase Extraction Cleanup for High-Throughput Analysis of Chemical Contaminants in Foods by Low-Pressure Gas Chromatography-Tandem Mass Spectrometry.

    PubMed

    Lehotay, Steven J; Han, Lijun; Sapozhnikova, Yelena

    2016-01-01

    This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography-tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. Cleanup efficiencies and breakthrough volumes using different mini-SPE sorbents were compared using avocado, salmon, pork loin, and kale as representative matrices. Optimum extract load volume was 300 µL for the 45 mg mini-cartridges containing 20/12/12/1 (w/w/w/w) anh. MgSO4/PSA (primary secondary amine)/C18/CarbonX sorbents used in the final method. In method validation to demonstrate high-throughput capabilities and performance results, 230 spiked extracts of 10 different foods (apple, kiwi, carrot, kale, orange, black olive, wheat grain, dried basil, pork, and salmon) underwent automated mini-SPE cleanup and analysis over the course of 5 days. In all, 325 analyses for 54 pesticides and 43 environmental contaminants (3 analyzed together) were conducted using the 10 min LPGC-MS/MS method without changing the liner or retuning the instrument. Merely 1 mg equivalent of sample injected achieved <5 ng/g limits of quantification. With the use of internal standards, method validation results showed that 91 of the 94 analytes including pairs achieved satisfactory results (70-120% recovery and RSD ≤ 25%) in the 10 tested food matrices (n = 160). Matrix effects were typically less than ±20%, mainly due to the use of analyte protectants, and minimal human review of software data processing was needed due to summation function integration of analyte peaks. This study demonstrated that the automated mini-SPE + LPGC-MS/MS method yielded accurate results in rugged, high-throughput operations with minimal labor and data review.

  9. Classification of product inspection items using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, H.-W.

    1998-03-01

    Automated processing and classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. This approach involves two main steps: preprocessing and classification. Preprocessing locates individual items and segments ones that touch using a modified watershed algorithm. The second stage involves extraction of features that allow discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper. We use a new nonlinear feature extraction scheme called the maximum representation and discriminating feature (MRDF) extraction method to compute nonlinear features that are used as inputs to a classifier. The MRDF is shown to provide better classification and a better ROC (receiver operating characteristic) curve than other methods.
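
    The segmentation step for touching items can be illustrated with the standard marker-controlled watershed recipe below (scikit-image/SciPy); this is a common variant, not necessarily the authors' modified watershed algorithm.

        # Separate touching items in a binary mask via marker-controlled watershed.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def separate_touching(binary_mask, min_distance=10):
            """Split touching items in a binary mask into labelled regions."""
            distance = ndi.distance_transform_edt(binary_mask)
            peaks = peak_local_max(distance, min_distance=min_distance,
                                   labels=binary_mask.astype(int))
            markers = np.zeros(binary_mask.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=binary_mask)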

  10. Object-oriented classification of drumlins from digital elevation models

    NASA Astrophysics Data System (ADS)

    Saha, Kakoli

    Drumlins are common elements of glaciated landscapes which are easily identified by their distinct morphometric characteristics including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper seeks to utilize the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs by Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA, was chosen as a study area. As the study area is large (covering approximately 2500 km²), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping these landforms. The proposed method represents an attempt to solve the problem of mass extraction of drumlins by providing a generalized rule-set. To test its scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the bigger area as it was for the smaller test areas.
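
    A sketch of the automated morphometry step using scikit-image region properties, assuming the drumlins are already available as a labelled raster; the 30 m cell size follows the NED input, and everything else is illustrative.

        # Per-drumlin morphometrics from a binary raster (30 m cells).
        from skimage.measure import label, regionprops

        def drumlin_metrics(drumlin_mask, cell_size=30.0):
            for region in regionprops(label(drumlin_mask)):
                length = region.major_axis_length * cell_size
                width = region.minor_axis_length * cell_size
                yield {
                    "length_m": length,
                    "width_m": width,
                    "elongation": length / width if width else float("inf"),
                    "area_m2": region.area * cell_size ** 2,
                    "perimeter_m": region.perimeter * cell_size,
                }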

  11. Automated drumlin shape and volume estimation using high resolution LiDAR imagery (Curvature Based Relief Separation): A test from the Wadena Drumlin Field, Minnesota

    NASA Astrophysics Data System (ADS)

    Yu, Peter; Eyles, Nick; Sookhan, Shane

    2015-10-01

    Resolving the origin(s) of drumlins and related megaridges in areas of megascale glacial lineations (MSGL) left by paleo-ice sheets is critical to understanding how ancient ice sheets interacted with their sediment beds. MSGL is now linked with fast-flowing ice streams, but there is a broad range of erosional and depositional models. Further progress is reliant on constraining fluxes of subglacial sediment at the ice sheet base, which in turn is dependent on morphological data such as landform shape and elongation and, most importantly, landform volume. Past practice in determining shape has employed a broad range of geomorphological methods, from strictly visualisation techniques to more complex semi-automated and automated drumlin extraction methods. This paper reviews and builds on currently available visualisation, semi-automated and automated extraction methods and presents a new technique, Curvature Based Relief Separation (CBRS), for drumlin mapping. This uses curvature analysis to generate a base level from which topography can be normalized and drumlin volume can be derived. The methodology is tested using a high resolution (3 m) LiDAR elevation dataset from the Wadena Drumlin Field, Minnesota, USA, which was constructed by the Wadena Lobe of the Laurentide Ice Sheet ca. 20,000 years ago and which as a whole contains 2000 drumlins across an area of 7500 km². This analysis demonstrates that CBRS provides an objective and robust procedure for automated drumlin extraction. There is strong agreement with manually selected landforms, but the method is also capable of resolving features that were not detectable manually, thereby considerably expanding the known population of streamlined landforms. CBRS provides an effective automatic method for visualisation of large areas of the streamlined beds of former ice sheets and for modelling sediment fluxes below ice sheets.
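
    The volume idea, relief integrated above a separated base surface, can be sketched as follows; note that the median filter below is only a crude stand-in for the curvature-derived base level of CBRS, so this is a loose approximation under stated assumptions.

        # Landform volume as relief above an approximate base surface (sketch).
        import numpy as np
        from scipy import ndimage as ndi

        def landform_volume(dem, footprint_mask, cell_size=3.0):
            """Volume (m^3) of relief above a base surface, within a footprint."""
            base = ndi.median_filter(dem, size=51)      # crude base-level proxy
            relief = np.where(footprint_mask, np.clip(dem - base, 0, None), 0.0)
            return relief.sum() * cell_size ** 2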

  12. Composite Wavelet Filters for Enhanced Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both sonar and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.

  13. Automated extraction of DNA from blood and PCR setup using a Tecan Freedom EVO liquid handler for forensic genetic STR typing of reference samples.

    PubMed

    Stangegaard, Michael; Frøslev, Tobias G; Frank-Hansen, Rune; Hansen, Anders J; Morling, Niels

    2011-04-01

    We have implemented and validated automated protocols for DNA extraction and PCR setup using a Tecan Freedom EVO liquid handler mounted with the Te-MagS magnetic separation device (Tecan, Männedorf, Switzerland). The protocols were validated for accredited forensic genetic work according to ISO 17025 using the Qiagen MagAttract DNA Mini M48 kit (Qiagen GmbH, Hilden, Germany) from fresh whole blood and blood from deceased individuals. The workflow was simplified by returning the DNA extracts to the original tubes, minimizing the risk of misplacing samples. The tubes that originally contained the samples were washed with MilliQ water before the return of the DNA extracts. The PCR was set up in 96-well microtiter plates. The methods were validated for the kits: AmpFℓSTR Identifiler, SGM Plus and Yfiler (Applied Biosystems, Foster City, CA), GenePrint FFFL and PowerPlex Y (Promega, Madison, WI). The automated protocols allowed for extraction and addition of PCR master mix for 96 samples within 3.5 h. In conclusion, we demonstrated that (1) DNA extraction with magnetic beads and (2) PCR setup for accredited, forensic genetic short tandem repeat typing can be implemented on a simple automated liquid handler, leading to reduced manual work and increased quality and throughput. Copyright © 2011 Society for Laboratory Automation and Screening. Published by Elsevier Inc. All rights reserved.

  14. MARS: bringing the automation of small-molecule bioanalytical sample preparations to a new frontier.

    PubMed

    Li, Ming; Chou, Judy; Jing, Jing; Xu, Hui; Costa, Aldo; Caputo, Robin; Mikkilineni, Rajesh; Flannelly-King, Shane; Rohde, Ellen; Gan, Lawrence; Klunk, Lewis; Yang, Liyu

    2012-06-01

    In recent years, there has been a growing interest in automating small-molecule bioanalytical sample preparations specifically using the Hamilton MicroLab® STAR liquid-handling platform. In the most extensive work reported thus far, multiple small-molecule sample preparation assay types (protein precipitation extraction, SPE and liquid-liquid extraction) have been integrated into a suite that is composed of graphical user interfaces and Hamilton scripts. Using that suite, bioanalytical scientists have been able to automate various sample preparation methods to a great extent. However, there are still areas that could benefit from further automation, specifically, the full integration of analytical standard and QC sample preparation with study sample extraction in one continuous run, real-time 2D barcode scanning on the Hamilton deck and direct Laboratory Information Management System database connectivity. We developed a new small-molecule sample-preparation automation system that improves in all of the aforementioned areas. The improved system presented herein further streamlines the bioanalytical workflow, simplifies batch run design, reduces analyst intervention and eliminates sample-handling error.

  15. Hierarchical extraction of urban objects from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

    Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.

  16. Extraction of the number of peroxisomes in yeast cells by automated image analysis.

    PubMed

    Niemistö, Antti; Selinummi, Jyrki; Saleem, Ramsey; Shmulevich, Ilya; Aitchison, John; Yli-Harja, Olli

    2006-01-01

    An automated image analysis method for extracting the number of peroxisomes in yeast cells is presented. Two images of the cell population are required for the method: a bright field microscope image from which the yeast cells are detected and the respective fluorescent image from which the number of peroxisomes in each cell is found. The segmentation of the cells is based on clustering the local mean-variance space. The watershed transformation is thereafter employed to separate cells that are clustered together. The peroxisomes are detected by thresholding the fluorescent image. The method is tested with several images of a budding yeast Saccharomyces cerevisiae population, and the results are compared with manually obtained results.
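
    A sketch of the counting step, assuming the cells have already been segmented into a labelled image (the published mean-variance clustering and watershed steps are not reproduced here); the fluorescent channel is thresholded and distinct spots are counted per cell.

        # Count thresholded fluorescent spots per segmented cell (sketch).
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu

        def peroxisomes_per_cell(cell_labels, fluorescent):
            """cell_labels: labelled bright-field segmentation; fluorescent: same field."""
            spots = fluorescent > threshold_otsu(fluorescent)
            spot_labels, _ = ndi.label(spots)
            counts = {}
            for cell_id in np.unique(cell_labels):
                if cell_id == 0:                       # 0 = background
                    continue
                ids = np.unique(spot_labels[cell_labels == cell_id])
                counts[int(cell_id)] = int((ids > 0).sum())
            return counts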

  17. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial.

    PubMed

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-07-01

    Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p<0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p<0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to the existing automated solutions, the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
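
    A sketch of the classification stage: a range filter (local maximum minus local minimum, the feature highlighted above) computed over IC images and fed to an LDA classifier; the image arrays and labels are random placeholders.

        # Range-filter features from IC images, classified with LDA (sketch).
        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def range_filter_features(ic_image, size=3):
            rng_img = maximum_filter(ic_image, size) - minimum_filter(ic_image, size)
            return rng_img.ravel()

        rng = np.random.default_rng(1)
        images = rng.random((200, 16, 16))             # IC scalp-map images
        labels = rng.integers(0, 2, 200)               # artifact vs. EEG
        features = np.array([range_filter_features(im) for im in images])

        clf = LinearDiscriminantAnalysis().fit(features[:150], labels[:150])
        print("held-out accuracy:", clf.score(features[150:], labels[150:]))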

  19. A Hybrid DNA Extraction Method for the Qualitative and Quantitative Assessment of Bacterial Communities from Poultry Production Samples

    PubMed Central

    Rothrock, Michael J.; Hiett, Kelli L.; Gamble, John; Caudill, Andrew C.; Cicconi-Hogan, Kellie M.; Caporaso, J. Gregory

    2014-01-01

    The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences and many environmental samples within these disciplines can be physicochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method is a combination of the two major types of DNA extraction: mechanical and enzymatic. A two-step intense mechanical homogenization step (using bead-beating specifically formulated for environmental samples) was added to the beginning of the “gold standard” enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strict mechanical and enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples. PMID:25548939

  20. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
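
    A generic PCA-SVM pipeline of the kind described, sketched with scikit-learn; the texture-feature matrix and labels are placeholders for the calculus image data, and the component count and kernel are illustrative.

        # PCA for dimensionality reduction followed by an SVM classifier.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        texture_features = rng.random((120, 64))
        labels = rng.integers(0, 2, 120)            # calculus vs. non-calculus

        clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
        clf.fit(texture_features[:90], labels[:90])
        print("test accuracy:", clf.score(texture_features[90:], labels[90:]))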

  1. Testing of a Composite Wavelet Filter to Enhance Automated Target Recognition in SONAR

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.

    2011-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low resolution SONAR and camera videos taken from Unmanned Underwater Vehicles (UUVs). These SONAR images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both SONAR and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this report.

  2. Automated solid-phase extraction workstations combined with quantitative bioanalytical LC/MS.

    PubMed

    Huang, N H; Kagel, J R; Rossi, D T

    1999-03-01

    An automated solid-phase extraction workstation was used to develop, characterize and validate an LC/MS/MS method for quantifying a novel lipid-regulating drug in dog plasma. Method development was facilitated by workstation functions that allowed wash solvents of varying organic composition to be mixed and tested automatically. Precision estimates for this approach were within 9.8% relative standard deviation (RSD) across the calibration range. Accuracy for replicate determinations of quality controls was between -7.2 and +6.2% relative error (RE) over 5-1000 ng/ml. Recoveries were evaluated for a wide variety of wash solvents, elution solvents and sorbents. Optimized recoveries were generally > 95%. A sample throughput benchmark for the method was approximately 8 min per sample. Because of parallel sample processing, 100 samples were extracted in less than 120 min. The approach has proven useful with LC/MS/MS, using a multiple reaction monitoring (MRM) approach.

  3. Automated Solar Flare Detection and Feature Extraction in High-Resolution and Full-Disk Hα Images

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Liu, Yangyi; Rao, Changhui

    2018-05-01

    In this article, an automated solar flare detection method applied to both full-disk and local high-resolution Hα images is proposed. An adaptive gray threshold and an area threshold are used to segment the flare region. Features of each detected flare event are extracted, e.g. the start, peak, and end time, the importance class, and the brightness class. Experimental results have verified that the proposed method can obtain more stable and accurate segmentation results than previous works on full-disk images from Big Bear Solar Observatory (BBSO) and Kanzelhöhe Observatory for Solar and Environmental Research (KSO), and satisfactory segmentation results on high-resolution images from the Goode Solar Telescope (GST). Moreover, the extracted flare features correlate well with the data given by KSO. The method may enable more complex statistical analyses of Hα solar flares.

  4. Rapid System to Quantitatively Characterize the Airborne Microbial Community

    NASA Technical Reports Server (NTRS)

    Macnaughton, Sarah J.

    1998-01-01

    Bioaerosols have been linked to a wide range of different allergies and respiratory illnesses. Currently, microorganism culture is the most commonly used method for exposure assessment. Such culture techniques, however, generally fail to detect between 90-99% of the actual viable biomass. Consequently, an unbiased technique for detecting airborne microorganisms is essential. In this Phase II proposal, a portable air sampling device has been developed for the collection of airborne microbial biomass from indoor (and outdoor) environments. Methods were evaluated for extracting and identifying lipids that provide information on indoor air microbial biomass, and automation of these procedures was investigated. Also, techniques to automate the extraction of DNA were explored.

  5. Automated solid-phase extraction of herbicides from water for gas chromatographic-mass spectrometric analysis

    USGS Publications Warehouse

    Meyer, M.T.; Mills, M.S.; Thurman, E.M.

    1993-01-01

    An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing flow-rate decreases their retention. Standard curve r² values of 0.998-1.000 for each compound were consistently obtained and a quantitation level of 0.05 µg/l was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.

  6. Comparative evaluation of three automated systems for DNA extraction in conjunction with three commercially available real-time PCR assays for quantitation of plasma Cytomegalovirus DNAemia in allogeneic stem cell transplant recipients.

    PubMed

    Bravo, Dayana; Clari, María Ángeles; Costa, Elisa; Muñoz-Cobo, Beatriz; Solano, Carlos; José Remigia, María; Navarro, David

    2011-08-01

    Limited data are available on the performance of different automated extraction platforms and commercially available quantitative real-time PCR (QRT-PCR) methods for the quantitation of cytomegalovirus (CMV) DNA in plasma. We compared the performance characteristics of the Abbott mSample preparation system DNA kit on the m24 SP instrument (Abbott), the High Pure viral nucleic acid kit on the COBAS AmpliPrep system (Roche), and the EZ1 Virus 2.0 kit on the BioRobot EZ1 extraction platform (Qiagen) coupled with the Abbott CMV PCR kit, the LightCycler CMV Quant kit (Roche), and the Q-CMV complete kit (Nanogen), for both plasma specimens from allogeneic stem cell transplant (Allo-SCT) recipients (n = 42) and the OptiQuant CMV DNA panel (AcroMetrix). The EZ1 system displayed the highest extraction efficiency over a wide range of CMV plasma DNA loads, followed by the m24 and the AmpliPrep methods. The Nanogen PCR assay yielded higher mean CMV plasma DNA values than the Abbott and the Roche PCR assays, regardless of the platform used for DNA extraction. Overall, the effects of the extraction method and the QRT-PCR used on CMV plasma DNA load measurements were less pronounced for specimens with high CMV DNA content (>10,000 copies/ml). The performance characteristics of the extraction methods and QRT-PCR assays evaluated herein for clinical samples were extensible to cell-based standards from AcroMetrix. In conclusion, different automated systems are not equally efficient for CMV DNA extraction from plasma specimens, and the plasma CMV DNA loads measured by commercially available QRT-PCRs can differ significantly. The above findings should be taken into consideration for the establishment of cutoff values for the initiation or cessation of preemptive antiviral therapies and for the interpretation of data from clinical studies in the Allo-SCT setting.

  7. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  8. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions in images on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various different distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to have an influence on the resulting classification accuracies, we also investigate different interpolation methods and their impact on the classification performance. In order to be able to make solid statements about the benefit of distortion correction, we use various different feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly due to the fact that an eventual benefit of distortion correction highly depends on the feature extraction method used for the classification. PMID:23981585
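
    A sketch of barrel-distortion correction with a selectable interpolation order, using a single-coefficient radial model; the model, the coefficient, and the grayscale-only restriction are assumptions for illustration, not the correction methods compared in the paper.

        # Radial (barrel) distortion correction with configurable interpolation.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def undistort(image, k=-2e-7, order=1):
            """2D grayscale image; order=0 nearest, 1 bilinear, 3 bicubic."""
            h, w = image.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            yy, xx = np.mgrid[0:h, 0:w].astype(float)
            r2 = (yy - cy) ** 2 + (xx - cx) ** 2
            scale = 1.0 + k * r2                 # radial rescaling of sample points
            coords = [cy + (yy - cy) * scale, cx + (xx - cx) * scale]
            return map_coordinates(image, coords, order=order, mode="nearest")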

  9. Fast and accurate determination of arsenobetaine in fish tissues using accelerated solvent extraction and HPLC-ICP-MS determination.

    PubMed

    Wahlen, Raimund

    2004-04-01

    A high-performance liquid chromatography-inductively coupled plasma-mass spectrometry (HPLC-ICP-MS) method has been developed for the fast and accurate analysis of arsenobetaine (AsB) in fish samples extracted by accelerated solvent extraction. The combined extraction and analysis approach is validated using certified reference materials for AsB in fish and during a European intercomparison exercise with a blind sample. Up to six species of arsenic (As) can be separated and quantitated in the extracts within a 10-min isocratic elution. The method is optimized so as to minimize time-consuming sample preparation steps and allow for automated extraction and analysis of large sample batches. A comparison of standard addition and external calibration show no significant difference in the results obtained, which indicates that the LC-ICP-MS method is not influenced by severe matrix effects. The extraction procedure can process up to 24 samples in an automated manner, yet the robustness of the developed HPLC-ICP-MS approach is highlighted by the capability to run more than 50 injections per sequence, which equates to a total run-time of more than 12 h. The method can therefore be used to rapidly and accurately assess the proportion of nontoxic AsB in fish samples with high total As content during toxicological screening studies.

  10. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities.

    PubMed

    Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E

    2012-11-20

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources.

  11. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities

    PubMed Central

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  12. Multichannel Convolutional Neural Network for Biological Relation Extraction.

    PubMed

    Quan, Chanqin; Hua, Lei; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical focuses were restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of "vocabulary gap" and data sparseness, and their feature extraction is difficult to automate. To address the aforementioned issues, in this work, we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model has the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2%, compared to 67.0% for a standard linear SVM based system, on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score.
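
    A minimal PyTorch sketch of the multichannel idea: each input channel holds a different word-embedding version of the same sentence, convolutions of several widths are max-pooled over time, and the concatenated features feed a linear classifier. Dimensions and hyperparameters are illustrative, not the paper's configuration.

        # Multichannel CNN over stacked embedding versions (sketch).
        import torch
        import torch.nn as nn

        class MCCNN(nn.Module):
            def __init__(self, n_channels=5, emb_dim=100, n_filters=100,
                         widths=(3, 4, 5), n_classes=2):
                super().__init__()
                self.convs = nn.ModuleList(
                    nn.Conv2d(n_channels, n_filters, (w, emb_dim)) for w in widths)
                self.fc = nn.Linear(n_filters * len(widths), n_classes)

            def forward(self, x):                 # x: (batch, channels, seq, emb)
                pooled = [torch.relu(c(x)).squeeze(3).max(dim=2).values
                          for c in self.convs]    # max-over-time pooling
                return self.fc(torch.cat(pooled, dim=1))

        logits = MCCNN()(torch.randn(8, 5, 40, 100))  # 8 sentences, 5 embeddings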

  13. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364

  14. Automated data mining: an innovative and efficient web-based approach to maintaining resident case logs.

    PubMed

    Bhattacharya, Pratik; Van Stavern, Renee; Madhavan, Ramesh

    2010-12-01

    Use of resident case logs has been considered by the Residency Review Committee for Neurology of the Accreditation Council for Graduate Medical Education (ACGME). This study explores the effectiveness of a data-mining program for creating resident logs and compares the results to a manual data-entry system. Other potential applications of data mining for enhancing resident education are also explored. Patient notes dictated by residents were extracted from the Hospital Information System and analyzed using an unstructured mining program. History, examination, and ICD codes were obtained and compared to the existing manual log. Data were gathered for a 30-day period and compared to manual case logs. The automated method extracted all resident dictations with the dates of encounter and transcription. The automated data-miner processed information from all 19 residents, while only 4 residents logged manually. The manual method identified only broad categories of diseases; the major categories were stroke or vascular disorder 53 (27.6%), epilepsy 28 (14.7%), and pain syndromes 26 (13.5%). In the automated method, epilepsy 114 (21.1%), cerebral atherosclerosis 114 (21.1%), and headache 105 (19.4%) were the most frequent primary diagnoses, and headache 89 (16.5%), seizures 94 (17.4%), and low back pain 47 (9%) were the most common chief complaints. More detailed patient information such as tobacco use 227 (42%), alcohol use 205 (38%), and drug use 38 (7%) was extracted by the data-mining method. Manual case logs are time-consuming, provide limited information, and may be unpopular with residents. Data mining is a time-effective tool that may aid in the assessment of resident experience or the ACGME core competencies, or in resident clinical research. More study of this method in larger numbers of residency programs is needed.

  15. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  16. Literature mining of protein-residue associations with graph rules learned through distant supervision.

    PubMed

    Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin

    2012-10-05

    We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying the amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high-confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set, and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.
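
    The first stage, pattern-based residue mention detection, can be illustrated with a simple regular expression; the authors' actual linguistic patterns and the graph-based association step are considerably more elaborate.

        # Detect amino-acid residue mentions like "His57" or "Ser-195" (sketch).
        import re

        RESIDUE = re.compile(
            r"\b(Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|Phe|Pro|"
            r"Ser|Thr|Trp|Tyr|Val)[- ]?(\d+)\b", re.IGNORECASE)

        text = "Mutation of His57 and Ser-195 abolishes activity of chymotrypsin."
        print(RESIDUE.findall(text))   # [('His', '57'), ('Ser', '195')]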

  17. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources.

    PubMed

    Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2015-09-01

    Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied the method to develop algorithms identifying patients with rheumatoid arthritis (RA) and, among those, patients with coronary artery disease (CAD), using a large multi-institutional EHR. The areas under the receiver operating characteristic curve (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared with AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
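
    The classification step pairs NLP-derived concept counts with codified features and lets an L1 (lasso) penalty perform the feature selection during training. A minimal scikit-learn sketch on synthetic stand-in data; the feature matrix, toy phenotype rule, and regularization strength are assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical design matrix: columns are NLP concept counts plus
    # codified features (e.g., ICD code counts); rows are patients.
    X = rng.poisson(1.0, size=(500, 40)).astype(float)
    y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 500) > 3).astype(int)  # toy phenotype

    # The L1 penalty drives uninformative feature weights to zero,
    # selecting features as a by-product of model fitting.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X, y)

    selected = np.flatnonzero(model.coef_[0])
    print("features retained by the penalty:", selected)
    ```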

  18. A modular computational framework for automated peak extraction from ion mobility spectra

    PubMed Central

    2014-01-01

    Background: An ion mobility (IM) spectrometer coupled with a multi-capillary column (MCC) measures volatile organic compounds (VOCs) in the air or in exhaled breath. This technique is utilized in several biotechnological and medical applications. Each peak in an MCC/IM measurement represents a certain compound, which may be known or unknown. For clustering and classification of measurements, the raw data matrix must be reduced to a set of peaks. Each peak is described by its coordinates (retention time in the MCC and reduced inverse ion mobility) and shape (signal intensity, further shape parameters). This fundamental step is referred to as peak extraction. It is the basis for identifying discriminating peaks, and hence putative biomarkers, between two classes of measurements, such as a healthy control group and a group of patients with a confirmed disease. Current state-of-the-art peak extraction methods require human interaction, such as hand-picking approximate peak locations, assisted by a visualization of the data matrix. In a high-throughput context, however, it is preferable to have robust methods for fully automated peak extraction. Results: We introduce PEAX, a modular framework for automated peak extraction. The framework consists of several steps in a pipeline architecture. Each step performs a specific sub-task and can be instantiated by different methods implemented as modules. We provide open-source software for the framework and several modules for each step. Additionally, an interface that allows easy extension by a new module is provided. Combining the modules in all reasonable ways leads to a large number of peak extraction methods. We evaluate all combinations using intrinsic error measures and by comparing the resulting peak sets with an expert-picked one. Conclusions: Our software PEAX is able to automatically extract peaks from MCC/IM measurements within a few seconds. The automatically obtained results keep up with the results provided by current state-of-the-art peak extraction methods. This opens a high-throughput context for the MCC/IM application field. Our software is available at http://www.rahmannlab.de/research/ims. PMID:24450533

  19. A modular computational framework for automated peak extraction from ion mobility spectra.

    PubMed

    D'Addario, Marianna; Kopczynski, Dominik; Baumbach, Jörg Ingo; Rahmann, Sven

    2014-01-22

    An ion mobility (IM) spectrometer coupled with a multi-capillary column (MCC) measures volatile organic compounds (VOCs) in the air or in exhaled breath. This technique is utilized in several biotechnological and medical applications. Each peak in an MCC/IM measurement represents a certain compound, which may be known or unknown. For clustering and classification of measurements, the raw data matrix must be reduced to a set of peaks. Each peak is described by its coordinates (retention time in the MCC and reduced inverse ion mobility) and shape (signal intensity, further shape parameters). This fundamental step is referred to as peak extraction. It is the basis for identifying discriminating peaks, and hence putative biomarkers, between two classes of measurements, such as a healthy control group and a group of patients with a confirmed disease. Current state-of-the-art peak extraction methods require human interaction, such as hand-picking approximate peak locations, assisted by a visualization of the data matrix. In a high-throughput context, however, it is preferable to have robust methods for fully automated peak extraction. We introduce PEAX, a modular framework for automated peak extraction. The framework consists of several steps in a pipeline architecture. Each step performs a specific sub-task and can be instantiated by different methods implemented as modules. We provide open-source software for the framework and several modules for each step. Additionally, an interface that allows easy extension by a new module is provided. Combining the modules in all reasonable ways leads to a large number of peak extraction methods. We evaluate all combinations using intrinsic error measures and by comparing the resulting peak sets with an expert-picked one. Our software PEAX is able to automatically extract peaks from MCC/IM measurements within a few seconds. The automatically obtained results keep up with the results provided by current state-of-the-art peak extraction methods. This opens a high-throughput context for the MCC/IM application field. Our software is available at http://www.rahmannlab.de/research/ims.
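
    The pipeline-of-modules design can be pictured with a small sketch: each step has named, interchangeable implementations, and a configuration selects one module per step, so enumerating configurations enumerates candidate peak-extraction methods. The step names, module implementations, and toy signal below are hypothetical, not PEAX's actual interface.

    ```python
    # Minimal sketch of a pipeline whose steps are instantiated by
    # interchangeable modules, in the spirit of (but not identical to) PEAX.
    def smooth_none(signal):
        return signal

    def smooth_moving_average(signal, width=3):
        half = width // 2
        return [sum(signal[max(0, i - half):i + half + 1]) /
                len(signal[max(0, i - half):i + half + 1])
                for i in range(len(signal))]

    def detect_local_maxima(signal):
        return [i for i in range(1, len(signal) - 1)
                if signal[i - 1] < signal[i] > signal[i + 1]]

    STEPS = {
        "preprocess": {"none": smooth_none, "moving_average": smooth_moving_average},
        "candidates": {"local_maxima": detect_local_maxima},
    }

    def run_pipeline(signal, config):
        signal = STEPS["preprocess"][config["preprocess"]](signal)
        return STEPS["candidates"][config["candidates"]](signal)

    signal = [0, 2, 1, 0, 3, 0]
    print(run_pipeline(signal, {"preprocess": "none", "candidates": "local_maxima"}))
    ```

    Evaluating every module combination, as the paper does, then amounts to iterating over the Cartesian product of the module names for each step.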

  20. How automated image analysis techniques help scientists in species identification and classification?

    PubMed

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting identifying features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We review and compare different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data, and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques used in building such systems for biodiversity studies.

  1. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
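
    Parameter adaptation of this kind reduces to a small black-box search: score each candidate parameter set with the chosen cost function and keep the best. A minimal grid-search sketch follows; the parameter names, grids, and toy score are assumptions (BSE's real cost is computed from the resulting brain mask).

    ```python
    import itertools

    def grid_search(score_fn, grids):
        """Exhaustively score every parameter combination and keep the best."""
        names = list(grids)
        best_params, best_score = None, float("-inf")
        for values in itertools.product(*(grids[n] for n in names)):
            params = dict(zip(names, values))
            score = score_fn(params)
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score

    # Toy stand-in for the brain-extraction quality score (hypothetical);
    # in BSE the score would come from running the extraction itself.
    score = lambda p: -(p["diffusion_iter"] - 3) ** 2 - (p["edge_sigma"] - 0.6) ** 2

    grids = {"diffusion_iter": [1, 2, 3, 4], "edge_sigma": [0.4, 0.6, 0.8]}
    print(grid_search(score, grids))
    ```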

  2. Spatial resolution requirements for automated cartographic road extraction

    USGS Publications Warehouse

    Benjamin, S.; Gaydos, L.

    1990-01-01

    Ground resolution requirements for detection and extraction of road locations in a digitized large-scale photographic database were investigated. A color infrared photograph of Sunnyvale, California was scanned, registered to a map grid, and spatially degraded to 1- to 5-metre resolution pixels. Road locations in each data set were extracted using a combination of image processing and CAD programs. These locations were compared to a photointerpretation of road locations to determine a preferred pixel size for the extraction method. Based on road pixel omission error computations, a 3-metre pixel resolution appears to be the best choice for this extraction method.

  3. Automated prostate cancer localization without the need for peripheral zone extraction using multiparametric MRI.

    PubMed

    Liu, Xin; Yetik, Imam Samil

    2011-06-01

    Multiparametric magnetic resonance imaging (MRI) has been shown to have higher localization accuracy than transrectal ultrasound (TRUS) for prostate cancer. Automated cancer segmentation using multiparametric MRI is therefore receiving growing interest, since MRI can provide both morphological and functional images for the tissue of interest. However, all automated methods to date are applicable to a single zone of the prostate, and the peripheral zone (PZ) of the prostate needs to be extracted manually, which is a tedious and time-consuming job. In this paper, our goal is to remove the need for PZ extraction by incorporating the spatial and geometric information of prostate tumors with multiparametric MRI derived from T2-weighted MRI, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced MRI (DCE-MRI). To remove the need for PZ extraction, we propose a new method that incorporates the spatial information of the cancer. This is done by introducing a new feature called the location map. This feature is constructed by applying a nonlinear transformation to the spatial position coordinates of each pixel, so that the location map implicitly represents the geometric position of each pixel with respect to the prostate region. The new feature is then combined with multiparametric MR images to perform tumor localization. The proposed algorithm was applied to multiparametric prostate MRI data obtained from 20 patients with biopsy-confirmed prostate cancer. The proposed method, which does not need PZ masks, was found to have a prostate cancer detection specificity of 0.84, a sensitivity of 0.80, and a Dice coefficient of 0.42. We found that fusing the spatial information allows tumor outlines to be obtained without PZ extraction with considerable success (performance better than or similar to methods that require manual PZ extraction). Our experimental results quantitatively demonstrate the effectiveness of the proposed method, showing slightly better or similar localization performance compared to methods that require PZ masks.
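
    The location-map idea can be illustrated in a few lines: map each pixel's coordinates to a value that encodes its position relative to the prostate region. The Gaussian-of-centroid-distance transform below is a hypothetical choice for illustration, not the authors' exact nonlinear transformation.

    ```python
    import numpy as np

    def location_map(mask):
        """Toy location-map feature: a nonlinear function of each pixel's
        position relative to the prostate region (here, a Gaussian of the
        squared distance to the region centroid)."""
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        scale = d2[mask > 0].max()          # normalize by the region's extent
        return np.exp(-d2 / (scale + 1e-9))

    prostate = np.zeros((64, 64), dtype=np.uint8)
    prostate[20:44, 16:48] = 1              # toy prostate-like region
    feature = location_map(prostate)
    print(feature.shape, float(feature.max()))
    ```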

  4. Evaluation of mericon E. coli O157 Screen Plus and mericon E. coli STEC O-Type Pathogen Detection Assays in Select Foods: Collaborative Study, First Action 2017.05.

    PubMed

    Bird, Patrick; Benzinger, M Joseph; Bastin, Benjamin; Crowley, Erin; Agin, James; Goins, David; Armstrong, Marcia

    2018-05-01

    QIAGEN mericon Escherichia coli O157 Screen Plus and mericon E. coli Shiga toxin-producing E. coli (STEC) O-Type Pathogen Detection Assays use Real-Time PCR technology for the rapid, accurate detection of E. coli O157 and the "big six" non-O157 STEC serogroups (O26, O45, O103, O111, O121, O145) in select food types. Using a paired study design, the assays were compared with the U.S. Department of Agriculture, Food Safety Inspection Service Microbiology Laboratory Guidebook Chapter 5.09 reference method for the detection of E. coli O157:H7 in raw ground beef. Both mericon assays were evaluated using manual and automated DNA extraction methods. Thirteen technicians from five laboratories located within the continental United States participated in the collaborative study. Three levels of contamination were evaluated. Statistical analysis was conducted according to the probability of detection (POD) statistical model. Results obtained for the low-inoculum test portions produced a difference in cross-laboratory POD (dLPOD) value with a 95% confidence interval of 0.00 (-0.12, 0.12) for the mericon E. coli O157 Screen Plus with manual and automated extraction and for the mericon E. coli STEC O-Type with manual extraction, and -0.01 (-0.13, 0.10) for the mericon E. coli STEC O-Type with automated extraction. The dLPOD results indicate equivalence between the candidate methods and the reference method.

  5. Pediatric Brain Extraction Using Learning-based Meta-algorithm

    PubMed Central

    Shi, Feng; Wang, Li; Dai, Yakang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2012-01-01

    Magnetic resonance imaging of the pediatric brain provides valuable information for early brain development studies. Automated brain extraction is challenging due to the small brain size and the dynamic change of tissue contrast in developing brains. In this paper, we propose a novel Learning Algorithm for Brain Extraction and Labeling (LABEL), designed specifically for pediatric MR brain images. The idea is to perform multiple complementary brain extractions on a given testing image by using a meta-algorithm, including BET and BSE, where the parameters of each run of the meta-algorithm are effectively learned from the training data. Also, representative subjects are selected as exemplars and used to guide brain extraction of new subjects in different age groups. We further develop a level-set based fusion method to combine multiple brain extractions together with a closed smooth surface for obtaining the final extraction. The proposed method has been extensively evaluated on subjects of three representative age groups: neonates (less than 2 months), infants (1–2 years), and children (5–18 years). Experimental results show that, with 45 subjects for training (15 neonates, 15 infants, and 15 children), the proposed method can produce more accurate brain extraction results on 246 testing subjects (75 neonates, 126 infants, and 45 children), i.e., an average Jaccard Index of 0.953, compared to those by BET (0.918), BSE (0.902), ROBEX (0.901), GCUT (0.856), and other fusion methods such as Majority Voting (0.919) and STAPLE (0.941). Along with largely improved computational efficiency, the proposed method demonstrates its ability to perform automated brain extraction for pediatric MR images over a large age range. PMID:22634859
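
    LABEL's final fusion is level-set based, but the simplest fusion baseline it is compared against, majority voting, is easy to sketch: a voxel is kept when more than half of the candidate extractions include it. A minimal version, with toy 2-D masks standing in for 3-D brain masks:

    ```python
    import numpy as np

    def majority_vote(masks):
        """Fuse binary masks by majority voting: keep a voxel when more
        than half of the input extractions include it."""
        stack = np.stack(masks).astype(np.uint8)
        return (stack.sum(axis=0) > len(masks) / 2).astype(np.uint8)

    a = np.array([[1, 1, 0], [0, 1, 0]])
    b = np.array([[1, 0, 0], [0, 1, 1]])
    c = np.array([[1, 1, 0], [0, 0, 1]])
    print(majority_vote([a, b, c]))
    ```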

  6. Semi-automated extraction of longitudinal subglacial bedforms from digital terrain models - Two new methods

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2017-07-01

    Relict drumlin and mega-scale glacial lineation (positive-relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method, and its application to a hydrological relief model derived from a multiple-direction flow-routing algorithm performed best. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.

  7. Arduino-based automation of a DNA extraction system.

    PubMed

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies on detecting infectious diseases with molecular genetic methods. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic beads, which is part of a portable molecular genetic test system. The DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing the gene specimen, magnetic beads, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G-code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delays, and input-output manipulation. It drives the stepper motors with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard-limit checking to prevent any over-travel. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile.
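
    On the host side, executing a compiled sequence reduces to streaming commands to the controller and waiting for an acknowledgement before each next step. A minimal sketch using the pyserial package; the port name, baud rate, G-code commands, and "ok" handshake are assumptions for illustration, not the authors' actual protocol.

    ```python
    import serial  # pyserial

    # Hypothetical compiled sequence (commands are illustrative only).
    SEQUENCE = [
        "G28",            # home all axes to establish the reference position
        "G1 X40 F300",    # move an actuator to the lysis chamber
        "G4 P5000",       # dwell 5 s while the sample mixes
        "G1 X80 F300",    # move on to the wash chamber
    ]

    with serial.Serial("/dev/ttyACM0", 115200, timeout=5) as ser:
        for cmd in SEQUENCE:
            ser.write((cmd + "\n").encode())          # send one command
            reply = ser.readline().decode().strip()   # wait for acknowledgement
            assert reply == "ok", f"controller rejected {cmd!r}: {reply}"
    ```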

  8. Automated anatomical labeling of bronchial branches using multiple classifiers and its application to bronchoscopy guidance based on fusion of virtual and real bronchoscopy

    NASA Astrophysics Data System (ADS)

    Ota, Shunsuke; Deguchi, Daisuke; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Hasegawa, Yoshinori; Imaizumi, Kazuyoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi

    2008-03-01

    This paper presents a method for automated anatomical labeling of bronchial branches (ALBB) extracted from 3D CT datasets. The proposed method constructs classifiers that output the anatomical names of bronchial branches by employing a machine-learning approach. We also present its application to a bronchoscopy guidance system. Since the bronchus has a complex tree structure, bronchoscopists can easily become disoriented and lose their way to a target location. A bronchoscopy guidance system is therefore strongly desired to assist bronchoscopists, and in such a system the automated presentation of anatomical names is quite useful. Although several methods for automated ALBB have been reported, most of them constructed models taking only variations of branching patterns into account and did not consider variations of running directions. Since the running directions of bronchial branches differ greatly among individuals, these methods could not perform ALBB accurately when the running directions of bronchial branches differed from those of the models. Our method tries to solve such problems by utilizing a machine-learning approach. The actual procedure consists of three steps: (a) extraction of bronchial tree structures from 3D CT datasets, (b) construction of classifiers using the multi-class AdaBoost technique, and (c) automated classification of bronchial branches using the constructed classifiers. We applied the proposed method to 51 cases of 3D CT datasets. The constructed classifiers were evaluated by a leave-one-out scheme. The experimental results showed that the proposed method could assign correct anatomical names to 89.1% of bronchial branches up to the segmental level. We also confirmed that presenting the anatomical names of bronchial branches on real bronchoscopic views was quite useful for assisting bronchoscopy.
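
    Step (b) can be sketched with scikit-learn's multi-class AdaBoost: each branch becomes a feature vector (diameter, curvature, direction components, and so on) and the class is its anatomical name. The synthetic features, class rule, and dimensions below are assumptions, not the paper's data.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    # Hypothetical per-branch feature vectors:
    # [diameter, curvature, direction_x, direction_y, direction_z]
    X = rng.normal(size=(300, 5))
    # Toy anatomical-name classes derived from two features, for illustration.
    y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 2] > 0)

    clf = AdaBoostClassifier(n_estimators=100)   # boosted decision stumps
    clf.fit(X, y)
    print("predicted branch labels:", clf.predict(X[:5]))
    ```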

  9. Validation of an automated solid-phase extraction method for the analysis of 23 opioids, cocaine, and metabolites in urine with ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ramírez Fernández, María del Mar; Van Durme, Filip; Wille, Sarah M R; di Fazio, Vincent; Kummer, Natalie; Samyn, Nele

    2014-06-01

    The aim of this work was to automate a sample preparation procedure extracting morphine, hydromorphone, oxymorphone, norcodeine, codeine, dihydrocodeine, oxycodone, 6-monoacetyl-morphine, hydrocodone, ethylmorphine, benzoylecgonine, cocaine, cocaethylene, tramadol, meperidine, pentazocine, fentanyl, norfentanyl, buprenorphine, norbuprenorphine, propoxyphene, methadone and 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine from urine samples. Samples were extracted by solid-phase extraction (SPE) with cation exchange cartridges using a TECAN Freedom Evo 100 base robotic system, including a hydrolysis step prior to extraction when required. Block modules were carefully selected in order to use the same consumable material as in manual procedures, reducing cost and/or manual sample transfers. Moreover, the present configuration included pressure-monitored pipetting, which increases pipetting accuracy and detects sampling errors. The compounds were then separated in a chromatographic run of 9 min using a BEH Phenyl analytical column on an ultra-performance liquid chromatography-tandem mass spectrometry system. Optimization of the SPE was performed with different wash conditions and elution solvents. Intra- and inter-day relative standard deviations (RSDs) were within ±15% and bias was within ±15% for most of the compounds. Recovery was >69% (RSD < 11%) and matrix effects ranged from 1 to 26% when compensated with the internal standard. The limits of quantification ranged from 3 to 25 ng/mL depending on the compound. No cross-contamination in the automated SPE system was observed. The extracted samples were stable for 72 h in the autosampler (4°C). This method was applied to authentic samples (from forensic and toxicology cases) and to proficiency testing schemes containing cocaine, heroin, buprenorphine and methadone, offering fast and reliable results. Automation resulted in improved precision and accuracy and minimal operator intervention, leading to safer sample handling and less time-consuming procedures.

  10. ALE: automated label extraction from GEO metadata.

    PubMed

    Giles, Cory B; Brown, Chase A; Ripperger, Michael; Dennis, Zane; Roopnarinesingh, Xiavan; Porter, Hunter; Perz, Aleksandra; Wren, Jonathan D

    2017-12-28

    NCBI's Gene Expression Omnibus (GEO) is a rich community resource containing millions of gene expression experiments from human, mouse, rat, and other model organisms. However, information about each experiment (metadata) is in the format of an open-ended, non-standardized textual description provided by the depositor. Thus, classification of experiments for meta-analysis by factors such as gender, age of the sample donor, and tissue of origin is not feasible without assigning labels to the experiments. Automated approaches are preferable for this, primarily because of the size and volume of the data to be processed, but also because they ensure standardization and consistency. While some of these labels can be extracted directly from the textual metadata, much of the available data does not contain explicit text informing the researcher about the age and gender of the subjects in the study. To bridge this gap, machine-learning methods can be trained to use the gene expression patterns associated with the text-derived labels to refine label-prediction confidence. Our analysis shows that only 26% of metadata text contains information about gender and 21% about age. To ameliorate the lack of available labels for these data sets, we first extract labels from the textual metadata for each GEO RNA dataset and evaluate the performance against a gold standard of manually curated labels. We then use machine-learning methods to predict labels based upon the gene expression of the samples, and compare this to the text-based method. Here we present an automated method to extract labels for age, gender, and tissue from textual metadata and GEO data using both a heuristic approach and machine learning. We show that the two methods together improve the accuracy of label assignment to GEO samples.
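
    The heuristic half of such a system is essentially pattern matching over the free-text metadata. A minimal sketch; the regular expressions and sample text are illustrative assumptions, not ALE's actual rules.

    ```python
    import re

    AGE_RE = re.compile(r"age[:=\s]+(\d{1,3})\s*(?:years?|yrs?|y)?", re.I)
    GENDER_RE = re.compile(r"\b(male|female|man|woman)\b", re.I)

    def extract_labels(metadata_text):
        """Heuristic pass: pull age and gender labels out of free-text GEO
        metadata when they are stated explicitly."""
        age = AGE_RE.search(metadata_text)
        gender = GENDER_RE.search(metadata_text)
        return {
            "age": int(age.group(1)) if age else None,
            "gender": gender.group(1).lower() if gender else None,
        }

    print(extract_labels("Liver biopsy from a female donor, age: 63 years"))
    ```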

  11. Microtiter miniature shaken bioreactor system as a scale-down model for process development of production of therapeutic alpha-interferon2b by recombinant Escherichia coli.

    PubMed

    Tan, Joo Shun; Abbasiliasi, Sahar; Kadkhodaei, Saeid; Tam, Yew Joon; Tang, Teck-Kim; Lee, Yee-Ying; Ariff, Arbakariya B

    2018-01-04

    Demand for high-throughput bioprocessing has increased dramatically, especially in the biopharmaceutical industry, because such technologies are of vital importance to process optimization and media development. This can be efficiently boosted by using a microtiter plate (MTP) cultivation setup embedded in an automated liquid-handling system. The objective of this study was to establish an automated microscale method for upstream and downstream bioprocessing of α-IFN2b production by recombinant Escherichia coli. The extraction performance of α-IFN2b by osmotic shock was compared between two systems: an automated microscale platform and manual extraction in MTP. The amount of α-IFN2b extracted using the automated microscale platform (49.2 μg/L) was comparable to that of the manual osmotic shock method (48.8 μg/L), but the standard deviation was two times lower than with the manual method. Profiling of fermentation parameters in MTP, involving inoculum size, agitation speed, working volume, and induction timing, revealed that the highest production of α-IFN2b (85.5 μg/L) was attained at an inoculum size of 8%, a working volume of 40%, and an agitation speed of 1000 rpm, with induction at 4 h after inoculation. Although the MTP-scale findings did not scale perfectly to shake-flask culture, microscale technique development serves as a convenient and low-cost solution for process optimization of recombinant protein production.

  12. Automated road network extraction from high spatial resolution multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Qiaoping

    For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
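
    The first stage, spectral clustering of pixels with k-means, looks roughly like this in scikit-learn. The synthetic 4-band image and cluster count are assumptions; in the full method, a fuzzy classifier would subsequently pick out the road cluster.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # Hypothetical 4-band multi-spectral image: rows x cols x bands.
    image = rng.random((120, 160, 4))

    # Cluster pixels in spectral space; each pixel becomes one sample
    # whose features are its band values.
    pixels = image.reshape(-1, image.shape[-1])
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)
    label_image = labels.reshape(image.shape[:2])
    print("pixels per cluster:", np.bincount(labels))
    ```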

  13. High-throughput analysis of sulfatides in cerebrospinal fluid using automated extraction and UPLC-MS/MS.

    PubMed

    Blomqvist, Maria; Borén, Jan; Zetterberg, Henrik; Blennow, Kaj; Månsson, Jan-Eric; Ståhlman, Marcus

    2017-07-01

    Sulfatides (STs) are a group of glycosphingolipids that are highly expressed in the brain. Due to their importance for normal brain function and their potential involvement in neurological diseases, accurate and sensitive methods for their determination are needed. Here we describe a high-throughput, quantitative method for the determination of STs in cerebrospinal fluid (CSF). The STs were extracted using a fully automated liquid/liquid extraction method and quantified using ultra-performance liquid chromatography coupled to tandem mass spectrometry. Owing to the high sensitivity of the developed method, 20 ST species could be quantified from only 100 μl of CSF. Validation showed that the STs were extracted with high recovery (90%) and could be determined with low inter- and intra-day variation. Our method was applied to a patient cohort of subjects with an Alzheimer's disease biomarker profile. Although total ST levels were unaltered compared with an age-matched control group, we show that the ratio of hydroxylated to nonhydroxylated STs was increased in the patient cohort. In conclusion, we believe that the fast, sensitive, and accurate method described in this study is a powerful new tool for the determination of STs in clinical as well as preclinical settings. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.

  14. Analysis of volatiles in fire debris by combination of activated charcoal strips (ACS) and automated thermal desorption-gas chromatography-mass spectrometry (ATD/GC-MS).

    PubMed

    Martin Fabritius, Marie; Broillet, Alain; König, Stefan; Weinmann, Wolfgang

    2018-06-04

    Adsorption of volatiles in the gaseous phase onto an activated charcoal strip (ACS) is one possibility for the extraction and concentration of ignitable liquid residues (ILRs) from fire debris in arson investigations. Besides liquid extraction using carbon disulfide or hexane, automated thermal desorption can be used to transfer adsorbed residues to direct analysis by gas chromatography-mass spectrometry (GC-MS). We present a fire debris analysis work-flow with headspace adsorption of volatiles onto ACS and subsequent automated thermal desorption (ATD) GC-MS analysis. Only a small portion of the ACS is inserted in the ATD tube for thermal desorption coupled to GC-MS, allowing for subsequent confirmation analysis with another portion of the same ACS. This approach is a promising alternative to the routinely used ACS method with solvent extraction of retained volatiles, and its application to fire debris analysis is demonstrated. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Mixed-mode isolation of triazine metabolites from soil and aquifer sediments using automated solid-phase extraction

    USGS Publications Warehouse

    Mills, M.S.; Thurman, E.M.

    1992-01-01

    Reversed-phase isolation and ion-exchange purification were combined in the automated solid-phase extraction of two polar s-triazine metabolites, 2-amino-4-chloro-6-(isopropylamino)-s-triazine (deethylatrazine) and 2-amino-4-chloro-6-(ethylamino)-s-triazine (deisopropylatrazine), from clay-loam and silt-loam soils and sandy aquifer sediments. First, methanol/water (4/1, v/v) soil extracts were transferred to an automated workstation following evaporation of the methanol phase for the rapid reversed-phase isolation of the metabolites on an octadecyl resin (C18). The retention of the triazine metabolites on C18 decreased substantially when trace methanol concentrations (1%) remained. Furthermore, the retention on C18 increased with decreasing aqueous solubility and increasing alkyl-chain length of the metabolites and parent herbicides, indicating a reversed-phase interaction. The analytes were eluted with ethyl acetate, which left much of the soil organic-matter impurities on the resin. Second, the small-volume organic eluate was purified on an anion-exchange resin (0.5 mL/min) to extract the remaining soil pigments that could foul the ion source of the GC/MS system. Recoveries of the analytes were 75%, using deuterated atrazine as a surrogate, and were comparable to recoveries by Soxhlet extraction. The detection limit was 0.1 μg/kg with a coefficient of variation of 15%. The ease and efficiency of this automated method make it a viable, practical technique for studying triazine metabolites in the environment.

  16. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest, and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest, and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest, and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means of defining important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology naturally varies alongshore.
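
    Relative relief at a given window scale is commonly computed as (z - z_min)/(z_max - z_min) within the moving window, and the multi-scale value is the average over several window sizes. A minimal scipy sketch under that assumption; the window sizes and random test grid are illustrative, not the paper's configuration.

    ```python
    import numpy as np
    from scipy import ndimage

    def mean_relative_relief(dem, window_sizes=(3, 5, 9)):
        """Average relative relief across several moving-window scales.

        At each scale w, RR = (z - local_min) / (local_max - local_min),
        computed in a w-by-w neighbourhood around every cell.
        """
        rr = np.zeros_like(dem, dtype=float)
        for w in window_sizes:
            zmin = ndimage.minimum_filter(dem, size=w)
            zmax = ndimage.maximum_filter(dem, size=w)
            rr += (dem - zmin) / np.maximum(zmax - zmin, 1e-9)
        return rr / len(window_sizes)

    dem = np.random.default_rng(3).random((50, 50))   # toy elevation grid
    print(mean_relative_relief(dem).shape)
    ```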

  17. Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)

    DTIC Science & Technology

    2005-02-01

    The report describes automated design tools for integrated mixed-signal microsystems, including Model Order Reduction (MOR) tools, system-level mixed-signal circuit synthesis and optimization tools, parasitic extraction tools, and fast time-domain mixed-signal circuit simulation (HAARSPICE algorithms). Index terms: mixed-signal circuit simulation, parasitic extraction, time-domain simulation, IC design flow, model order reduction; mission area: Command and Control.

  18. A multi-atlas based method for automated anatomical Macaca fascicularis brain MRI segmentation and PET kinetic extraction.

    PubMed

    Ballanger, Bénédicte; Tremblay, Léon; Sgambato-Faure, Véronique; Beaudoin-Gobert, Maude; Lavenne, Franck; Le Bars, Didier; Costes, Nicolas

    2013-08-15

    MRI templates and digital atlases are needed for automated and reproducible quantitative analysis of non-human primate PET studies. Segmenting brain images via multiple atlases outperforms single-atlas labelling in humans. We present a set of atlases manually delineated on brain MRI scans of the monkey Macaca fascicularis. We use this multi-atlas dataset to evaluate two automated methods in terms of accuracy, robustness and reliability in segmenting brain structures on MRI and extracting regional PET measures. Twelve individual Macaca fascicularis high-resolution 3DT1 MR images were acquired. Four individual atlases were created by manually drawing 42 anatomical structures, including cortical and sub-cortical structures, white matter regions, and ventricles. To create the MRI template, we first chose one MRI to define a reference space, and then performed a two-step iterative procedure: affine registration of individual MRIs to the reference MRI, followed by averaging of the twelve resampled MRIs. Automated segmentation in native space was obtained in two ways: 1) maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space, then transformed back into the individual native space (MAXPROB); 2) one to four individual atlases were registered directly to the individual native space and combined by decision fusion (PROPAG). Accuracy was evaluated by computing the Dice similarity index and the volume difference. The robustness and reproducibility of PET regional measurements obtained via automated segmentation were evaluated on four co-registered MRI/PET datasets, which included test-retest data. Dice indices were always over 0.7 and reached maximal values of 0.9 for PROPAG with all four individual atlases. There was no significant mean volume bias. The standard deviation of the bias decreased significantly when increasing the number of individual atlases. MAXPROB performed better when more atlases were used. When all four atlases were used for MAXPROB creation, the accuracy of morphometric segmentation approached that of the PROPAG method. PET measures extracted either via automatic methods or via manually defined regions were strongly correlated, with no significant regional differences between methods. Intra-class correlation coefficients for test-retest data were over 0.87. Compared to single-atlas extractions, multi-atlas methods improve the accuracy of region definition. They also perform comparably to manually defined regions for PET quantification. Multiple atlases of Macaca fascicularis brains are now available and allow reproducible and simplified analyses. Copyright © 2013 Elsevier Inc. All rights reserved.
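
    The Dice similarity index used for the accuracy evaluation compares two binary masks as 2|A ∩ B| / (|A| + |B|). A minimal numpy version, with toy 2-D masks standing in for the 3-D label volumes:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity index between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    auto = np.array([[1, 1, 0], [0, 1, 0]])      # automated segmentation
    manual = np.array([[1, 1, 0], [0, 1, 1]])    # manual reference
    print(round(dice(auto, manual), 3))          # 0.857
    ```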

  19. Comparison of different methods to quantify fat classes in bakery products.

    PubMed

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs between countries, so the fat content listed on food labels also depends on the country: some countries list crude fat content in the 'Fat' section of the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method, and the results of these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was determined gravimetrically and assessed by fat class using capillary gas chromatography (GC). In most samples, the fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, the monounsaturated fat and saturated fat contents determined by the AOAC 996.06 method were the lowest. Almost no difference was observed between the fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), the monounsaturated fat, saturated fat, and trans fat contents obtained by the automated Soxhlet method were higher than those obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Fully automated analysis of four tobacco-specific N-nitrosamines in mainstream cigarette smoke using two-dimensional online solid phase extraction combined with liquid chromatography-tandem mass spectrometry.

    PubMed

    Zhang, Jie; Bai, Ruoshi; Yi, Xiaoli; Yang, Zhendong; Liu, Xingyu; Zhou, Jun; Liang, Wei

    2016-01-01

    A fully automated method for the detection of four tobacco-specific nitrosamines (TSNAs) in mainstream cigarette smoke (MSS) has been developed. The newly developed method is based on two-dimensional online solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE/LC-MS/MS). The two-dimensional SPE utilizes two cartridges with different extraction mechanisms to clean up interferences of different polarity, minimizing sample matrix effects on each analyte. Chromatographic separation was achieved using a UPLC C18 reversed phase analytical column. Under the optimum online SPE/LC-MS/MS conditions, N'-nitrosonornicotine (NNN), N'-nitrosoanatabine (NAT), N'-nitrosoanabasine (NAB), and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) were baseline separated with good peak shapes. This appears to be the most sensitive method yet reported for the determination of TSNAs in mainstream cigarette smoke. The limits of quantification for NNN, NNK, NAT and NAB reached levels of 6.0, 1.0, 3.0 and 0.6 pg/cig, respectively, well below the lowest levels of TSNAs in the MSS of current commercial cigarettes. The accuracy of the measurement of the four TSNAs was from 92.8 to 107.3%. The relative standard deviations of intra- and inter-day analysis were less than 5.4% and 7.5%, respectively. The main advantages of the developed method are fairly high sensitivity, selectivity and accuracy, minimal sample pre-treatment, full automation, and high throughput. As part of the validation procedure, the developed method was applied to evaluate TSNA yields for 27 top-selling commercial cigarettes in China. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Automated Detection of Solar Loops by the Oriented Connectivity Method

    NASA Technical Reports Server (NTRS)

    Lee, Jong Kwan; Newman, Timothy S.; Gary, G. Allen

    2004-01-01

    An automated technique to segment solar coronal loops from intensity images of the Sun's corona is introduced. It exploits physical characteristics of the solar magnetic field to enable robust extraction from noisy images. The technique is a constructive curve detection approach, constrained by collections of estimates of the magnetic field's orientation. Its effectiveness is evaluated through experiments on synthetic and real coronal images.

  2. Isolation of Mitochondrial DNA from Single, Short Hairs without Roots Using Pressure Cycling Technology.

    PubMed

    Harper, Kathryn A; Meiklejohn, Kelly A; Merritt, Richard T; Walker, Jessica; Fisher, Constance L; Robertson, James M

    2018-02-01

    Hairs are commonly submitted as evidence to forensic laboratories, but standard nuclear DNA analysis is not always possible. Mitochondria (mt) provide another source of genetic material; however, manual isolation is laborious. In a proof-of-concept study, we assessed pressure cycling technology (PCT; an automated approach that subjects samples to varying cycles of high and low pressure) for extracting mtDNA from single, short hairs without roots. Using three microscopically similar donors, we determined the ideal PCT conditions and compared those yields to those obtained using the traditional manual micro-tissue grinder method. Higher yields were recovered from grinder extracts, but yields from PCT extracts exceeded the requirements for forensic analysis, with the DNA quality confirmed through sequencing. Automated extraction of mtDNA from hairs without roots using PCT could be useful for forensic laboratories processing numerous samples.

  3. Automation of DNA and miRNA co-extraction for miRNA-based identification of human body fluids and tissues.

    PubMed

    Kulstein, Galina; Marienfeld, Ralf; Miltner, Erich; Wiegand, Peter

    2016-10-01

    In the last years, microRNA (miRNA) analysis came into focus in the field of forensic genetics. Yet, no standardized and recommendable protocols for co-isolation of miRNA and DNA from forensic relevant samples have been developed so far. Hence, this study evaluated the performance of an automated Maxwell® 16 System-based strategy (Promega) for co-extraction of DNA and miRNA from forensically relevant (blood and saliva) samples compared to (semi-)manual extraction methods. Three procedures were compared on the basis of recovered quantity of DNA and miRNA (as determined by real-time PCR and Bioanalyzer), miRNA profiling (shown by Cq values and extraction efficiency), STR profiles, duration, contamination risk and handling. All in all, the results highlight that the automated co-extraction procedure yielded the highest miRNA and DNA amounts from saliva and blood samples compared to both (semi-)manual protocols. Also, for aged and genuine samples of forensically relevant traces the miRNA and DNA yields were sufficient for subsequent downstream analysis. Furthermore, the strategy allows miRNA extraction only in cases where it is relevant to obtain additional information about the sample type. Besides, this system enables flexible sample throughput and labor-saving sample processing with reduced risk of cross-contamination. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Introducing automation to the molecular diagnosis of Trypanosoma cruzi infection: A comparative study of sample treatments, DNA extraction methods and real-time PCR assays.

    PubMed

    Abras, Alba; Ballart, Cristina; Llovet, Teresa; Roig, Carme; Gutiérrez, Cristina; Tebar, Silvia; Berenguer, Pere; Pinazo, María-Jesús; Posada, Elizabeth; Gascón, Joaquim; Schijman, Alejandro G; Gállego, Montserrat; Muñoz, Carmen

    2018-01-01

    Polymerase chain reaction (PCR) has become a useful tool for the diagnosis of Trypanosoma cruzi infection. The development of automated DNA extraction methodologies and PCR systems is an important step toward the standardization of protocols in routine diagnosis. To date, there are only two commercially available Real-Time PCR assays for the routine laboratory detection of T. cruzi DNA in clinical samples: TCRUZIDNA.CE (Diagnostic Bioprobes Srl) and RealCycler CHAG (Progenie Molecular). Our aim was to evaluate the RealCycler CHAG assay taking into account the whole process. We assessed the usefulness of an automated DNA extraction system based on magnetic particles (EZ1 Virus Mini Kit v2.0, Qiagen) combined with a commercially available Real-Time PCR assay targeting satellite DNA (SatDNA) of T. cruzi (RealCycler CHAG), a methodology used for routine diagnosis in our hospital. It was compared with a well-known strategy combining a commercial DNA isolation kit based on silica columns (High Pure PCR Template Preparation Kit, Roche Diagnostics) with an in-house Real-Time PCR targeting SatDNA. The results of the two methodologies were in almost perfect agreement, indicating they can be used interchangeably. However, when variations in protocol factors were applied (sample treatment, extraction method and Real-Time PCR), the results were less convincing. A comprehensive fine-tuning of the whole procedure is the key to successful results. Guanidine EDTA-blood (GEB) samples are not suitable for DNA extraction based on magnetic particles due to inhibition, at least when samples are not processed immediately. This is the first study to evaluate the RealCycler CHAG assay taking into account the overall process, including three variables (sample treatment, extraction method and Real-Time PCR). Our findings may contribute to the harmonization of protocols between laboratories and to a wider application of Real-Time PCR in molecular diagnostic laboratories associated with health centers.

  5. Quantification of 31 illicit and medicinal drugs and metabolites in whole blood by fully automated solid-phase extraction and ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Bjørk, Marie Kjærgaard; Simonsen, Kirsten Wiese; Andersen, David Wederkinck; Dalsgaard, Petur Weihe; Sigurðardóttir, Stella Rögn; Linnet, Kristian; Rasmussen, Brian Schou

    2013-03-01

    An efficient method for analyzing illegal and medicinal drugs in whole blood using fully automated sample preparation and a short ultra-high-performance liquid chromatography-tandem mass spectrometry (MS/MS) run time is presented. A selection of 31 drugs, including amphetamines, cocaine, opioids, and benzodiazepines, was used. In order to increase the efficiency of routine analysis, a robotic system based on automated liquid handling and capable of handling all unit operations for sample preparation was built on a Freedom Evo 200 platform with several add-ons from Tecan and third-party vendors. Solid-phase extraction was performed using Strata X-C plates. Extraction time for 96 samples was less than 3 h. Chromatography was performed using an ACQUITY UPLC system (Waters Corporation, Milford, USA). Analytes were separated on a 100 mm × 2.1 mm, 1.7 μm Acquity UPLC CSH C(18) column using a 6.5 min 0.1 % ammonia (25 %) in water/0.1 % ammonia (25 %) in methanol gradient and quantified by MS/MS (Waters Quattro Premier XE) in multiple-reaction monitoring mode. Full validation, including linearity, precision and trueness, matrix effect, ion suppression/enhancement of co-eluting analytes, recovery, and specificity, was performed. The method was employed successfully in the laboratory and used for routine analysis of forensic material. In combination with tetrahydrocannabinol analysis, the method covered 96 % of cases involving driving under the influence of drugs. The manual labor involved in preparing blood samples, solvents, etc., was reduced to half an hour per batch. The automated sample preparation setup also minimized human exposure to hazardous materials, provided highly improved ergonomics, and eliminated manual pipetting.

  6. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports have mainly been rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.
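
    The dependency-parse half of such a hybrid can be sketched with spaCy (assuming the en_core_web_sm model has been downloaded). The traversal below, which pairs a finding with its modifiers and prepositional attachments, is a simplified stand-in for the paper's frame construction, not its actual implementation.

    ```python
    import spacy  # assumes: python -m spacy download en_core_web_sm

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("There is a spiculated mass in the upper outer quadrant "
              "of the left breast.")

    # Walk the dependency tree: adjectival modifiers describe a finding,
    # prepositional objects carry its location.
    for token in doc:
        if token.dep_ == "amod":                 # e.g. spiculated -> mass
            print("modifier:", token.text, "->", token.head.text)
        if token.dep_ == "pobj":                 # e.g. in ... quadrant
            print("attachment:", token.head.text, token.text)
    ```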

  7. Automated anatomical labeling method for abdominal arteries extracted from 3D abdominal CT images

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Hoang, Bui Huy; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku

    2012-02-01

    This paper presents an automated anatomical labeling method for abdominal arteries. In abdominal surgery, understanding the blood vessel structure associated with a target organ is very important. The branching pattern of blood vessels differs among individuals, so a system is needed that can assist in understanding a patient's blood vessel structure and the anatomical names of the blood vessels. Previous anatomical labeling methods for abdominal arteries deal with either the upper or the lower abdominal arteries. In this paper, we present an automated anatomical labeling method for both the upper and lower abdominal arteries extracted from CT images. We obtain a tree structure of the artery regions and calculate feature values for each branch. These feature values include the diameter, curvature, direction, and running vectors of a branch. The target arteries of this method are grouped based on branching conditions, and the following processes are applied separately to each group. We compute candidate artery names using classifiers that are trained to output artery names. A correction process based on the rule of majority is then applied to the candidate anatomical names to determine the final names. We applied the proposed method to 23 cases of 3D abdominal CT images. Experimental results showed that the proposed method is able to assign anatomical names across the major abdominal arteries. The recall and precision rates of labeling are 79.01% and 80.41%, respectively.

  8. Automated solid-phase extraction coupled online with HPLC-FLD for the quantification of zearalenone in edible oil.

    PubMed

    Drzymala, Sarah S; Weiz, Stefan; Heinze, Julia; Marten, Silvia; Prinz, Carsten; Zimathies, Annett; Garbe, Leif-Alexander; Koch, Matthias

    2015-05-01

    Established maximum levels for the mycotoxin zearalenone (ZEN) in edible oil require monitoring by reliable analytical methods. Therefore, an automated SPE-HPLC online system based on dynamic covalent hydrazine chemistry has been developed. The SPE step comprises a reversible hydrazone formation between ZEN and a hydrazine moiety covalently attached to a solid phase. Seven hydrazine materials with different properties regarding the resin backbone, pore size, particle size, specific surface area, and loading were evaluated; as a result, a hydrazine-functionalized silica gel was chosen. The final automated online method was validated and applied to the analysis of three maize germ oil samples, including a provisionally certified reference material. The performance criteria for recovery (70-120 %) and precision (RSDr <25 %) set by Commission Regulation (EC) No 401/2006 were fulfilled: the mean recovery was 78 % and RSDr did not exceed 8 %. The results of the SPE-HPLC online method were further compared to results obtained by liquid-liquid extraction with stable isotope dilution analysis LC-MS/MS and found to be in good agreement. The developed SPE-HPLC online system with fluorescence detection allows reliable, accurate, and sensitive quantification (limit of quantification, 30 μg/kg) of ZEN in edible oils while significantly reducing the workload. To our knowledge, this is the first report of an automated SPE-HPLC method based on a covalent SPE approach.

  9. Automated identification of best-quality coronary artery segments from multiple-phase coronary CT angiography (cCTA) for vessel analysis

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Wei, Jun; Kazerooni, Ella A.

    2016-03-01

    We are developing an automated method to identify the best-quality segment among corresponding segments in multiple-phase cCTA. The coronary artery trees are automatically extracted from the different cCTA phases using our multi-scale vessel segmentation and tracking method. An automated registration method is then used to align the multiple-phase artery trees. The corresponding coronary artery segments are identified in the registered vessel trees and are straightened by curved planar reformation (CPR). Four features are extracted from each segment in each phase as quality indicators in the original CT volume and the straightened CPR volume. Each quality indicator is used as a voting classifier to vote on the corresponding segments, and a newly designed weighted voting ensemble (WVE) classifier then determines the best-quality coronary segment. An observer preference study was conducted with three readers who visually rated the quality of the vessels on a 1-to-6 ranking scale. Six and 10 cCTA cases were used as the training and test sets in this preliminary study. For the 10 test cases, the agreement between the automatically identified best-quality (AI-BQ) segments and the radiologist's top 2 rankings was 79.7%, and agreement between AI-BQ and the other two readers was 74.8% and 83.7%, respectively. The results demonstrated that the performance of our automated method was comparable to that of experienced readers in identifying the best-quality coronary segments.
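
    A minimal sketch of a weighted voting ensemble in this spirit: each quality indicator votes for the phase in which it is maximal, and learned weights combine the votes. The indicator values and weights below are hypothetical.

    ```python
    # Weighted voting ensemble (WVE) over per-phase quality indicators.
    import numpy as np

    def best_phase(quality, weights):
        """quality: (n_phases, n_indicators) array; each indicator column
        casts a vote for the phase in which it is maximal, and the votes
        are combined with the ensemble weights."""
        n_phases, n_ind = quality.shape
        votes = np.zeros(n_phases)
        for j in range(n_ind):
            votes[np.argmax(quality[:, j])] += weights[j]
        return int(np.argmax(votes))

    # Four indicators measured in three cCTA phases (synthetic values).
    q = np.array([[0.8, 0.6, 0.7, 0.5],
                  [0.9, 0.4, 0.8, 0.6],
                  [0.7, 0.7, 0.6, 0.9]])
    w = np.array([0.4, 0.1, 0.3, 0.2])   # ensemble weights (assumed)
    print("best-quality segment comes from phase", best_phase(q, w))
    ```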

  10. CRD's Daniela Ushizima Receives DOE Early Career Award

    Science.gov Websites

    The award will fund research into developing new state-of-the-art data analysis methods, with an emphasis on pattern recognition and machine learning, to help scientists extract more information from emerging data sources, along with multidisciplinary teams to interpret the data and computational methods to automate parts of the analysis.

  11. Application of the BioMek 2000 Laboratory Automation Workstation and the DNA IQ System to the extraction of forensic casework samples.

    PubMed

    Greenspoon, Susan A; Ban, Jeffrey D; Sykes, Karen; Ballard, Elizabeth J; Edler, Shelley S; Baisden, Melissa; Covington, Brian L

    2004-01-01

    Robotic systems are commonly utilized for the extraction of database samples. However, the application of robotic extraction to forensic casework samples is a more daunting task. Such a system must be versatile enough to accommodate a wide range of samples that may contain greatly varying amounts of DNA, yet it must pose no more risk of contamination than manual DNA extraction methods. This study demonstrates that the BioMek 2000 Laboratory Automation Workstation, used in combination with the DNA IQ System, is versatile enough to accommodate the wide range of samples typically encountered by a crime laboratory. The use of a silica-coated paramagnetic resin, as in the DNA IQ System, facilitates the adaptation of an open-well, hands-off robotic system to the extraction of casework samples, since no filtration or centrifugation steps are needed; moreover, the DNA remains tightly coupled to the silica-coated paramagnetic resin throughout the process until the elution step. A short pre-extraction incubation step is necessary prior to loading samples onto the robot, and it is at this step that most modifications are made to accommodate the different sample types and substrates commonly encountered with forensic evidentiary samples. Sexual assault (mixed stain) samples, cigarette butts, blood stains, buccal swabs, and various tissue samples were successfully extracted with the BioMek 2000 Laboratory Automation Workstation and the DNA IQ System, with no evidence of contamination throughout the extensive validation studies reported here.

  12. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images, addressing the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve precise road extraction, the method comprises three stages: classification of images based on the maximum likelihood algorithm to categorize images into the classes of interest; modification of the classified images with connected-component and morphological operators to retain pixels of desired objects and remove undesirable pixels of each class; and line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground-truth road map as a reference. Evaluation on representative test images shows completeness values ranging between 77% and 93%.
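
    The final stage lends itself to a short sketch: given candidate road pixels (here synthetic), RANSAC fits a centerline while rejecting clutter. This uses scikit-image's ransac and LineModelND and is an illustration, not the authors' implementation.

    ```python
    # RANSAC line extraction from candidate road pixels (synthetic data).
    import numpy as np
    from skimage.measure import ransac, LineModelND

    rng = np.random.default_rng(1)
    # Synthetic road-pixel coordinates: a noisy line plus clutter.
    t = rng.uniform(0, 100, 200)
    line_pts = np.column_stack([t, 0.5 * t + 10 + rng.normal(0, 1, 200)])
    clutter = rng.uniform(0, 100, (60, 2))
    points = np.vstack([line_pts, clutter])

    # Fit a line robustly; inliers are the pixels on the road centerline.
    model, inliers = ransac(points, LineModelND,
                            min_samples=2, residual_threshold=2.0,
                            max_trials=1000)
    print(f"{inliers.sum()} of {len(points)} pixels lie on the fitted centerline")
    ```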

  13. Comparison of methods of DNA extraction for real-time PCR in a model of pleural tuberculosis.

    PubMed

    Santos, Ana; Cremades, Rosa; Rodríguez, Juan Carlos; García-Pachón, Eduardo; Ruiz, Montserrat; Royo, Gloria

    2010-01-01

    Molecular methods have been reported to have different sensitivities in the diagnosis of pleural tuberculosis, which may in part be caused by the use of different methods of DNA extraction. Our study compares nine DNA extraction systems in an experimental model of pleural tuberculosis. An inoculum of Mycobacterium tuberculosis was added to 23 pleural liquid samples with different characteristics, and DNA was subsequently extracted using nine different methods (seven manual and two automatic) for analysis with real-time PCR. Only two methods were able to detect the presence of M. tuberculosis DNA in all the samples: extraction using columns (Qiagen) and automated extraction with the TNAI system (Roche). The automatic method is more expensive but requires less time. Almost all the false negatives were due to the difficulty of extracting M. tuberculosis DNA; in general, all the methods studied are capable of eliminating inhibitory substances that block the amplification reaction. The M. tuberculosis DNA extraction method used therefore affects the results of the molecular diagnosis of pleural tuberculosis, and DNA extraction systems that have been shown to be effective in pleural liquid should be used.

  14. Simultaneous determination of dextromethorphan, dextrorphan, and guaifenesin in human plasma using semi-automated liquid/liquid extraction and gradient liquid chromatography tandem mass spectrometry.

    PubMed

    Eichhold, Thomas H; McCauley-Myers, David L; Khambe, Deepa A; Thompson, Gary A; Hoke, Steven H

    2007-01-17

    A method for the simultaneous determination of dextromethorphan (DEX), dextrorphan (DET), and guaifenesin (GG) in human plasma was developed, validated, and applied to determine plasma concentrations of these compounds in samples from six clinical pharmacokinetic (PK) studies. Semi-automated liquid handling systems were used to perform the majority of the sample manipulation including liquid/liquid extraction (LLE) of the analytes from human plasma. Stable-isotope-labeled analogues were utilized as internal standards (ISTDs) for each analyte to facilitate accurate and precise quantification. Extracts were analyzed using gradient liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Use of semi-automated LLE with LC-MS/MS proved to be a very rugged and reliable approach for analysis of more than 6200 clinical study samples. The lower limit of quantification was validated at 0.010, 0.010, and 1.0 ng/mL of plasma for DEX, DET, and GG, respectively. Accuracy and precision of quality control (QC) samples for all three analytes met FDA Guidance criteria of +/-15% for average QC accuracy with coefficients of variation less than 15%. Data from the thorough evaluation of the method during development, validation, and application are presented to characterize selectivity, linearity, over-range sample analysis, accuracy, precision, autosampler carry-over, ruggedness, extraction efficiency, ionization suppression, and stability. Pharmacokinetic data are also provided to illustrate improvements in systemic drug and metabolite concentration-time profiles that were achieved by formulation optimization.

  15. Comparison of commercial systems for extraction of nucleic acids from DNA/RNA respiratory pathogens.

    PubMed

    Yang, Genyan; Erdman, Dean E; Kodani, Maja; Kools, John; Bowen, Michael D; Fields, Barry S

    2011-01-01

    This study compared six automated nucleic acid extraction systems and one manual kit for their ability to recover nucleic acids from human nasal wash specimens spiked with five respiratory pathogens, representing Gram-positive bacteria (Streptococcus pyogenes), Gram-negative bacteria (Legionella pneumophila), DNA viruses (adenovirus), segmented RNA viruses (human influenza virus A), and non-segmented RNA viruses (respiratory syncytial virus). The robots and kit evaluated represent major commercially available methods that are capable of simultaneous extraction of DNA and RNA from respiratory specimens, and included platforms based on magnetic-bead technology (KingFisher mL, Biorobot EZ1, easyMAG, KingFisher Flex, and MagNA Pure Compact) or glass fiber filter technology (Biorobot MDX and the manual kit Allprep). All methods yielded extracts free of cross-contamination and RT-PCR inhibition. All automated systems recovered L. pneumophila and adenovirus DNA equivalently. However, the MagNA Pure protocol demonstrated more than 4-fold higher DNA recovery from the S. pyogenes than other methods. The KingFisher mL and easyMAG protocols provided 1- to 3-log wider linearity and extracted 3- to 4-fold more RNA from the human influenza virus and respiratory syncytial virus. These findings suggest that systems differed in nucleic acid recovery, reproducibility, and linearity in a pathogen specific manner. Published by Elsevier B.V.

  16. High-throughput DNA extraction of forensic adhesive tapes.

    PubMed

    Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes

    2016-09-01

    Tape-lifting has since its introduction in the early 2000s become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward, while the subsequent DNA extraction is more challenging due to the stickiness, rigidity, and size of the tape. We have developed, validated, and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on AutoMate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit for purpose in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01 ng/μL increased from 71% to 76%. Apart from providing higher DNA yields than the previous method, the introduction of the direct lysis protocol also halved the amount of manual labour and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples

    PubMed Central

    2012-01-01

    Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466

  18. Automated in vivo identification of fungal infection on human scalp using optical coherence tomography and machine learning

    NASA Astrophysics Data System (ADS)

    Dubey, Kavita; Srivastava, Vishal; Singh Mehta, Dalip

    2018-04-01

    Early identification of fungal infection on the human scalp is crucial for avoiding hair loss. Diagnosis of fungal infection on the human scalp is currently based on visual assessment by trained experts or doctors. Optical coherence tomography (OCT) can capture fungal infection information from the human scalp at high resolution. In this study, we present a fully automated, non-contact, non-invasive optical method for rapid detection of fungal infections based on features extracted from A-line and B-scan OCT images. A multilevel ensemble machine model is designed to perform the automated classification and outperforms the best single classifier based on the features extracted from the OCT images. In this study, 60 samples (30 fungal, 30 normal) were imaged by OCT and eight features were extracted. The classification algorithm had an average sensitivity, specificity, and accuracy of 92.30%, 90.90%, and 91.66%, respectively, for identifying fungal and normal human scalps. This classification ability makes the proposed model readily applicable to classifying human scalp images.

  19. Automated feature extraction and classification from image sources

    USGS Publications Warehouse


    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  20. Comparative evaluation of in-house manual, and commercial semi-automated and automated DNA extraction platforms in the sample preparation of human stool specimens for a Salmonella enterica 5'-nuclease assay.

    PubMed

    Schuurman, Tim; de Boer, Richard; Patty, Rachèl; Kooistra-Smid, Mirjam; van Zwet, Anton

    2007-12-01

    In the present study, three methods (NucliSens miniMAG [bioMérieux], MagNA Pure DNA Isolation Kit III Bacteria/Fungi [Roche], and a silica-guanidiniumthiocyanate [Si-GuSCN-F] procedure) for extracting DNA from stool specimens were compared with regard to analytical performance (relative DNA recovery and downstream real-time PCR amplification of Salmonella enterica DNA), stability of the extracted DNA, hands-on time (HOT), total processing time (TPT), and costs. The Si-GuSCN-F procedure showed the highest analytical performance (relative recovery of 99%, S. enterica real-time PCR sensitivity of 91%) at the lowest cost per extraction (euro 4.28), although it required the longest HOT (144 min) and TPT (176 min) when processing 24 extractions. MiniMAG and MagNA Pure extraction initially showed similar performance (relative recoveries of 57% and 52%, S. enterica real-time PCR sensitivity of 85%). However, when differences in the observed Ct values after real-time PCR were taken into account, MagNA Pure resulted in a significant increase in Ct value compared to both miniMAG and Si-GuSCN-F (on average +1.26 and +1.43 cycles). All methods showed relatively low inhibition rates (<4%), with miniMAG providing the lowest rate (0.7%). Extracted DNA was stable for at least 1 year for all methods. HOT was lowest for MagNA Pure (60 min) and TPT was shortest for miniMAG (121 min). Costs, finally, were euro 4.28 for Si-GuSCN-F, euro 6.69 for MagNA Pure, and euro 9.57 for miniMAG.

  1. Automated extraction method for the center line of spinal canal and its application to the spinal curvature quantification in torso X-ray CT images

    NASA Astrophysics Data System (ADS)

    Hayashi, Tatsuro; Zhou, Xiangrong; Chen, Huayue; Hara, Takeshi; Miyamoto, Kei; Kobayashi, Tatsunori; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2010-03-01

    X-ray CT images have been widely used in clinical routine in recent years. CT images scanned by a modern CT scanner can show the details of various organs and tissues, meaning that various organs and tissues can be interpreted simultaneously on CT images. However, CT image interpretation requires a lot of time and energy, so support for interpreting CT images based on image-processing techniques is expected. Interpretation of spinal curvature is important for clinicians because spinal curvature is associated with various spinal disorders. We propose a quantification scheme for spinal curvature based on the center line of the spinal canal on CT images. The proposed scheme consists of four steps: (1) automated extraction of the skeletal region based on CT number thresholding; (2) automated extraction of the center line of the spinal canal; (3) generation of the median plane image of the spine, reformatted based on the spinal canal; and (4) quantification of the spinal curvature. The proposed scheme was applied to 10 cases and compared with the Cobb angle that is commonly used by clinicians. We found a high correlation (95% confidence interval for lumbar lordosis: 0.81-0.99) between values obtained by the proposed (vector) method and the Cobb angle. The proposed method also provides reproducible results (inter- and intra-observer variability within 2°). These experimental results suggest that the proposed method is effective for quantifying spinal curvature on CT images.
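
    Steps (2) and (4) can be sketched compactly: per-slice centroids of a pre-segmented canal mask (assumed given) yield a center line, and the angle between its end directions serves as a crude curvature measure. Synthetic data; not the authors' code.

    ```python
    # Center line from per-slice canal centroids, plus an angle measure.
    import numpy as np

    def canal_centerline(canal_mask):
        """canal_mask: (n_slices, H, W) boolean volume of the spinal canal.
        Returns one (z, y, x) centroid per axial slice that contains canal."""
        pts = []
        for z in range(canal_mask.shape[0]):
            ys, xs = np.nonzero(canal_mask[z])
            if len(ys):
                pts.append((z, ys.mean(), xs.mean()))
        return np.asarray(pts)

    def angle_between(v1, v2):
        c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(c, -1, 1)))

    # Curvature proxy: angle between centerline directions at both ends,
    # loosely analogous to a Cobb-style angle on the median plane.
    ctr = canal_centerline(np.ones((40, 8, 8), bool))
    v_top, v_bot = ctr[5] - ctr[0], ctr[-1] - ctr[-6]
    print(f"spine angle ~ {angle_between(v_top, v_bot):.1f} deg")
    ```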

  2. Screening for anabolic steroids in urine of forensic cases using fully automated solid phase extraction and LC-MS-MS.

    PubMed

    Andersen, David W; Linnet, Kristian

    2014-01-01

    A screening method for 18 frequently measured exogenous anabolic steroids and the testosterone/epitestosterone (T/E) ratio in forensic cases has been developed and validated. The method involves a fully automated sample preparation including enzyme treatment, addition of internal standards and solid phase extraction followed by analysis by liquid chromatography-tandem mass spectrometry (LC-MS-MS) using electrospray ionization with adduct formation for two compounds. Urine samples from 580 forensic cases were analyzed to determine the T/E ratio and occurrence of exogenous anabolic steroids. Extraction recoveries ranged from 77 to 95%, matrix effects from 48 to 78%, overall process efficiencies from 40 to 54% and the lower limit of identification ranged from 2 to 40 ng/mL. In the 580 urine samples analyzed from routine forensic cases, 17 (2.9%) were found positive for one or more anabolic steroids. Only seven different steroids including testosterone were found in the material, suggesting that only a small number of common steroids are likely to occur in a forensic context. The steroids were often in high concentrations (>100 ng/mL), and a combination of steroids and/or other drugs of abuse were seen in the majority of cases. The method presented serves as a fast and automated screening procedure, proving the suitability of LC-MS-MS for analyzing anabolic steroids. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Automated prediction of protein function and detection of functional sites from structure.

    PubMed

    Pazos, Florencio; Sternberg, Michael J E

    2004-10-12

    Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.

  4. Full-text automated detection of surgical site infections secondary to neurosurgery in Rennes, France.

    PubMed

    Campillo-Gimenez, Boris; Garcelon, Nicolas; Jarno, Pascal; Chapplain, Jean Marc; Cuggia, Marc

    2013-01-01

    The surveillance of Surgical Site Infections (SSI) contributes to the management of risk in French hospitals. Manual identification of infections is costly and time-consuming and limits the promotion of preventive procedures by the dedicated teams. The introduction of alternative methods using automated detection strategies is promising for improving this surveillance. The present study describes an automated detection strategy for SSI in neurosurgery, based on textual analysis of medical reports stored in a clinical data warehouse. The method consists, first, of enrichment and concept extraction from full-text reports using NOMINDEX and, second, of text similarity measurement using a vector space model. The text-based detection was compared to the conventional strategy based on self-declaration and to automated detection using the diagnosis-related group database. The text-mining approach showed the best detection accuracy, with recall and precision equal to 92% and 40%, respectively, and confirmed the value of reusing full-text medical reports to perform automated detection of SSI.
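
    The vector-space step can be sketched with TF-IDF and cosine similarity; the report texts and the decision threshold below are invented placeholders, and the NOMINDEX concept-enrichment step is omitted.

    ```python
    # Vector-space similarity between new reports and known-SSI reports.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    known_ssi = ["wound dehiscence with purulent discharge after craniotomy",
                 "revision surgery for deep surgical site infection"]
    new_reports = ["purulent discharge at the craniotomy site, fever",
                   "routine follow-up, wound healing well"]

    vec = TfidfVectorizer()
    matrix = vec.fit_transform(known_ssi + new_reports)
    sims = cosine_similarity(matrix[len(known_ssi):], matrix[:len(known_ssi)])

    for report, s in zip(new_reports, sims.max(axis=1)):
        flag = "possible SSI" if s > 0.2 else "no signal"   # threshold assumed
        print(f"{s:.2f}  {flag}:  {report}")
    ```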

  5. A time-series method for automated measurement of changes in mitotic and interphase duration from time-lapse movies.

    PubMed

    Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W

    2011-01-01

    Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only two features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence the timing of cell division.
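
    A minimal sketch of the two-feature idea: a frame is scored as mitotic when the nuclear area drops and the average intensity rises relative to the cell's own baseline, and the run length gives the mitotic duration. The thresholds and the toy trace are assumptions, not the paper's parameters.

    ```python
    # Mitosis calling from nuclear area and average intensity time series.
    import numpy as np

    def mitotic_frames(area, intensity, area_drop=0.6, intensity_rise=1.3):
        """Boolean mask of frames scored as mitotic, relative to each
        cell's own interphase baseline (median of the series)."""
        area = np.asarray(area, float)
        intensity = np.asarray(intensity, float)
        return (area < area_drop * np.median(area)) & \
               (intensity > intensity_rise * np.median(intensity))

    # One cell tracked over 12 frames, with a 3-frame mitosis in the middle.
    area = [100, 98, 101, 99, 55, 50, 52, 100, 102, 99, 101, 100]
    inten = [1.0, 1.1, 1.0, 1.0, 1.6, 1.7, 1.6, 1.0, 1.0, 1.1, 1.0, 1.0]
    m = mitotic_frames(area, inten)
    print("mitotic duration:", m.sum(), "frames")   # -> 3
    ```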

  6. Semi-automated extraction and characterization of Stromal Vascular Fraction using a new medical device.

    PubMed

    Hanke, Alexander; Prantl, Lukas; Wenzel, Carina; Nerlich, Michael; Brockhoff, Gero; Loibl, Markus; Gehmert, Sebastian

    2016-01-01

    The stem-cell-rich Stromal Vascular Fraction (SVF) can be harvested by processing lipo-aspirate or fat tissue with enzymatic digestion followed by centrifugation. To date, neither a standardised extraction method for SVF nor a generally accepted protocol for cell application in patients exists. A novel commercially available semi-automated device for the extraction of SVF promises sterility, consistent results, and usability in the clinical routine. The aim of this work was to compare the quantity and quality of the SVF between the new system and an established manual laboratory method. SVF was extracted from lipo-aspirate both by a prototype of the semi-automated UNiStation™ (NeoGenesis, Seoul, Korea) and by hand preparation with common laboratory equipment. Cell composition of the SVF was characterized by multi-parametric flow cytometry (FACSCanto-II, BD Biosciences). The total cell number (quantity) of the SVF was determined, as well as the percentage of cells expressing the stem cell marker CD34, the leucocyte marker CD45, and the marker CD271 for highly proliferative stem cells (quality). Lipo-aspirate obtained from six patients was processed with both the novel device (d) and hand preparation (h), each of which always yielded a macroscopically visible SVF. However, there was a tendency toward a lower cell yield per gram of lipo-aspirate with the device (d: 1.1×10(5)±1.1×10(5) vs. h: 2.0×10(5)±1.7×10(5); p = 0.06). Notably, the percentage of CD34+ cells was significantly lower with the device (d: 57.3% ±23.8% vs. h: 74.1% ±13.4%; p = 0.02), and CD45+ leukocyte counts tended to be higher than with hand preparation (d: 20.7% ±15.8% vs. h: 9.8% ±7.1%; p = 0.07). The percentage of highly proliferative CD271+ cells was similar for both methods (d: 12.9% ±9.6% vs. h: 13.4% ±11.6%; p = 0.74), and no differences were found for double-positive CD34+/CD45+ cells (d: 5.9% ±1.7% vs. h: 1.7% ±1.1%; p = 0.13) or CD34+/CD271+ cells (d: 24.1% ±12.0% vs. h: 14.2% ±8.5%; p = 0.07). The semi-automated closed system provides a considerable amount of sterile SVF with high reproducibility, and the SVF extracted by both methods showed a similar cell composition, in accordance with the data from the literature. This semi-automated device offers an opportunity to take research and application of the SVF one step further toward the clinic.

  7. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models.

    PubMed

    Misra, Dharitri; Chen, Siyuan; Thoma, George R

    2009-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U.S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system.
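
    The string-pattern stage can be illustrated with plain regular expressions applied to text from a recognized layout. The field patterns and page text below are invented and are far simpler than the NLM system's rules.

    ```python
    # Rule-based metadata search via field-specific regular expressions.
    import re

    FIELD_PATTERNS = {
        "date":   re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
                             r"[a-z]*\.?\s+\d{1,2},\s+\d{4}"),
        "docket": re.compile(r"Docket\s+No\.?\s*([A-Z0-9-]+)"),
    }

    def extract_metadata(page_text):
        meta = {}
        for field, pat in FIELD_PATTERNS.items():
            m = pat.search(page_text)
            if m:
                meta[field] = m.group(1) if m.groups() else m.group(0)
        return meta

    page = "Docket No. 84N-0102 ... Issued: March 5, 1985"
    print(extract_metadata(page))
    # -> {'date': 'March 5, 1985', 'docket': '84N-0102'}
    ```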

  8. Validation of high-throughput measurement system with microwave-assisted extraction, fully automated sample preparation device, and gas chromatography-electron capture detector for determination of polychlorinated biphenyls in whale blubber.

    PubMed

    Fujita, Hiroyuki; Honda, Katsuhisa; Hamada, Noriaki; Yasunaga, Genta; Fujise, Yoshihiro

    2009-02-01

    Validation of a high-throughput measurement system with microwave-assisted extraction (MAE), a fully automated sample preparation device (SPD), and gas chromatography-electron capture detection (GC-ECD) for the determination of polychlorinated biphenyls (PCBs) in minke whale blubber was performed. PCB congeners accounting for >95% of the total PCB burden in blubber were efficiently extracted with a small volume (20 mL) of n-hexane using MAE, owing to simultaneous saponification and extraction. The crude extract obtained by MAE was then rapidly purified and automatically solvent-exchanged into a small volume (1 mL) of toluene using the SPD, without the use of concentrators, and the concentration of PCBs in the purified, concentrated solution was accurately determined by GC-ECD. The results of an accuracy test using a certified reference material (SRM 1588b; cod liver oil) showed good agreement with the NIST certified concentration values. The method quantification limit for total PCBs in whale blubber was 41 ng g(-1), and the complete measurement takes only four hours. Consequently, this method is well suited for the monitoring and screening of PCBs in the conservation of the marine ecosystem and the safe distribution of foods.

  9. Methods for automatically analyzing humpback song units.

    PubMed

    Rickwood, Peter; Taylor, Andrew

    2008-03-01

    This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals, and unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use in automated analysis of not only humpbacks, but possibly also in other bioacoustic settings where automated analysis is desirable.
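
    One common way to determine the number of signal categories automatically, sketched below, is to scan cluster counts and keep the best silhouette score; the paper's actual clustering procedure may differ, and the acoustic features here are synthetic.

    ```python
    # Unsupervised classification with automatic selection of the
    # number of categories via the silhouette criterion.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Hypothetical per-unit features (e.g., duration, peak frequency, ...):
    # three synthetic clusters of song units.
    X = np.vstack([rng.normal(c, 0.3, size=(40, 3)) for c in (0, 2, 4)])

    best_k, best_s = None, -1.0
    for k in range(2, 8):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        s = silhouette_score(X, labels)
        if s > best_s:
            best_k, best_s = k, s
    print(f"automatically selected {best_k} signal categories (silhouette {best_s:.2f})")
    ```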

  10. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. The two vessel-enhanced images are then transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was demonstrated via comparative analysis with existing methods, validated using the publicly available DRIVE database.
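
    A compact sketch of the combination step using scikit-image: Gabor filtering plus Otsu automatic thresholding, with the two binary maps OR-ed together. The synthetic image and parameter choices are illustrative only.

    ```python
    # Gabor filtering + Otsu thresholding, combined into one vessel map.
    import numpy as np
    from skimage.filters import gabor, threshold_otsu

    rng = np.random.default_rng(0)
    green = rng.random((64, 64))
    green[30:34, :] = 0.05                 # a dark "vessel" stripe

    gabor_real, _ = gabor(green, frequency=0.2)

    def to_binary(img, invert=True):
        t = threshold_otsu(img)            # automatic threshold
        return img < t if invert else img > t   # vessels are dark

    vessels = to_binary(green) | to_binary(gabor_real)
    print("vessel pixels:", int(vessels.sum()))
    ```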

  11. Cell nuclei segmentation in fluorescence microscopy images using inter- and intra-region discriminative information.

    PubMed

    Song, Yang; Cai, Weidong; Feng, David Dagan; Chen, Mei

    2013-01-01

    Automated segmentation of cell nuclei in microscopic images is critical to high throughput analysis of the ever increasing amount of data. Although cell nuclei are generally visually distinguishable for human, automated segmentation faces challenges when there is significant intensity inhomogeneity among cell nuclei or in the background. In this paper, we propose an effective method for automated cell nucleus segmentation using a three-step approach. It first obtains an initial segmentation by extracting salient regions in the image, then reduces false positives using inter-region feature discrimination, and finally refines the boundary of the cell nuclei using intra-region contrast information. This method has been evaluated on two publicly available datasets of fluorescence microscopic images with 4009 cells, and has achieved superior performance compared to popular state of the art methods using established metrics.

  12. Metal-organic framework mixed-matrix disks: Versatile supports for automated solid-phase extraction prior to chromatographic separation.

    PubMed

    Ghani, Milad; Font Picó, Maria Francesca; Salehinia, Shima; Palomino Cabello, Carlos; Maya, Fernando; Berlier, Gloria; Saraji, Mohammad; Cerdà, Víctor; Turnes Palomino, Gemma

    2017-03-10

    We present for the first time the application of metal-organic framework (MOF) mixed-matrix disks (MMD) for the automated flow-through solid-phase extraction (SPE) of environmental pollutants. Zirconium terephthalate UiO-66 and UiO-66-NH2 MOFs of different crystal sizes (90, 200 and 300 nm) have been incorporated into mechanically stable polyvinylidene difluoride (PVDF) disks. The performance of the MOF-MMDs for automated SPE of seven substituted phenols prior to HPLC analysis has been evaluated using the sequential injection analysis technique. MOF-MMDs enabled the simultaneous extraction of phenols with the concomitant size exclusion of molecules of larger size. The best extraction performance was obtained using a MOF-MMD containing 90 nm UiO-66-NH2 crystals. Using the selected MOF-MMD, detection limits ranging from 0.1 to 0.2 μg L(-1) were obtained. Relative standard deviations ranged from 3.9 to 5.3% intra-day and 4.7 to 5.7% inter-day. Membrane batch-to-batch reproducibility was from 5.2 to 6.4%. Three different groundwater samples were analyzed with the proposed method using MOF-MMDs, obtaining recoveries ranging from 90 to 98% for all tested analytes. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Direct analysis of textile dyes from trace fibers by automated microfluidics extraction system coupled with Q-TOF mass spectrometer for forensic applications.

    PubMed

    Sultana, Nadia; Gunning, Sean; Furst, Stephen J; Garrard, Kenneth P; Dow, Thomas A; Vinueza, Nelson R

    2018-05-19

    Textile fiber is a common form of transferable trace evidence at a crime scene. Different techniques such as microscopy and spectroscopy are currently used for trace fiber analysis, and dye characterization adds important molecular specificity to the analysis. In this study, we performed a direct trace fiber analysis method via dye characterization using a novel automated microfluidics device (MFD) dye extraction system coupled with a quadrupole time-of-flight (Q-TOF) mass spectrometer (MS). The MFD system used an in-house automated procedure requiring only 10 μL of organic solvent for the extraction; total extraction and identification time is under 12 min. A variety of sulfonated azo and anthraquinone dyes were analyzed from ∼1 mm long nylon fiber samples. The methodology successfully characterized multiple dyes (≥3 dyes) from a single fiber thread, and dye characterization was also possible from single fibers with a diameter of ∼10 μm. The MFD-MS system was used for elemental composition and isotopic distribution analysis, while MFD-MS/MS was used for structural characterization of dyes on fibers. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
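
    The feature step can be sketched directly from the description: distances from eight tracked markers to the face center, summarized as mean, variance, and RMS, then fed to a K-nearest-neighbor classifier. The marker tracks and labels below are synthetic stand-ins.

    ```python
    # Marker-distance statistics (mean, variance, RMS) + KNN classification.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def marker_features(markers, center):
        """markers: (n_frames, 8, 2) tracked positions; returns the three
        statistical features of marker-to-center distance, per marker."""
        d = np.linalg.norm(markers - center, axis=2)        # (n_frames, 8)
        return np.concatenate([d.mean(0), d.var(0), np.sqrt((d**2).mean(0))])

    rng = np.random.default_rng(0)
    center = np.array([160.0, 120.0])
    X = [marker_features(rng.normal(100, 20, (30, 8, 2)), center) for _ in range(40)]
    y = rng.choice(["happy", "sad", "angry"], size=40)      # toy labels

    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(knn.predict([X[0]])[0])
    ```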

  16. AUTOMATED SOLID PHASE EXTRACTION GC/MS FOR ANALYSIS OF SEMIVOLATILES IN WATER AND SEDIMENTS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line sampl...

  17. Queries over Unstructured Data: Probabilistic Methods to the Rescue

    NASA Astrophysics Data System (ADS)

    Sarawagi, Sunita

    Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant, and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution with query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various top-k count queries on imprecise duplicates.
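
    A small CRF extraction sketch using the third-party sklearn-crfsuite package (an assumption; the text does not prescribe a library). The features, sentences, and labels are toy inventions for an address-like extraction task.

    ```python
    # CRF sequence labeling for record extraction from text.
    # Requires: pip install sklearn-crfsuite
    import sklearn_crfsuite

    def token_features(sent, i):
        w = sent[i]
        return {"lower": w.lower(), "isdigit": w.isdigit(),
                "istitle": w.istitle(),
                "prev": sent[i - 1].lower() if i else "<s>"}

    train = [(["John", "lives", "at", "12", "Oak", "St"],
              ["NAME", "O", "O", "NUM", "STREET", "STREET"]),
             (["Mary", "works", "at", "9", "Elm", "Ave"],
              ["NAME", "O", "O", "NUM", "STREET", "STREET"])]

    X = [[token_features(s, i) for i in range(len(s))] for s, _ in train]
    y = [labels for _, labels in train]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict([[token_features(["Ann", "at", "3", "Pine", "Rd"], i)
                        for i in range(5)]]))
    ```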

  18. Classification of the Gabon SAR Mosaic Using a Wavelet Based Rule Classifier

    NASA Technical Reports Server (NTRS)

    Simard, Marc; Saatchi, Sasan; DeGrandi, Gianfranco

    2000-01-01

    A method is developed for semi-automated classification of SAR images of the tropical forest. Information is extracted using the wavelet transform (WT), which allows structural information in the image to be extracted as a function of scale. To classify the SAR image, a Decision Tree Classifier is used, with pruning applied to optimize classification rate versus tree size. The results give explicit insight into the type of information useful for a given class.
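
    Both ingredients can be sketched briefly: wavelet subband energies as scale-dependent structural features (PyWavelets) and a decision tree pruned by cost-complexity pruning in scikit-learn, a stand-in for the pruning used in the paper. The image patches are synthetic.

    ```python
    # Wavelet subband energies + pruned decision tree classifier.
    import numpy as np
    import pywt
    from sklearn.tree import DecisionTreeClassifier

    def wavelet_features(patch, wavelet="db2", level=2):
        """Energy of each subband: structural information per scale."""
        coeffs = pywt.wavedec2(patch, wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]
        for (cH, cV, cD) in coeffs[1:]:
            feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
        return feats

    rng = np.random.default_rng(0)
    X = [wavelet_features(rng.random((32, 32)) * s) for s in (1, 1, 2, 2) * 20]
    y = ["forest", "forest", "clearing", "clearing"] * 20

    # ccp_alpha > 0 prunes the tree: smaller trees, better generalization.
    tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X, y)
    print("tree size:", tree.tree_.node_count, "nodes")
    ```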

  19. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images, but their empirical parameters must be adjusted to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features that represent the relationship between voxel intensities and organ labels. We optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
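
    The structured perceptron update itself is compact enough to sketch: predict with the current weights, then shift the weights toward the gold labeling's features and away from the prediction's. The unary-only feature map and toy data below are simplifications; the paper's graphical model and dual decomposition are omitted.

    ```python
    # Structured perceptron with a unary joint feature map (toy version).
    import numpy as np

    def feats(x_row, label, n_labels):
        """Joint feature map phi(x, y): the intensity features are placed
        in the block belonging to the assigned label."""
        phi = np.zeros(n_labels * len(x_row))
        phi[label * len(x_row):(label + 1) * len(x_row)] = x_row
        return phi

    def predict(x_row, w, n_labels):
        return int(np.argmax([w @ feats(x_row, l, n_labels) for l in range(n_labels)]))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] > 0).astype(int)             # toy "organ labels"
    n_labels, w = 2, np.zeros(2 * 4)

    for _ in range(5):                        # perceptron epochs
        for x_row, gold in zip(X, y):
            pred = predict(x_row, w, n_labels)
            if pred != gold:
                w += feats(x_row, gold, n_labels) - feats(x_row, pred, n_labels)
    print("training accuracy:", np.mean([predict(r, w, 2) == t for r, t in zip(X, y)]))
    ```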

  20. Fully automated determination of nicotine and its major metabolites in whole blood by means of a DBS online-SPE LC-HR-MS/MS approach for sports drug testing.

    PubMed

    Tretzel, Laura; Thomas, Andreas; Piper, Thomas; Hedeland, Mikael; Geyer, Hans; Schänzer, Wilhelm; Thevis, Mario

    2016-05-10

    Dried blood spots (DBS) represent a sample matrix collected under minimally invasive, straightforward, and robust conditions. DBS specimens have been shown to provide appropriate test material for different analytical disciplines, e.g., preclinical drug development, therapeutic drug monitoring, forensic toxicology, and diagnostic analysis of metabolic disorders in newborns. However, the sample preparation has occasionally been reported as laborious and time consuming. In order to minimize the manual workload and to substantiate the suitability of DBS for high sample throughput, the automation of sample preparation processes is of paramount interest. In the current study, the development and validation of a fully automated DBS extraction method coupled to online solid-phase extraction is presented for nicotine, its major metabolites nornicotine, cotinine, and trans-3'-hydroxycotinine, and the tobacco alkaloids anabasine and anatabine, based on the rationale that the use of nicotine-containing products for performance-enhancing purposes has been monitored by the World Anti-Doping Agency (WADA) for several years. Automation-derived DBS sample extracts were directed online to liquid chromatography high resolution/high mass accuracy tandem mass spectrometry, and target analytes were determined with the support of four deuterated internal standards. Validation of the method yielded precise (CV <7.5% for intraday and <12.3% for interday measurements) and linear (r(2)>0.998) results. The limit of detection was established at 5 ng mL(-1) for all studied compounds, the extraction recovery ranged from 25 to 44%, and no matrix effects were observed. To exemplify the applicability of the DBS online-SPE LC-MS/MS approach for sports drug testing purposes, the method was applied to authentic DBS samples obtained from smokers, snus users, and e-cigarette users. Statistical evaluation of the results indicated differences in metabolic behavior depending on the route of administration (inhalative versus buccal absorption) in terms of the ratio of nicotine to nornicotine. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.
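
    A toy sketch of the semantic-constraint idea: a functional constraint (one capital per country) exposes mutually exclusive candidate facts, which then become crowdsourcing questions. The facts and the constraint are invented for illustration.

    ```python
    # Detecting candidate-fact conflicts with a functional constraint.
    facts = [("Paris", "capital_of", "France"),
             ("Lyon",  "capital_of", "France"),
             ("Paris", "city_in",    "France")]

    def conflicts(facts, functional_rel="capital_of"):
        """A functional relation admits one subject per object; any second
        subject for the same object is a potential extraction error."""
        seen, clashes = {}, []
        for s, r, o in facts:
            if r == functional_rel:
                if o in seen and seen[o] != s:
                    clashes.append(((seen[o], r, o), (s, r, o)))
                seen.setdefault(o, s)
        return clashes

    for a, b in conflicts(facts):
        print("ask the crowd:", a, "vs", b)
    ```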

  2. Automated lung tumor segmentation for whole body PET volume based on novel downhill region growing

    NASA Astrophysics Data System (ADS)

    Ballangan, Cherry; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Feng, Dagan

    2010-03-01

    We propose an automated lung tumor segmentation method for whole-body PET images based on a novel downhill region growing (DRG) technique, which regards homogeneous tumor hotspots as 3D monotonically decreasing functions. The method has three major steps: thoracic slice extraction with K-means clustering of the slice features; hotspot segmentation with DRG; and decision-tree-based hotspot classification. To overcome the common problem of leakage into adjacent hotspots in automated lung tumor segmentation, DRG employs the tumors' SUV monotonicity, and it uses the gradient magnitude of the tumors' SUV to improve tumor boundary definition. We used 14 PET volumes from patients with primary NSCLC for validation. The thoracic region extraction step achieved good and consistent results for all patients despite marked differences in the size and shape of the lungs and the presence of large tumors. The DRG technique was able to avoid leakage into adjacent hotspots and produced a volumetric overlap fraction of 0.61 +/- 0.13, which outperformed four other methods whose overlap fractions varied from 0.40 +/- 0.24 to 0.59 +/- 0.14. Of the 18 tumors in the 14 NSCLC studies, 15 lesions were classified correctly, 2 were false negative and 15 were false positive.
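
    The DRG idea reduces to a small sketch: grow from a hotspot maximum and accept a neighbor only if its SUV does not exceed the voxel it is reached from, so growth cannot climb into an adjacent hotspot. The one-row SUV map is synthetic, and the gradient-based boundary refinement is omitted.

    ```python
    # Downhill region growing on a toy SUV map.
    import numpy as np
    from collections import deque

    def downhill_grow(suv, seed, min_suv=2.0):
        grown = np.zeros(suv.shape, bool)
        grown[seed] = True
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                # Accept only non-increasing SUV: "downhill" growth.
                if (0 <= ny < suv.shape[0] and 0 <= nx < suv.shape[1]
                        and not grown[ny, nx]
                        and min_suv <= suv[ny, nx] <= suv[y, x]):
                    grown[ny, nx] = True
                    queue.append((ny, nx))
        return grown

    # Two adjacent hotspots; growth from the left peak must not leak right.
    suv = np.array([[2, 3, 5, 3, 2, 3, 6, 3, 2]], float)
    print(downhill_grow(suv, (0, 2)).astype(int))   # [[1 1 1 1 1 0 0 0 0]]
    ```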

  3. Automated Control of the Organic and Inorganic Composition of Aloe vera Extracts Using (1)H NMR Spectroscopy.

    PubMed

    Monakhova, Yulia B; Randel, Gabriele; Diehl, Bernd W K

    2016-09-01

    Recent classification of Aloe vera whole-leaf extract by the International Agency for Research on Cancer as a possible carcinogen to humans, as well as the continuing adulteration of authentic A. vera material, has generated renewed interest in controlling A. vera. The existing NMR spectroscopic method for the analysis of A. vera, based on a routine developed at Spectral Service, was extended. Apart from aloverose, glucose, malic acid, lactic acid, citric acid, whole-leaf material (WLM), acetic acid, fumaric acid, sodium benzoate, and potassium sorbate, the quantification of Mg(2+), Ca(2+), and fructose is possible with the addition of a Cs-EDTA solution to the sample. The proposed methodology was automated, including phasing, baseline correction, deconvolution (based on the Lorentzian function), integration, quantification, and reporting. The NMR method was applied to 41 A. vera preparations in the form of liquid A. vera juice and solid A. vera powder, and the advantages of the new NMR methodology over the previous method are discussed. Correlation between the new and standard NMR methodologies was significant for aloverose, glucose, malic acid, lactic acid, citric acid, and WLM (P < 0.0001, R(2) = 0.99). NMR was found to be suitable for the automated simultaneous quantitative determination of 13 parameters in A. vera.
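
    The Lorentzian deconvolution step can be sketched with a curve fit: one Lorentzian line is fitted to a synthetic peak and integrated analytically, the area being proportional to concentration. This is an illustration, not the Spectral Service routine.

    ```python
    # Lorentzian line fitting and analytic peak integration.
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(x, amp, x0, gamma):
        return amp * gamma**2 / ((x - x0)**2 + gamma**2)

    x = np.linspace(4.0, 6.0, 500)                     # ppm axis (synthetic)
    rng = np.random.default_rng(0)
    yobs = lorentzian(x, 10.0, 5.1, 0.02) + rng.normal(0, 0.1, x.size)

    popt, _ = curve_fit(lorentzian, x, yobs, p0=[5.0, 5.0, 0.05])
    amp, x0, gamma = popt
    area = np.pi * amp * gamma                          # analytic peak area
    print(f"peak at {x0:.3f} ppm, area {area:.3f} (proportional to concentration)")
    ```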

  4. Automation of ⁹⁹Tc extraction by LOV prior to ICP-MS detection: application to environmental samples.

    PubMed

    Rodríguez, Rogelio; Leal, Luz; Miranda, Silvia; Ferrer, Laura; Avivar, Jessica; García, Ariel; Cerdà, Víctor

    2015-02-01

    A new, fast, automated, and inexpensive sample pre-treatment method for (99)Tc determination by inductively coupled plasma-mass spectrometry (ICP-MS) is presented. The miniaturized approach is based on a lab-on-valve (LOV) system, allowing automatic separation and preconcentration of (99)Tc. Selectivity is provided by the solid-phase extraction system used (TEVA resin), which selectively retains the pertechnetate ion in diluted nitric acid solution. The proposed system minimizes sample handling, reduces reagent volumes, and improves intermediate precision and sample throughput, offering a significant decrease in both time and cost per analysis in comparison to other flow techniques and batch methods. The proposed LOV system has been successfully applied to different samples of environmental interest (water and soil) with satisfactory recoveries, between 94% and 98%. The detection limit (LOD) of the developed method is 0.005 ng. The high durability of the resin and the low amount required (32 mg), the good intermediate precision (RSD 3.8%) and repeatability (RSD 2%), and the high extraction frequency (up to 5 h(-1)) make this method an inexpensive, fast, and high-precision tool for monitoring (99)Tc in environmental samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Design and Development of the Terrain Information Extraction System

    DTIC Science & Technology

    1990-09-04

    While the system successfully demonstrated relief measurement and orthophoto production, automated feature extraction has remained "the major problem of today's ..." ... the hierarchical relaxation correlation method developed by Helava Associates, Inc. and digital orthophoto production. To achieve this high accuracy ... image memory transfer rates will be achieved by using data blocks or "image tiles." Further, an image fringe loading module will be implemented which ...

  6. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. To test the validity of the proposed method, we applied the EEG feature selection method based on a decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
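
    A sketch of the named pipeline under synthetic data: PCA for feature extraction, a decision tree to rank components, and an SVM on the selected subset. The component count and selection size are arbitrary choices, not the paper's settings.

    ```python
    # PCA features -> decision-tree feature ranking -> SVM classification.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 64))          # 120 epochs x 64 raw features
    y = (X[:, :4].sum(axis=1) > 0).astype(int)

    X_pca = PCA(n_components=20, random_state=0).fit_transform(X)

    # The decision tree ranks PCA components; keep the most informative ones.
    tree = DecisionTreeClassifier(random_state=0).fit(X_pca, y)
    keep = np.argsort(tree.feature_importances_)[::-1][:5]

    score = cross_val_score(SVC(kernel="linear"), X_pca[:, keep], y, cv=5).mean()
    print(f"selected components {sorted(keep.tolist())}, CV accuracy {score:.2f}")
    ```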

  7. Automated solid-phase extraction-liquid chromatography-tandem mass spectrometry analysis of 11-nor-Delta9-tetrahydrocannabinol-9-carboxylic acid in human urine specimens: application to a high-throughput urine analysis laboratory.

    PubMed

    Robandt, P V; Klette, K L; Sibum, M

    2009-10-01

    An automated solid-phase extraction coupled with liquid chromatography and tandem mass spectrometry (SPE-LC-MS-MS) method for the analysis of 11-nor-Delta(9)-tetrahydrocannabinol-9-carboxylic acid (THC-COOH) in human urine specimens was developed. The method was linear (R(2) = 0.9986) to 1000 ng/mL with no carryover evidenced at 2000 ng/mL. Limits of quantification and detection were found to be 2 ng/mL. Interrun precision was evaluated at the 15 ng/mL level over nine batches spanning 15 days (n = 45). The coefficient of variation (%CV) was found to be 5.5% over the course of the validation. Intrarun precision of a 15 ng/mL control (n = 5) ranged from 0.58% CV to 7.4% CV for the same set of analytical batches. Interference was tested using (+/-)-11-hydroxy-Delta(9)-tetrahydrocannabinol, cannabidiol, (-)-Delta(8)-tetrahydrocannabinol, and cannabinol. One hundred and nineteen specimens found to contain THC-COOH by a previously validated gas chromatography-mass spectrometry (GC-MS) procedure were reanalyzed with the SPE-LC-MS-MS method. Excellent agreement was found (R(2) = 0.9925) in the parallel comparison study. The automated SPE procedure eliminates the human factors of specimen handling, extraction, and derivatization, thereby reducing labor costs and rework resulting from human error or technique issues. Additionally, method runtime is greatly reduced (e.g., during parallel studies the SPE-LC-MS-MS instrument was often finished with analysis by the time the technician finished the offline SPE and derivatization procedure prior to the GC-MS analysis).

  8. In vivo automated quantification of quality of apples during storage using optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna

    2018-06-01

    Moisture content is an important quality attribute of fruits and vegetables. Since roughly 80% of an apple's content is water, a decrease in moisture content degrades the quality of apples (Golden Delicious). Computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel was used to perform automated classification. Our proposed method opens up the possibility of fully automated, quantitative, in vivo evaluation of the quality of wax-coated apples during storage, based on their morphological features. Our results demonstrate that the analysis of computational and texture features of OCT images may be a good non-destructive method for assessing the quality of apples.
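
    As a sketch of the texture-plus-SVM idea, the example below computes gray-level co-occurrence (GLCM) statistics with scikit-image and trains a Gaussian-kernel SVM. It assumes scikit-image >= 0.19 (graycomatrix/graycoprops names) and uses random arrays in place of real OCT scans.

```python
# Minimal sketch: GLCM texture features from an OCT-like image, classified
# with a Gaussian-kernel SVM. Images and labels are random stand-ins.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(img):
    """Contrast/homogeneity/energy/correlation from a GLCM."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# stand-ins for OCT B-scans of fresh (0) vs degraded (1) apples
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, 40)

X = np.array([texture_features(im) for im in images])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```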

  9. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.

    PubMed

    Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike

    2010-01-01

    An increasingly common component of studies in synthetic and systems biology is the analysis of the dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images, namely segmentation and lineage reconstruction, to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright-field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
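
    A toy version of the neighborhood-based linking step is sketched below: each cell in the new frame is matched to the nearest cell in the previous frame, with an area term in the score. The weighting, distance cutoff, and (x, y, area) rows are invented for illustration; the published scoring is more elaborate.

```python
# Minimal sketch of neighborhood-based frame-to-frame lineage linking:
# each cell in frame t+1 is linked to the nearest compatible cell in frame t.
# Centroids/areas below are illustrative stand-ins for segmentation output.
import numpy as np
from scipy.spatial.distance import cdist

def link_frames(cells_t, cells_t1, max_dist=15.0):
    """Return (index_t, index_t1) links using a distance-plus-area score."""
    d = cdist(cells_t[:, :2], cells_t1[:, :2])                       # centroids
    score = d + 0.1 * np.abs(cells_t[:, 2:3] - cells_t1[:, 2:3].T)   # area term
    links = []
    for j in range(score.shape[1]):                                  # each new cell
        i = int(np.argmin(score[:, j]))
        if d[i, j] <= max_dist:
            links.append((i, j))
    return links

frame_t  = np.array([[10, 10, 50], [40, 42, 80]])    # rows are x, y, area
frame_t1 = np.array([[12, 11, 52], [41, 45, 83]])
print(link_frames(frame_t, frame_t1))
```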

  10. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regard to DNA yields for the tested robotic workstations proved to be excellent and not significantly different from that offered by the manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent to or greater than those observed with phenol/chloroform extraction as well as with our original validated automated magnetic bead percolation-based extraction process. Our data further support the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  11. An improved high-throughput lipid extraction method for the analysis of human brain lipids.

    PubMed

    Abbott, Sarah K; Jenner, Andrew M; Mitchell, Todd W; Brown, Simon H J; Halliday, Glenda M; Garner, Brett

    2013-03-01

    We have developed a protocol suitable for high-throughput lipidomic analysis of human brain samples. The traditional Folch extraction (using chloroform and glass-glass homogenization) was compared to a high-throughput method combining methyl-tert-butyl ether (MTBE) extraction with mechanical homogenization utilizing ceramic beads. This high-throughput method significantly reduced sample handling time and increased efficiency compared to glass-glass homogenization. Furthermore, replacing chloroform with MTBE is safer (less carcinogenic/toxic), with lipids dissolving in the upper phase, allowing for easier pipetting and the potential for automation (i.e., robotics). Both methods were applied to the analysis of human occipital cortex. Lipid species (including ceramides, sphingomyelins, choline glycerophospholipids, ethanolamine glycerophospholipids and phosphatidylserines) were analyzed via electrospray ionization mass spectrometry and sterol species were analyzed using gas chromatography mass spectrometry. No differences in lipid species composition were evident when the lipid extraction protocols were compared, indicating that MTBE extraction with mechanical bead homogenization provides an improved method for the lipidomic profiling of human brain tissue.

  12. General methodology for simultaneous representation and discrimination of multiple object classes

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    We address a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and for classification and pose estimation of two similar objects under 3D aspect angle variations.

  13. Pharmacokinetic Studies of Chinese Medicinal Herbs Using an Automated Blood Sampling System and Liquid Chromatography-mass Spectrometry.

    PubMed

    Wu, Yu-Tse; Wu, Ming-Tsang; Lin, Chia-Chun; Chien, Chao-Feng; Tsai, Tung-Hu

    2012-01-01

    The safety of herbal products is one of the major concerns for the modernization of traditional Chinese medicine, and pharmacokinetic data on medicinal herbs guide the rational design of herbal formula use. This article reviews the advantages of automated blood sampling (ABS) systems for pharmacokinetic studies. In addition, three commonly used sample preparation methods, protein precipitation, liquid-liquid extraction and solid-phase extraction, are introduced. Furthermore, the definition, causes and evaluation of matrix effects in liquid chromatography-mass spectrometry (LC/MS) analysis are discussed. Finally, we present our previous work as practical examples of the application of ABS systems and LC/MS to the pharmacokinetic study of Chinese medicinal herbs.

  14. Extraction of Blebs in Human Embryonic Stem Cell Videos.

    PubMed

    Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao

    2016-01-01

    Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated, fast and accurate approach for bleb sequence extraction.
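
    The idea of scoring consecutive-frame candidates by change in area and orientation can be sketched in a few lines. The weights and normalization below are illustrative assumptions, not the published score function.

```python
# Minimal sketch of a bio-inspired score for consecutive-frame bleb candidates,
# rewarding smooth changes in area and orientation. Weights are illustrative.
import numpy as np

def bleb_score(area_prev, area_curr, theta_prev, theta_curr,
               w_area=0.5, w_theta=0.5):
    """Higher score = more plausible continuation of the same bleb."""
    d_area = abs(area_curr - area_prev) / max(area_prev, 1e-9)
    d_theta = abs(theta_curr - theta_prev) % 180.0   # orientation in degrees
    d_theta = min(d_theta, 180.0 - d_theta) / 90.0   # normalize to [0, 1]
    return 1.0 - (w_area * min(d_area, 1.0) + w_theta * d_theta)

# a bleb expanding slightly while barely rotating scores high
print(bleb_score(120.0, 130.0, 30.0, 33.0))   # ~0.94
```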

  15. Can we replace curation with information extraction software?

    PubMed

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.

  16. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models

    PubMed Central

    Misra, Dharitri; Chen, Siyuan; Thoma, George R.

    2010-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U. S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system. PMID:21179386
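
    As a flavor of the rule-based pattern-search stage, the sketch below applies a few regular-expression rules to OCR-like text. The field names, patterns, and sample string are invented; NLM's actual rules are layout-dependent and far richer.

```python
# Minimal sketch of rule-based metadata string-pattern search: regex rules
# pull fields out of OCR text from an already-recognized layout.
import re

RULES = {
    "title": re.compile(r"^(?P<v>[A-Z][^\n]{10,120})\n", re.M),
    "date":  re.compile(r"\b(?P<v>(?:19|20)\d{2}-\d{2}-\d{2})\b"),
    "docno": re.compile(r"(?:Document|Report)\s+No\.?\s*(?P<v>[A-Z0-9-]+)", re.I),
}

def extract_metadata(page_text):
    """Apply each rule to the page text, keeping the first match per field."""
    out = {}
    for field, pattern in RULES.items():
        m = pattern.search(page_text)
        if m:
            out[field] = m.group("v").strip()
    return out

sample = "Guidance For Industry Records\nReport No. FDA-1957-0042\n1957-03-11 ..."
print(extract_metadata(sample))
```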

  17. Joint recognition and discrimination in nonlinear feature space

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-09-01

    A new general method for linear and nonlinear feature extraction is presented. It is novel since it provides both representation and discrimination while most other methods are concerned with only one of these issues. We call this approach the maximum representation and discrimination feature (MRDF) method and show that the Bayes classifier and the Karhunen-Loeve transform are special cases of it. We refer to our nonlinear feature extraction technique as nonlinear eigenfeature extraction. It is new since it has a closed-form solution and produces nonlinear decision surfaces with higher rank than do iterative methods. Results on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem (discrimination) and to the classification and pose estimation of two similar objects (representation and discrimination).

  18. Vertebra identification using template matching model and K-means clustering.

    PubMed

    Larhmam, Mohamed Amine; Benjelloun, Mohammed; Mahmoudi, Saïd

    2014-03-01

    Accurate vertebra detection and segmentation are essential steps for automating the diagnosis of spinal disorders. This study is dedicated to vertebra alignment measurement, the first step in a computer-aided diagnosis tool for cervical spine trauma. Automated vertebral segment alignment determination is a challenging task due to low-contrast imaging and noise. A software tool for segmenting vertebrae and detecting subluxations has clinical significance. A robust method was developed and tested for cervical vertebra identification and segmentation that extracts parameters used for vertebra alignment measurement. Our contribution involves a novel combination of a template matching method and an unsupervised clustering algorithm. In this method, we build a geometric vertebra mean model. To achieve vertebra detection, manual selection of the region of interest is performed initially on the input image. Subsequent preprocessing is done to enhance image contrast and detect edges. Candidate vertebra localization is then carried out by using a modified generalized Hough transform (GHT). Next, an adapted cost function is used to compute local voted centers and filter boundary data. Thereafter, a K-means clustering algorithm is applied to obtain a cluster distribution corresponding to the targeted vertebrae. These clusters are combined with the vote parameters to detect vertebra centers. Rigid segmentation is then carried out by using GHT parameters. Finally, cervical spine curves are extracted to measure vertebra alignment. The proposed approach was successfully applied to a set of 66 high-resolution X-ray images. Robust detection was achieved in 97.5% of the 330 tested cervical vertebrae. An automated vertebral identification method was developed and demonstrated to be robust to noise and occlusion. This work presents a first step toward an automated computer-aided diagnosis system for cervical spine trauma detection.
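
    The clustering stage, grouping Hough-vote centers into one cluster per vertebra, can be illustrated with scikit-learn's K-means on synthetic votes; the five centers and noise level below are made up for the example.

```python
# Minimal sketch of the clustering stage: K-means groups Hough-vote centers
# so each cluster corresponds to one candidate vertebra. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# synthetic voted centers scattered around 5 vertebra positions
true_centers = np.array([[100, 60], [130, 90], [160, 120], [190, 150], [220, 180]])
votes = np.vstack([c + rng.normal(0, 4, (40, 2)) for c in true_centers])

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(votes)
for c in sorted(km.cluster_centers_.tolist()):
    print([round(v, 1) for v in c])   # detected vertebra centers
```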

  19. Development and validation of an automated unit for the extraction of radiocaesium from seawater.

    PubMed

    Bokor, Ilonka; Sdraulig, Sandra; Jenkinson, Peter; Madamperuma, Janaka; Martin, Paul

    2016-01-01

    An automated unit was developed for the in-situ extraction of radiocaesium ((137)Cs and (134)Cs) from large volumes of seawater to achieve very low detection limits. The unit was designed for monitoring of Australian ocean and coastal waters, including at ports visited by nuclear-powered warships. The unit is housed within a robust case, and is easily transported and operated. It contains four filter cartridges connected in series. The first two cartridges are used to remove any suspended material that may be present in the seawater, while the last two cartridges are coated with potassium copper hexacyanoferrate for caesium extraction. Once the extraction is completed the coated cartridges are ashed. The ash is transferred to a small petri dish for counting of (137)Cs and (134)Cs by high resolution gamma spectrometry for a minimum of 24 h. The extraction method was validated for the following criteria: selectivity, trueness, precision, linearity, limit of detection and traceability. The validation showed the unit to be fit for purpose with the method capable of achieving low detection limits required for environmental samples. The results for the environmental measurements in Australian seawater correlate well with those reported in the Worldwide Marine Radioactivity Study (WOMARS). The cost of preparation and running the system is low and waste generation is minimal. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  20. Note: An automated image analysis method for high-throughput classification of surface-bound bacterial cell motions.

    PubMed

    Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng

    2015-12-01

    We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacterial cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod-shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.

  1. Fast and comprehensive analysis of secondary metabolites in cocoa products using ultra high-performance liquid chromatography directly after pressurized liquid extraction.

    PubMed

    Damm, Irina; Enger, Eileen; Chrubasik-Hausmann, Sigrun; Schieber, Andreas; Zimmermann, Benno F

    2016-08-01

    Fast methods for the extraction and analysis of various secondary metabolites from cocoa products were developed and optimized regarding speed and separation efficiency. Extraction by pressurized liquid extraction is automated, and the extracts are analyzed by rapid reversed-phase ultra high-performance liquid chromatography and normal-phase high-performance liquid chromatography methods. After extraction, no further sample treatment is required before chromatographic analysis. The analytes comprise monomeric and oligomeric flavanols, flavonols, methylxanthines, N-phenylpropenoyl amino acids, and phenolic acids. Polyphenols and N-phenylpropenoyl amino acids are separated in a single run of 33 min, procyanidins are analyzed by normal-phase high-performance liquid chromatography within 16 min, and methylxanthines require only 6 min total run time. A fourth method is suitable for phenolic acids, but only protocatechuic acid was found in relevant quantities. The optimized methods were validated and applied to 27 dark chocolates, one milk chocolate, two cocoa powders and two food supplements based on cocoa extract. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Toward automated analysis of particle holograms

    NASA Technical Reports Server (NTRS)

    Caulfield, H. J.

    1987-01-01

    A preliminary study of approaches for extracting and analyzing data from particle holograms is discussed. It concludes that: (1) for thin spherical particles, out-of-focus methods are optimum; (2) for thin nonspherical particles, out-of-focus methods are useful but must be supplemented by in-focus methods; (3) a complex method of projection and back projection can remove out-of-focus data for deep particles.

  3. Evaluation of automated sample preparation, retention time locked gas chromatography-mass spectrometry and data analysis methods for the metabolomic study of Arabidopsis species.

    PubMed

    Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Dugardeyn, Jasper; Van Der Straeten, Dominique; Xu, Guowang; Sandra, Pat

    2011-05-27

    In this paper, automated sample preparation, retention time locked gas chromatography-mass spectrometry (GC-MS) and data analysis methods for metabolomic studies were evaluated. A miniaturized and automated derivatisation method using sequential oximation and silylation was applied to a polar extract of 4 types (2 types × 2 ages) of Arabidopsis thaliana, a popular model organism often used in plant sciences and genetics. Automation of the derivatisation process offers excellent repeatability, and the time between sample preparation and analysis was short and constant, reducing artifact formation. Retention time locked (RTL) gas chromatography-mass spectrometry was used, resulting in reproducible retention times and GC-MS profiles. Two approaches were compared for data analysis: XCMS followed by principal component analysis (approach 1), and AMDIS deconvolution combined with a commercially available program (Mass Profiler Professional) followed by principal component analysis (approach 2). Several features that were up- or down-regulated in the different types were detected. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Deriving pathway maps from automated text analysis using a grammar-based approach.

    PubMed

    Olsson, Björn; Gawronska, Barbara; Erlendsson, Björn

    2006-04-01

    We demonstrate how automated text analysis can be used to support the large-scale analysis of metabolic and regulatory pathways by deriving pathway maps from textual descriptions found in the scientific literature. The main assumption is that correct syntactic analysis combined with domain-specific heuristics provides a good basis for relation extraction. Our method uses an algorithm that searches through the syntactic trees produced by a parser based on a Referent Grammar formalism, identifies relations mentioned in the sentence, and classifies them with respect to their semantic class and epistemic status (facts, counterfactuals, hypotheses). The semantic categories used in the classification are based on the relation set used in KEGG (Kyoto Encyclopedia of Genes and Genomes), so that pathway maps using KEGG notation can be automatically generated. We present the current version of the relation extraction algorithm and an evaluation based on a corpus of abstracts obtained from PubMed. The results indicate that the method is able to combine a reasonable coverage with high accuracy. We found that 61% of all sentences were parsed, and 97% of the parse trees were judged to be correct. The extraction algorithm was tested on a sample of 300 parse trees and was found to produce correct extractions in 90.5% of the cases.

  5. Evaluation of an alternative extraction procedure for enterotoxin determination in dairy products.

    PubMed

    Meyrand, A; Atrache, V; Bavai, C; Montet, M P; Vernozy-Rozand, C

    1999-06-01

    A concentration protocol based on trichloroacetic acid (TCA) precipitation was evaluated and compared with the reference method using dialysis concentration. Different quantities of purified staphylococcal enterotoxins were added to pasteurized Camembert-type cheeses. Detection of enterotoxins in these cheeses was performed using an automated detection system. Raw goat milk Camembert-type cheeses involved in a staphylococcal food poisoning outbreak were also tested. Both enterotoxin extraction methods allowed detection of the lowest enterotoxin concentration level used in this study (0.5 ng g-1). Compared with the dialysis concentration method, TCA precipitation of staphylococcal enterotoxins was 'user-friendly' and less time-consuming. These results suggest that TCA precipitation is a rapid (1 h), simple and reliable method of extracting enterotoxin from food which gives excellent recovery from dairy products.

  6. Comparison of five methods for extraction of Legionella pneumophila from respiratory specimens.

    PubMed

    Wilson, Deborah; Yen-Lieberman, Belinda; Reischl, Udo; Warshawsky, Ilka; Procop, Gary W

    2004-12-01

    The efficiencies of five commercially available nucleic acid extraction methods were evaluated for the recovery of a standardized inoculum of Legionella pneumophila in respiratory specimens (sputum and bronchoalveolar lavage [BAL] specimens). The concentrations of Legionella DNA recovered from sputa with the automated MagNA Pure (526,200 CFU/ml) and NucliSens (171,800 CFU/ml) extractors were greater than those recovered with the manual methods (i.e., Roche High Pure kit [133,900 CFU/ml], QIAamp DNA Mini kit [46,380 CFU/ml], and ViralXpress kit [13,635 CFU/ml]). The rank order was the same for extracts from BAL specimens, except that for this specimen type the QIAamp DNA Mini kit recovered more than the Roche High Pure kit.

  7. Automated structure determination of proteins with the SAIL-FLYA NMR method.

    PubMed

    Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune

    2007-01-01

    The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to collect rapidly and evaluate fully automatically the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract and their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.

  8. A novel flow injection chemiluminescence method for automated and miniaturized determination of phenols in smoked food samples.

    PubMed

    Vakh, Christina; Evdokimova, Ekaterina; Pochivalov, Aleksei; Moskvin, Leonid; Bulatov, Andrey

    2017-12-15

    An easily performed, fully automated and miniaturized flow injection chemiluminescence (CL) method for the determination of phenols in smoked food samples has been proposed. This method includes ultrasound-assisted solid-liquid extraction coupled with gas-diffusion separation of phenols from the smoked food sample and analyte absorption into a NaOH solution in a specially designed gas-diffusion cell. The flow system was designed with a focus on automation and miniaturization, with minimal sample and reagent consumption and inexpensive instrumentation. The luminol - N-bromosuccinimide system in an alkaline medium was used for the CL determination of phenols. The limit of detection of the proposed procedure was 3 x 10(-8) mol L(-1) (0.01 mg kg(-1)) in terms of phenol. The presented method was demonstrated to be a good tool for easy, rapid and cost-effective point-of-need screening of phenols in smoked food samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Bacterial and fungal DNA extraction from blood samples: automated protocols.

    PubMed

    Lorenz, Michael G; Disqué, Claudia; Mühl, Helge

    2015-01-01

    Automation in DNA isolation is a necessity for routine practice employing molecular diagnosis of infectious agents. To this end, the development of automated systems for the molecular diagnosis of microorganisms directly in blood samples is at its beginning. Important characteristics demanded of systems for routine use include high recovery of microbial DNA, DNA-free containment for the reduction of DNA contamination from exogenous sources, DNA-free reagents and consumables, ideally walkaway operation, and economical pricing of the equipment and consumables. Such full automation of DNA extraction, evaluated and in use for sepsis diagnostics, is not yet available. Here, we present protocols for the semiautomated isolation of microbial DNA from blood culture and low- and high-volume blood samples. The protocols include a manual pretreatment step followed by automated extraction and purification of microbial DNA.

  10. A gradient-based approach for automated crest-line detection and analysis of sand dune patterns on planetary surfaces

    NASA Astrophysics Data System (ADS)

    Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.

    2015-12-01

    Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is applied to the image based on gradient orientations which agree with the dominant orientation. The contours of the binary image can then be used to determine the dune crest lines, based on pixel intensity values. Once the crest lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared dune detection algorithms with manually digitized dune crest lines, achieving true positive rates of 0.57-0.99 and false positive rates of 0.30-0.67, indicating that our approach is generally robust.
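
    The histogram-and-threshold step described above is easy to prototype. The sketch below computes Sobel gradients on a synthetic ripple image, finds the dominant orientation from a 36-bin histogram, and masks pixels aligned with it; bin count, tolerance, and magnitude threshold are illustrative choices.

```python
# Minimal sketch of the gradient stage: estimate the dominant gradient
# orientation of a dune-like image and keep pixels that agree with it.
import numpy as np
from scipy import ndimage

img = np.sin(np.linspace(0, 8 * np.pi, 256))[None, :] * np.ones((256, 1))  # fake ripples
gy, gx = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
mag = np.hypot(gx, gy)
theta = np.degrees(np.arctan2(gy, gx)) % 180.0        # orientation, 0-180 deg

hist, edges = np.histogram(theta[mag > mag.mean()], bins=36, range=(0, 180))
dominant = edges[np.argmax(hist)] + 2.5               # bin center of the mode

# binary mask of pixels aligned (within 15 deg) with the dominant orientation
diff = np.abs(theta - dominant)
mask = (np.minimum(diff, 180.0 - diff) < 15.0) & (mag > mag.mean())
print(f"dominant orientation: {dominant:.1f} deg, aligned pixels: {mask.sum()}")
```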

  11. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    PubMed Central

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818

  12. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated magnetic resonance imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction methods should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet transform feature extraction methods in brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual-Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, Support Vector Machine (SVM), K-Nearest Neighbor, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM result in the highest classification accuracy, demonstrating that wavelet transform features can be informative in this application.
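
    For a feel of how such a feature pool is built, the sketch below extracts sub-band energy features from a 2-D discrete wavelet decomposition with PyWavelets; the wavelet, level, and random stand-in image are arbitrary choices, not the study's configuration.

```python
# Minimal sketch of one feature-pool entry: 2-D DWT sub-band energies
# from a brain-slice-like image, using PyWavelets.
import numpy as np
import pywt

img = np.random.rand(128, 128)                     # stand-in for a T1 slice

coeffs = pywt.wavedec2(img, wavelet="db2", level=2)
features = [np.mean(np.abs(coeffs[0]))]            # approximation energy
for (cH, cV, cD) in coeffs[1:]:                    # detail sub-bands per level
    features += [np.mean(np.abs(c)) for c in (cH, cV, cD)]

print("DWT feature vector:", np.round(features, 4))
```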

  13. Computational methods for evaluation of cell-based data assessment--Bioconductor.

    PubMed

    Le Meur, Nolwenn

    2013-02-01

    Recent advances in miniaturization and automation of technologies have enabled high-throughput screening with cell-based assays, bringing new challenges in data analysis. Automation, standardization and reproducibility have become requirements for quality research. The Bioconductor community has worked in that direction, proposing several R packages to handle high-throughput data, including flow cytometry (FCM) experiments. Altogether, these packages cover the main steps of an FCM analysis workflow, that is, data management, quality assessment, normalization, outlier detection, automated gating, cluster labeling, and feature extraction. Additionally, the open-source philosophy of R and Bioconductor, which offers room for new development, continuously drives research and improvement of these analysis methods, especially in the field of clustering and data mining. This review presents the principal FCM packages currently available in R and Bioconductor, their advantages and their limits. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Automated and sensitive method for the determination of formoterol in human plasma by high-performance liquid chromatography and electrochemical detection.

    PubMed

    Campestrini, J; Lecaillon, J B; Godbillon, J

    1997-12-19

    An automated high-performance liquid chromatography (HPLC) method for the determination of formoterol in human plasma with improved sensitivity has been developed and validated. Formoterol and CGP 47086, the internal standard, were extracted from plasma (1 ml) using a cation-exchange solid-phase extraction (SPE) cartridge. The compounds were eluted with pH 6 buffer solution-methanol (70:30, v/v) and the eluate was further diluted with water. An aliquot of the extract solution was injected and analyzed by HPLC. The extraction, dilution, injection and chromatographic analysis were combined and automated using an automated sample preparation (ASPEC) system. The chromatographic separations were achieved on a 5 microm Hypersil ODS analytical column (200 mm x 3 mm I.D.), using (pH 6 phosphate buffer, 0.035 M + 20 mg/l EDTA)-MeOH-CH3CN (70:25:5, v/v/v) as the mobile phase at a flow-rate of 0.4 ml/min. The analytes were detected electrochemically at an operating potential of +0.63 V. Intra-day accuracy and precision were assessed from the relative recoveries of calibration/quality control plasma samples in the concentration range of 7.14 to 238 pmol/l of formoterol base. The accuracy over the entire concentration range varied from 81 to 105%, and the precision (C.V.) ranged from 3 to 14%. Inter-day accuracy and precision were assessed in the concentration range of 11.9 to 238 pmol/l of formoterol base in plasma. The accuracy over the entire concentration range varied from 98 to 109%, and precision ranged from 8 to 19%. At the limit of quantitation (LOQ) of 11.9 pmol/l for inter-day measurements, the recovery value was 109% and the C.V. was 19%. As shown by the intra-day accuracy and precision results, favorable conditions (a newly used column, a newly washed detector cell and a moderate residual cell current) allowed a LOQ of 7.14 pmol/l of formoterol base (3 pg/ml of formoterol fumarate dihydrate) to be reached. The limit of detection was improved by a factor of about 10 compared to previously described methods. The method has been applied to quantify formoterol in plasma after inhalation of a 120 microg dose by volunteers. Formoterol was still measurable at 24 h post-dosing in most subjects, and a slow elimination of formoterol from plasma beyond 6-8 h after inhalation was demonstrated for the first time thanks to the sensitivity of the method.

  15. Automated detection of new impact sites on Martian surface from HiRISE images

    NASA Astrophysics Data System (ADS)

    Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu

    2017-10-01

    In this study, an automated method for detecting new impact sites on Mars from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier that combines local binary pattern features and Haar-like features, trained with an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in flat regions are 93.0% and 91.5%, respectively.
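
    One ingredient of such a cascade, LBP texture histograms over candidate patches, can be sketched with scikit-image as below; the patch, LBP parameters, and bin layout are illustrative, and a real detector would add Haar-like features and trained AdaBoost cascade stages.

```python
# Minimal sketch of the texture part of such a detector: a local binary
# pattern (LBP) histogram as the feature vector for one candidate patch.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1.0):
    """Uniform-LBP histogram; 'uniform' yields P + 2 distinct codes."""
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

patch = np.random.randint(0, 256, (32, 32), dtype=np.uint8)  # stand-in dark area
print(np.round(lbp_histogram(patch), 3))
```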

  16. Complete automation of solid-phase extraction with subsequent liquid chromatography-tandem mass spectrometry for the quantification of benzoylecgonine, m-hydroxybenzoylecgonine, p-hydroxybenzoylecgonine, and norbenzoylecgonine in urine--application to a high-throughput urine analysis laboratory.

    PubMed

    Robandt, Paul P; Reda, Louis J; Klette, Kevin L

    2008-10-01

    A fully automated system utilizing a liquid handler and an online solid-phase extraction (SPE) device coupled with liquid chromatography-tandem mass spectrometry (LC-MS-MS) was designed to process, detect, and quantify benzoylecgonine (BZE), meta-hydroxybenzoylecgonine (m-OH BZE), para-hydroxybenzoylecgonine (p-OH BZE), and norbenzoylecgonine (nor-BZE) metabolites in human urine. The method was linear for BZE, m-OH BZE, and p-OH BZE from 1.2 to 10,000 ng/mL with limits of detection (LOD) and quantification (LOQ) of 1.2 ng/mL. Nor-BZE was linear from 5 to 10,000 ng/mL with an LOD and LOQ of 1.2 and 5 ng/mL, respectively. The intrarun precision measured as the coefficient of variation of 10 replicates of a 100 ng/mL control was less than 2.6%, and the interrun precision for 5 replicates of the same control across 8 batches was less than 4.8% for all analytes. No assay interference was noted from controls containing cocaine, cocaethylene, and ecgonine methyl ester. Excellent data concordance (R2 > 0.994) was found for direct comparison of the automated SPE-LC-MS-MS procedure and an existing gas chromatography-MS procedure using 94 human urine samples previously determined to be positive for BZE. The automated specimen handling and SPE procedure, when compared to the traditional extraction schema, eliminates the human factors of specimen handling, processing, extraction, and derivatization, thereby reducing labor costs and rework resulting from batch handling issues, and may reduce the number of fume hoods required in the laboratory.

  17. Semi-automatic mapping of geological Structures using UAV-based photogrammetric data: An image analysis approach

    NASA Astrophysics Data System (ADS)

    Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven

    2014-08-01

    Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculation, using our automated method, shows a mean±standard error of 1.9°±2.2° and 4.4°±2.6° respectively with field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.

  18. Automated detection of abnormalities in paranasal sinus on dental panoramic radiographs by using contralateral subtraction technique based on mandible contour

    NASA Astrophysics Data System (ADS)

    Mori, Shintaro; Hara, Takeshi; Tagami, Motoki; Muramatsu, Chicako; Kaneda, Takashi; Katsumata, Akitoshi; Fujita, Hiroshi

    2013-02-01

    Inflammation in the paranasal sinuses sometimes becomes chronic and requires long-term treatment. Early detection of the finding is therefore important, but general dentists may not recognize it because they focus on the teeth. The purpose of this study was to develop a computer-aided detection (CAD) system for inflammation in the paranasal sinuses on dental panoramic radiographs (DPRs) using the mandible contour, and to demonstrate the potential usefulness of the CAD system by means of receiver operating characteristic (ROC) analysis. The detection scheme consists of three steps: 1) contour extraction of the mandible, 2) contralateral subtraction, and 3) automated detection. The Canny operator and an active contour model were applied to extract the edge in the first step. In the subtraction step, the right region of the extracted contour image was flipped for comparison with the left region. Mutual information between the two selected regions was computed to estimate the shift parameters for image registration. The subtraction images were generated based on the shift parameters. Rectangular regions of the left and right paranasal sinuses on the subtraction image were determined based on the size of the mandible. The abnormal side was determined by taking the difference between the averages of the two regions. Thirteen readers interpreted all cases, first without and then with the automated results. The average AUC of all readers increased from 0.69 to 0.73 with statistical significance (p=0.032) when the automated detection results were provided. In conclusion, the automated detection method based on the contralateral subtraction technique improves readers' interpretation performance for inflammation in the paranasal sinuses on DPRs.
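
    The registration criterion, mutual information between the left region and the flipped right region, can be computed from a joint histogram in a few lines; the sketch below uses random arrays as stand-ins for the two sinus regions and an arbitrary bin count.

```python
# Minimal sketch of the registration criterion: mutual information between
# two image regions, computed from their joint intensity histogram.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

left = np.random.rand(64, 64)                          # stand-in left region
right = np.fliplr(left) + 0.05 * np.random.rand(64, 64)  # near mirror image
print("MI(left, flipped right):", round(mutual_information(left, np.fliplr(right)), 3))
```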

  19. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Purpose: Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. Methods: A template database of 195 (81 males, 114 females; age range 32–67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Results: Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, −4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland–Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. Conclusions: The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study. PMID:26745947

  20. Deep SOMs for automated feature extraction and classification from big data streaming

    NASA Astrophysics Data System (ADS)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we proposed a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from big data streams, benefiting from the Spark framework for real-time, highly parallel stream processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from less to more abstract). The proposed model consists of three hidden self-organizing layers, an input layer and an output layer. Each layer is made up of multiple SOMs, with each map focusing on a local sub-region of the input image. Each layer then transforms its local information into more global information in the layer above. The proposed Deep-SOMs model is unique in terms of its layer architecture, its SOM sampling method and its learning. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparison results show that the Deep-SOMs model performs better than many existing algorithms for image classification.
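
    A single SOM's learning rule, the core of each layer, is compact enough to sketch in NumPy: find the best-matching unit and pull its grid neighborhood toward the input. Grid size, learning rate, neighborhood width, and data are all illustrative; the Spark-based streaming and layer stacking are not shown.

```python
# Minimal sketch of one SOM layer's unsupervised update rule in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 16
weights = rng.random((grid_h, grid_w, dim))        # one SOM's codebook

def som_update(x, weights, lr=0.1, sigma=1.5):
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)   # neighborhood-weighted pull
    return weights

for x in rng.random((500, dim)):                   # stand-in for a data stream
    weights = som_update(x, weights)
print("trained codebook shape:", weights.shape)
```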

  1. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    PubMed Central

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry (MS)-based proteomics, automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply to both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE), including affinity enrichment strategies, have been automated. The obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  2. AUTOMATED ANALYSIS OF AQUEOUS SAMPLES CONTAINING PESTICIDES, ACIDIC/BASIC/NEUTRAL SEMIVOLATILES AND VOLATILE ORGANIC COMPOUNDS BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GC/MS

    EPA Science Inventory

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...

  3. Automated detection of discourse segment and experimental types from the text of cancer pathway results sections.

    PubMed

    Burns, Gully A P C; Dasigi, Pradeep; de Waard, Anita; Hovy, Eduard H

    2016-01-01

    Automated machine-reading biocuration systems typically use sentence-by-sentence information extraction to construct meaning representations for use by curators. This does not directly reflect the typical discourse structure used by scientists to construct an argument from the experimental data available within an article, and is therefore less likely to correspond to representations typically used in biomedical informatics systems (let alone to the mental models that scientists have). In this study, we develop Natural Language Processing methods to locate, extract, and classify the individual passages of text from articles' Results sections that refer to experimental data. In our domain of interest (molecular biology studies of cancer signal transduction pathways), individual articles may contain as many as 30 small-scale individual experiments describing a variety of findings, upon which authors base their overall research conclusions. Our system automatically classifies discourse segments in these texts into seven categories (fact, hypothesis, problem, goal, method, result, implication) with an F-score of 0.68. These segments describe the essential building blocks of scientific discourse to (i) provide context for each experiment, (ii) report experimental details and (iii) explain the data's meaning in context. We evaluate our system on text passages from articles that were curated in molecular biology databases (the Pathway Logic Datum repository, the Molecular Interaction MINT and INTACT databases), linking individual experiments in articles to the type of assay used (coprecipitation, phosphorylation, translocation, etc.). We use supervised machine learning techniques on text passages containing unambiguous references to experiments to obtain baseline F1 scores of 0.59 for MINT, 0.71 for INTACT and 0.63 for Pathway Logic. Although preliminary, these results support the notion that targeting information extraction methods to experimental results could provide accurate, automated methods for biocuration. We also suggest the need for finer-grained curation of the experimental methods used when constructing molecular biology databases. © The Author(s) 2016. Published by Oxford University Press.
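
    A baseline version of this kind of segment classifier is straightforward with TF-IDF features and a linear model; the four-example training set and labels below are invented to show the shape of the pipeline, not the paper's data or feature set.

```python
# Minimal sketch of supervised discourse-type classification: TF-IDF features
# plus a linear classifier over segment categories. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

segments = [
    "We hypothesized that kinase X activates pathway Y.",
    "Cells were lysed and subjected to coprecipitation.",
    "Band intensity increased threefold after treatment.",
    "These data imply that X is required for Y signaling.",
]
labels = ["hypothesis", "method", "result", "implication"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(segments, labels)
print(clf.predict(["Lysates were immunoblotted with anti-X antibody."]))
```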

  4. Role of Gist and PHOG Features in Computer-Aided Diagnosis of Tuberculosis without Segmentation

    PubMed Central

    Chauhan, Arun; Chauhan, Devesh; Rout, Chittaranjan

    2014-01-01

    Purpose: Effective diagnosis of tuberculosis (TB) relies on accurate interpretation of radiological patterns found in a chest radiograph (CXR). Lack of skilled radiologists and other resources, especially in developing countries, hinders its efficient diagnosis. Computer-aided diagnosis (CAD) methods provide a second opinion to radiologists on their findings and thereby assist in better diagnosis of cancer and other diseases, including TB. However, existing CAD methods for TB are based on the extraction of textural features from manually or semi-automatically segmented CXRs. These methods are prone to errors and cannot be implemented in X-ray machines for automated classification. Methods: Gabor, Gist, histogram of oriented gradients (HOG), and pyramid histogram of oriented gradients (PHOG) features extracted from the whole image can be implemented in existing X-ray machines to discriminate between TB and non-TB CXRs in an automated manner. Localized features were extracted for the above methods using various parameters, such as frequency range, blocks and region of interest. The performance of these features was evaluated against textural features. Two digital CXR image datasets (8-bit DA and 14-bit DB) were used for evaluating the performance of these features. Results: Gist (accuracy 94.2% for DA, 86.0% for DB) and PHOG (accuracy 92.3% for DA, 92.0% for DB) features provided better results for both datasets. These features were implemented in a MATLAB toolbox, TB-Xpredict, which is freely available for academic use at http://sourceforge.net/projects/tbxpredict/. This toolbox provides both automated training and prediction modules and does not require expertise in image processing for operation. Conclusion: Since the features used in TB-Xpredict do not require segmentation, the toolbox can easily be implemented in X-ray machines. This toolbox can effectively be used for the mass screening of TB in high-burden areas with improved efficiency. PMID:25390291
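
    Whole-image gradient-histogram features of this family are easy to prototype; the sketch below computes HOG descriptors with scikit-image and fits a simple classifier. The paper's toolbox is MATLAB; this Python version, the random stand-in images, and the parameters are illustrative only.

```python
# Minimal sketch: whole-image HOG features (no segmentation) for CXR-like
# images, fed to a simple linear classifier. Data are random stand-ins.
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def cxr_features(img):
    return hog(img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

images = [np.random.rand(128, 128) for _ in range(30)]   # stand-in CXRs
labels = np.random.randint(0, 2, 30)                     # TB vs non-TB

X = np.array([cxr_features(im) for im in images])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```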

  5. Disposable and removable nucleic acid extraction and purification cartridges for automated flow-through systems

    DOEpatents

    Regan, John Frederick

    2014-09-09

    Removable cartridges are used on automated flow-through systems for the purpose of extracting and purifying genetic material from complex matrices. Different types of cartridges are paired with specific automated protocols to concentrate, extract, and purify pathogenic or human genetic material. Their flow-through nature allows large quantities of sample to be processed. Matrices may be filtered using size exclusion and/or affinity filters to concentrate the pathogen of interest. Lysed material is ultimately passed through a filter to remove the insoluble material before the soluble genetic material is delivered past a silica-like membrane that binds the genetic material, where it is washed, dried, and eluted. Cartridges are inserted into the housing areas of flow-through automated instruments, which are equipped with sensors to ensure proper placement and usage of the cartridges. Properly inserted cartridges create fluid- and air-tight seals with the flow lines of an automated instrument.

  6. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
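
    A minimal sketch of the download-and-extract step, assuming the allensdk package's CellTypesCache interface; the method names reflect our reading of that API and should be verified against the ABI documentation before use.

        from allensdk.core.cell_types_cache import CellTypesCache

        # The cache manifest controls where downloaded NWB files and metadata land.
        ctc = CellTypesCache(manifest_file="cell_types/manifest.json")

        cells = ctc.get_cells()                     # metadata for all recorded cells
        specimen_id = cells[0]["id"]
        data_set = ctc.get_ephys_data(specimen_id)  # downloads the raw NWB file on demand
        sweep = data_set.get_sweep(data_set.get_sweep_numbers()[0])
        features = ctc.get_ephys_features()         # precomputed per-cell features
        print(sweep["sampling_rate"], len(features))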

  7. Automated video feature extraction : workshop summary report October 10-11 2012.

    DOT National Transportation Integrated Search

    2012-12-01

    This report summarizes a 2-day workshop on automated video feature extraction. Discussion focused on the Naturalistic Driving Study, funded by the second Strategic Highway Research Program, and also involved the companion roadway inventory dataset. ...

  8. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients.

    PubMed

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed, although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. A lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and to establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 MR images of mTLE patients (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and HAMMER) and two previously published methods developed at the authors' institution [automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method performed best for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Results obtained using the automated techniques were statistically compared against the benchmark manual segmentation, and the extracted metrics were analyzed to find the method providing results most similar to the benchmark. Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, the Hausdorff distance was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) error was 19.05%, 61.90%, and 65.08% lower than for LocalInfo, FreeSurfer, and HAMMER, respectively. The Bland-Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. The ABSS method for automated hippocampal segmentation outperformed the other methods, best approximating what can be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail: incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve the reliability of the automated methods, and other modifications were specially devised to enhance particular methods assessed in this study.
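
    Two of the performance metrics named above are simple enough to sketch directly; the following assumes NumPy/SciPy and toy binary masks, not the study's data.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(a, b):
            # Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks a, b.
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff(pts_a, pts_b):
            # Symmetric Hausdorff distance between (N, 3) coordinate arrays.
            return max(directed_hausdorff(pts_a, pts_b)[0],
                       directed_hausdorff(pts_b, pts_a)[0])

        auto = np.zeros((64, 64, 64), bool); auto[20:40, 20:40, 20:40] = True
        manual = np.zeros_like(auto);        manual[22:40, 20:40, 20:40] = True
        print(dice(auto, manual))
        print(hausdorff(np.argwhere(auto), np.argwhere(manual)))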

  9. a New Multi-Spectral Threshold Normalized Difference Water Index Mst-Ndwi Water Extraction Method - a Case Study in Yanhe Watershed

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.

    2018-05-01

    Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment study. The Yanhe water system is characterized by a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds on Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood were applied before NDWI water extraction to separate built-up land from small linear rivers. With the proposed method, a water map was extracted from Landsat/TM images of 2010. An accuracy assessment was conducted to compare the proposed method with conventional water indexes such as NDWI, the Modified NDWI (MNDWI), the Enhanced Water Index (EWI), and the Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method yields better water extraction accuracy in the Yanhe watershed and effectively suppresses confusing background objects compared to the conventional water indexes. By integrating NDWI with multi-spectral threshold segmentation, MST-NDWI produces markedly more accurate water extraction in the Yanhe watershed.
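
    The NDWI core of the method reduces to simple band arithmetic; the sketch below assumes NumPy arrays standing in for Landsat/TM bands, and the MST pre-segmentation (thresholds on TM1, TM4, TM5) is represented only by a hypothetical background mask, since the paper's threshold values are not given here.

        import numpy as np

        def ndwi(green, nir):
            # NDWI = (Green - NIR) / (Green + NIR), McFeeters formulation.
            green, nir = green.astype(float), nir.astype(float)
            return (green - nir) / (green + nir + 1e-9)

        green = np.random.rand(100, 100)          # stand-in for TM band 2
        nir = np.random.rand(100, 100)            # stand-in for TM band 4
        background = np.zeros((100, 100), bool)   # hypothetical MST mask of built-up land

        water = (ndwi(green, nir) > 0.0) & ~background
        print(water.sum(), "water pixels")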

  10. Automated data extraction from general practice records in an Australian setting: trends in influenza-like illness in sentinel general practices and emergency departments.

    PubMed

    Liljeqvist, Gösta T H; Staff, Michael; Puech, Michele; Blom, Hans; Torvaldsen, Siranda

    2011-06-06

    Influenza intelligence in New South Wales (NSW), Australia is derived mainly from emergency department (ED) presentations and hospital and intensive care admissions, which represent only a portion of influenza-like illness (ILI) in the population. A substantial amount of the remaining data lies hidden in general practice (GP) records. Previous attempts in Australia to gather ILI data from GPs have given them extra work. We explored the possibility of applying automated data extraction from GP records in sentinel surveillance in an Australian setting. The two research questions asked in designing the study were: Can syndromic ILI data be extracted automatically from routine GP data? How do ILI trends in sentinel general practice compare with ILI trends in EDs? We adapted a software program already capable of automated data extraction to identify records of patients with ILI in routine electronic GP records in two of the most commonly used commercial programs. This tool was applied in sentinel sites to gather retrospective data for May-October 2007-2009 and in real time for the same interval in 2010. The data were compared with those provided by the Public Health Real-time Emergency Department Surveillance System (PHREDSS) and with ED data for the same periods. The GP surveillance tool identified seasonal trends in ILI both retrospectively and in near real time. The curve of seasonal ILI was more responsive and less volatile than that of PHREDSS at the local area level. The number of weekly ILI presentations ranged from 8 to 128 at GP sites and from 0 to 18 in EDs in non-pandemic years. Automated data extraction from routine GP records offers a means to gather data without introducing any additional work for the practitioner. Adding this method to current surveillance programs will enhance their ability to monitor ILI and to detect early warning signals of new ILI events.

  11. Automated solid-phase extraction-liquid chromatography-tandem mass spectrometry analysis of 6-acetylmorphine in human urine specimens: application for a high-throughput urine analysis laboratory.

    PubMed

    Robandt, P V; Bui, H M; Scancella, J M; Klette, K L

    2010-10-01

    An automated solid-phase extraction-liquid chromatography- tandem mass spectrometry (SPE-LC-MS-MS) method using the Spark Holland Symbiosis Pharma SPE-LC coupled to a Waters Quattro Micro MS-MS was developed for the analysis of 6-acetylmorphine (6-AM) in human urine specimens. The method was linear (R² = 0.9983) to 100 ng/mL, with no carryover at 200 ng/mL. Limits of quantification and detection were found to be 2 ng/mL. Interrun precision calculated as percent coefficient of variation (%CV) and evaluated by analyzing five specimens at 10 ng/mL over nine batches (n = 45) was 3.6%. Intrarun precision evaluated from 0 to 100 ng/mL ranged from 1.0 to 4.4%CV. Other opioids (codeine, morphine, oxycodone, oxymorphone, hydromorphone, hydrocodone, and norcodeine) did not interfere in the detection, quantification, or chromatography of 6-AM or the deuterated internal standard. The quantified values for 41 authentic human urine specimens previously found to contain 6-AM by a validated gas chromatography (GC)-MS method were compared to those obtained by the SPE-LC-MS-MS method. The SPE-LC-MS-MS procedure eliminates the human factors of specimen handling, extraction, and derivatization, thereby reducing labor costs and rework resulting from human error or technique issues. The time required for extraction and analysis was reduced by approximately 50% when compared to a validated 6-AM procedure using manual SPE and GC-MS analysis.

  12. Automated ancillary cancer history classification for mesothelioma patients from free-text clinical reports

    PubMed Central

    Wilson, Richard A.; Chapman, Wendy W.; DeFries, Shawn J.; Becich, Michael J.; Chapman, Brian E.

    2010-01-01

    Background: Clinical records are often unstructured, free-text documents that create information extraction challenges and costs. Healthcare delivery and research organizations, such as the National Mesothelioma Virtual Bank, require the aggregation of both structured and unstructured data types. Natural language processing offers techniques for automatically extracting information from unstructured, free-text documents. Methods: Five hundred and eight history and physical reports from mesothelioma patients were split into development (208) and test (300) sets. A reference standard was developed and each report was annotated by experts with regard to the patient's personal history of ancillary cancer and family history of any cancer. The Hx application was developed to process reports, extract relevant features, perform reference resolution, and classify them with regard to cancer history. Two methods for extracting information, Dynamic-Window and ConText, were evaluated. Hx's classification responses using each of the two methods were measured against the reference standard, with the average Cohen's weighted kappa serving as the human benchmark. Results: Hx had a high overall accuracy of 96.2% with each method. F-measures using the Dynamic-Window and ConText methods were 91.8% and 91.6%, comparable to the human benchmark of 92.8%. For the personal history classification, Dynamic-Window scored highest with 89.2%, and for the family history classification, ConText scored highest with 97.6%; both methods were comparable to the human benchmarks of 88.3% and 97.2%, respectively. Conclusion: We evaluated an automated application's performance in classifying a mesothelioma patient's personal and family history of cancer from clinical reports. To do so, the Hx application must process reports, identify cancer concepts, distinguish the known mesothelioma from ancillary cancers, recognize negation, perform reference resolution, and determine the experiencer. Results indicated that both information extraction methods tested were dependent on the domain-specific lexicon and negation extraction. We showed that the more general method, ConText, performed as well as our task-specific method. Although Dynamic-Window could be modified to retrieve other concepts, ConText is more robust and performs better on inconclusive concepts. Hx could greatly improve and expedite the process of extracting data from free-text clinical records for a variety of research or healthcare delivery organizations. PMID:21031012
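
    For readers unfamiliar with ConText-style algorithms, the following is a deliberately simplified illustration of trigger-based negation and experiencer detection, assuming nothing beyond the Python standard library; the real ConText algorithm uses curated trigger lists, scope terminators, and a temporality dimension not shown here.

        NEGATION_TRIGGERS = ["no", "denies", "without", "negative for"]
        FAMILY_TRIGGERS = ["mother", "father", "family history"]

        def classify_mention(sentence, concept):
            s = sentence.lower()
            idx = s.find(concept.lower())
            window = s[max(0, idx - 40):idx]   # text immediately preceding the concept
            negated = any(t in window for t in NEGATION_TRIGGERS)
            experiencer = "family" if any(t in s for t in FAMILY_TRIGGERS) else "patient"
            return {"concept": concept, "negated": negated, "experiencer": experiencer}

        print(classify_mention("Mother had breast cancer.", "breast cancer"))
        print(classify_mention("Patient denies any history of melanoma.", "melanoma"))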

  13. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra

    2017-05-01

    Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them, so there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked using a t-value-based feature selection algorithm and then used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross validation, respectively.
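
    Correntropy itself is compact enough to sketch; the following assumes the common Gaussian-kernel definition V(X, Y) = mean_i exp(-(x_i - y_i)^2 / (2 sigma^2)) and NumPy, while the way the paper pairs and summarizes EWT components may differ in detail.

        import numpy as np

        def correntropy(x, y, sigma=1.0):
            # Gaussian-kernel correntropy between two equally sized signals/images.
            x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
            return np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2)))

        # Hypothetical example: compare two EWT component images.
        comp_a = np.random.rand(64, 64)
        comp_b = comp_a + 0.05 * np.random.randn(64, 64)
        print(correntropy(comp_a, comp_b))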

  14. Two Automated Techniques for Carotid Lumen Diameter Measurement: Regional versus Boundary Approaches.

    PubMed

    Araki, Tadashi; Kumar, P Krishna; Suri, Harman S; Ikeda, Nobutaka; Gupta, Ajay; Saba, Luca; Rajan, Jeny; Lavra, Francesco; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Suri, Jasjit S

    2016-07-01

    The degree of stenosis in the carotid artery can be predicted using automated carotid lumen diameter (LD) measurements from B-mode ultrasound images. Systolic velocity-based methods for measurement of LD are subjective. With the advancement of high-resolution imaging, image-based methods have started to emerge; however, they require robust image analysis for accurate LD measurement. This paper presents two different algorithms for automated segmentation of the lumen borders in carotid ultrasound images. Both algorithms are modeled as a two-stage process. Stage one consists of a global model using a scale-space framework for the extraction of the region of interest; this stage is common to both algorithms. Stage two is modeled using a local strategy that extracts the lumen interfaces. At this stage, algorithm-1 uses a region-based strategy within a classification framework, whereas algorithm-2 uses a boundary-based approach within the level set framework. Two databases (DB), a Japan DB (JDB) (202 patients, 404 images) and a Hong Kong DB (HKDB) (50 patients, 300 images), were used in this study. Two trained neuroradiologists performed manual LD tracings. The mean automated LD was 6.35 ± 0.95 mm for the JDB and 6.20 ± 1.35 mm for the HKDB. The precision-of-merit was 97.4% and 98.0% with respect to the two manual tracings for the JDB, and 99.7% and 97.9% with respect to the two manual tracings for the HKDB. Statistical tests such as ANOVA, chi-squared, t-test, and Mann-Whitney tests were conducted to show the stability and reliability of the automated techniques.

  15. Automated Assessment of Child Vocalization Development Using LENA

    ERIC Educational Resources Information Center

    Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance

    2017-01-01

    Purpose: To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method: Assessment was based on full-day audio…

  16. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost. PMID:28588611

  17. Dried Blood Spot Proteomics: Surface Extraction of Endogenous Proteins Coupled with Automated Sample Preparation and Mass Spectrometry Analysis

    NASA Astrophysics Data System (ADS)

    Martin, Nicholas J.; Bunch, Josephine; Cooper, Helen J.

    2013-08-01

    Dried blood spots offer many advantages as a sample format including ease and safety of transport and handling. To date, the majority of mass spectrometry analyses of dried blood spots have focused on small molecules or hemoglobin. However, dried blood spots are a potentially rich source of protein biomarkers, an area that has been overlooked. To address this issue, we have applied an untargeted bottom-up proteomics approach to the analysis of dried blood spots. We present an automated and integrated method for extraction of endogenous proteins from the surface of dried blood spots and sample preparation via trypsin digestion by use of the Advion Biosciences Triversa Nanomate robotic platform. Liquid chromatography tandem mass spectrometry of the resulting digests enabled identification of 120 proteins from a single dried blood spot. The proteins identified cross a concentration range of four orders of magnitude. The method is evaluated and the results discussed in terms of the proteins identified and their potential use as biomarkers in screening programs.

  18. An Automated Method for Landmark Identification and Finite-Element Modeling of the Lumbar Spine.

    PubMed

    Campbell, Julius Quinn; Petrella, Anthony J

    2015-11-01

    The purpose of this study was to develop a method for the automated creation of finite-element models of the lumbar spine. Custom scripts were written to extract bone landmarks of lumbar vertebrae and assemble L1-L5 finite-element models. End-plate borders, ligament attachment points, and facet surfaces were identified, and landmarks were chosen so as to maintain mesh correspondence between meshes for later use in statistical shape modeling. 90 lumbar vertebrae were processed, creating 18 subject-specific finite-element models. Finite-element model surfaces and ligament attachment points were reproduced to within 1e-5 mm of the bone surface, including the critical contact surfaces of the facets. Element quality exceeded specifications in 97% of elements for the 18 models created. The current method is capable of producing subject-specific finite-element models of the lumbar spine with good accuracy, quality, and robustness. The automated methods developed represent an advancement in the state of the art of subject-specific lumbar spine modeling to a scale not possible with prior manual and semiautomated methods.

  19. An automated online turboflow cleanup LC/MS/MS method for the determination of 11 plasticizers in beverages and milk.

    PubMed

    Ates, Ebru; Mittendorf, Klaus; Senyuva, Hamide

    2013-01-01

    An automated sample preparation technique involving cleanup and analytical separation in a single operation using an online coupled TurboFlow (RP-LC system) is reported. This method eliminates time-consuming sample preparation steps that can be potential sources for cross-contamination in the analysis of plasticizers. Using TurboFlow chromatography, liquid samples were injected directly into the automated system without previous extraction or cleanup. Special cleanup columns enabled specific binding of target compounds; higher MW compounds, i.e., fats and proteins, and other matrix interferences with different chemical properties were removed to waste, prior to LC/MS/MS. Systematic stepwise method development using this new technology in the food safety area is described. Selection of optimum columns and mobile phases for loading onto the cleanup column followed by transfer onto the analytical column and MS detection are critical method parameters. The method was optimized for the assay of 10 phthalates (dimethyl, diethyl, dipropyl, butyl benzyl, diisobutyl, dicyclohexyl, dihexyl, diethylhexyl, diisononyl, and diisododecyl) and one adipate (diethylhexyl) in beverages and milk.

  20. Longitudinal Analysis of New Information Types in Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei; Melton, Genevieve B.

    2014-01-01

    It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous, significant, and may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to identify redundant versus relevant new information in clinical reports. These methods may provide a valuable approach to extract clinically pertinent information and further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information highly correlated with manual reference standard annotations. Methods to identify different types of new information can potentially help to build up more robust information extraction systems for clinical researchers as well as aid clinicians and researchers in navigating clinical notes more effectively and quickly identify information pertaining to changes in health states. PMID:25717418

  1. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trujillo, Christopher J.

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing unique to applications aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such an effective method for edge detection and point extraction can prove advantageous in analyzing these unique datasets and provide consistency in producing results.
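
    Since the SWIFT-specific algorithm is not described in this abstract, the following sketch uses the standard Canny detector from scikit-image as a generic stand-in for the edge-detection-and-point-extraction step.

        import numpy as np
        from skimage import data, feature

        image = data.camera()                       # placeholder for a SWIFT frame
        edges = feature.canny(image, sigma=2.0)     # boolean edge map
        points = np.argwhere(edges)                 # (row, col) edge coordinates
        print(points.shape[0], "edge points extracted")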

  2. ACCELERATED SOLVENT EXTRACTION COMBINED WITH ...

    EPA Pesticide Factsheets

    A research project was initiated to address a recurring problem of elevated detection limits above required risk-based concentrations for the determination of semivolatile organic compounds in high moisture content solid samples. This project was initiated, in cooperation with the EPA Region 1 Laboratory, under the Regional Methods Program administered through the ORD Office of Science Policy. The aim of the project was to develop an approach for the rapid removal of water in high moisture content solids (e.g., wetland sediments) in preparation for analysis via Method 8270. Alternative methods for water removal have been investigated to enhance compound solid concentrations and improve extraction efficiency, with the use of pressure filtration providing a high-throughput alternative for removal of the majority of free water in sediments and sludges. In order to eliminate problems with phase separation during extraction of solids using Accelerated Solvent Extraction, a variation of a water-isopropanol extraction method developed at the USGS National Water Quality Laboratory in Denver, CO is being employed. The concentrations of target compounds in water-isopropanol extraction fluids are subsequently analyzed using an automated Solid Phase Extraction (SPE)-GC/MS method developed in our laboratory. The coupled approaches for dewatering, extraction, and target compound identification-quantitation provide a useful alternative to enhance sample throughput for Me

  3. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    NASA Astrophysics Data System (ADS)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  4. Automated microfluidic devices integrating solid-phase extraction, fluorescent labeling, and microchip electrophoresis for preterm birth biomarker analysis.

    PubMed

    Sahore, Vishal; Sonker, Mukul; Nielsen, Anna V; Knob, Radim; Kumar, Suresh; Woolley, Adam T

    2018-01-01

    We have developed multichannel integrated microfluidic devices for automated preconcentration, labeling, purification, and separation of preterm birth (PTB) biomarkers. We fabricated multilayer poly(dimethylsiloxane)-cyclic olefin copolymer (PDMS-COC) devices that perform solid-phase extraction (SPE) and microchip electrophoresis (μCE) for automated PTB biomarker analysis. The PDMS control layer had a peristaltic pump and pneumatic valves for flow control, while the PDMS fluidic layer had five input reservoirs connected to microchannels and a μCE system. The COC layers had a reversed-phase octyl methacrylate porous polymer monolith for SPE and fluorescent labeling of PTB biomarkers. We determined μCE conditions for two PTB biomarkers, ferritin (Fer) and corticotropin-releasing factor (CRF). We used these integrated microfluidic devices to preconcentrate and purify off-chip-labeled Fer and CRF in an automated fashion. Finally, we performed a fully automated on-chip analysis of unlabeled PTB biomarkers, involving SPE, labeling, and μCE separation with 1 h total analysis time. These integrated systems have strong potential to be combined with upstream immunoaffinity extraction, offering a compact sample-to-answer biomarker analysis platform.

  5. Automated mini-column solid-phase extraction cleanup for high-throughput analysis of chemical contaminants in foods by low-pressure gas chromatography – tandem mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography – tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. ...

  6. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format

    PubMed Central

    2011-01-01

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments. PMID:22136293

  7. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format.

    PubMed

    Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E

    2011-12-02

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments.

  8. Semi-automated solid phase extraction method for the mass spectrometric quantification of 12 specific metabolites of organophosphorus pesticides, synthetic pyrethroids, and select herbicides in human urine.

    PubMed

    Davis, Mark D; Wade, Erin L; Restrepo, Paula R; Roman-Esteva, William; Bravo, Roberto; Kuklenyik, Peter; Calafat, Antonia M

    2013-06-15

    Organophosphate and pyrethroid insecticides and phenoxyacetic acid herbicides represent important classes of pesticides applied in commercial and residential settings. Interest in assessing the extent of human exposure to these pesticides exists because of their widespread use and their potential adverse health effects. An analytical method for measuring 12 biomarkers of several of these pesticides in urine has been developed. The target analytes were extracted from one milliliter of urine by a semi-automated solid phase extraction technique, separated from each other and from other urinary biomolecules by reversed-phase high performance liquid chromatography, and detected using tandem mass spectrometry with isotope dilution quantitation. This method can be used to measure all the target analytes in one injection, with repeatability and detection limits similar to those of previous methods that required more than one injection. Each step of the procedure was optimized to produce a robust, reproducible, accurate, precise, and efficient method. The required selectivity and sensitivity for trace-level analysis (e.g., limits of detection below 0.5 ng/mL) were achieved using a narrow diameter analytical column, higher than unit mass resolution for certain analytes, and stable isotope labeled internal standards. The method was applied to the analysis of 55 samples collected from adult anonymous donors with no known exposure to the target pesticides. This efficient and cost-effective method is adequate to handle the large number of samples required for national biomonitoring surveys. Published by Elsevier B.V.

  9. Automated railroad reconstruction from remote sensing image based on texture filter

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Lu, Kaixia

    2018-03-01

    Techniques of remote sensing have improved remarkably in recent years, and very accurate, high-resolution images can now be acquired, offering a practical basis for reconstructing railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. Firstly, edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using the Gabor filter. Secondly, two response images with filtering orientations perpendicular to each other are fused to suppress noise and obtain long, smooth stripe regions of railroads. Thirdly, a set of smooth regions is extracted by first computing a global threshold for the fused image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and found to deliver very accurate results quickly and in a highly automated manner.
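
    The three steps map naturally onto standard library calls; the sketch below assumes scikit-image, with illustrative frequency and orientation values and a magnitude fusion of the two perpendicular responses standing in for the paper's fusion rule.

        import numpy as np
        from skimage import data, filters

        image = data.camera().astype(float)              # placeholder scene
        real0, _ = filters.gabor(image, frequency=0.1, theta=0.0)
        real90, _ = filters.gabor(image, frequency=0.1, theta=np.pi / 2)

        fused = np.sqrt(real0 ** 2 + real90 ** 2)        # fuse perpendicular responses
        mask = fused > filters.threshold_otsu(fused)     # global Otsu binarization
        print(mask.sum(), "candidate railroad pixels")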

  10. SU-G-206-01: A Fully Automated CT Tool to Facilitate Phantom Image QA for Quantitative Imaging in Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahi-Anwar, M; Lo, P; Kim, H

    Purpose: The use of Quantitative Imaging (QI) methods in clinical trials requires both verification of adherence to a specified protocol and an assessment of scanner performance under that protocol, which are currently accomplished manually. This work introduces automated phantom identification and image QA measure extraction towards a fully-automated CT phantom QA system to perform these functions and facilitate the use of QI methods in clinical trials. Methods: This study used a retrospective cohort of CT phantom scans from existing clinical trial protocols, totaling 84 phantoms across 3 phantom types, acquired on various scanners with various protocols. The QA system identifies the input phantom scan through an ensemble of threshold-based classifiers. Each classifier, corresponding to a phantom type, contains a template slice, which is compared to the input scan on a slice-by-slice basis, yielding slice-wise similarity metric values. Pre-trained thresholds (established from a training set of phantom images matching the template type) are used to filter the similarity distribution, and the slice with the best local mean similarity whose neighboring slices also meet the threshold requirement is chosen as the classifier's matched slice (if one exists). The classifier whose matched slice has the best local mean similarity is then chosen as the ensemble's best match. If a best matching slice exists, the image QA algorithm and ROIs corresponding to the matching classifier extract the image QA measures. Results: Automated phantom identification performed with 84.5% accuracy and 88.8% sensitivity on 84 phantoms. Automated image quality measurements (following standard protocol) on identified water phantoms (n=35) matched user QA decisions with 100% accuracy. Conclusion: We provide a fully-automated CT phantom QA system consistent with manual QA performance. Further work will include a parallel component to automatically verify image acquisition parameters and automated adherence to specifications.

  11. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and toward identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. The method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins, and tested it on a previously unseen set of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist in the automatic early detection of cardiovascular disease and in risk analysis.

  12. INVESTIGATION OF ARSENIC SPECIATION ON DRINKING WATER TREATMENT MEDIA UTILIZING AUTOMATED SEQUENTIAL CONTINUOUS FLOW EXTRACTION WITH IC-ICP-MS DETECTION

    EPA Science Inventory

    Three treatment media, used for the removal of arsenic from drinking water, were sequentially extracted using 10 mM MgCl2 (pH 8) and 10 mM NaH2PO4 (pH 7), followed by 10 mM (NH4)2C2O4 (pH 3). The media were extracted using an on-line automated continuous extraction system which allowed...

  13. Digital Correlation In Laser-Speckle Velocimetry

    NASA Technical Reports Server (NTRS)

    Gilbert, John A.; Mathys, Donald R.

    1992-01-01

    Periodic recording helps to eliminate spurious results. An improved digital-correlation process extracts the velocity field of a two-dimensional flow from laser-speckle images of seed particles distributed sparsely in the flow. The method, which involves digital correlation of images recorded at unequal intervals, is completely automated and has the potential to be the fastest yet.

  14. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  15. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  16. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
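
    A sketch of the PCA-based extraction/selection stage feeding a classifier, assuming scikit-learn rather than the MATLAB used in these papers; substituting FastICA for PCA gives the ICA variant, and the ROI chips and labels are randomly generated placeholders.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        rois = rng.random((100, 32 * 32))    # hypothetical flattened ROI chips
        labels = rng.integers(0, 2, 100)     # 1 = target, 0 = clutter

        atr = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
        atr.fit(rois, labels)
        print(atr.score(rois, labels))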

  17. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implants, transplants, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The automatic segmentation was evaluated by comparison with manual segmentation, and, to further evaluate the method using morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved high overall sensitivity and very low false-positive rates per image, and we detected no statistically significant difference between the distributions of features extracted from the manual and pipeline segmentations. The method presented good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and thus leading to more reliable results.

  18. Comparison of DNA extraction methods for human gut microbial community profiling.

    PubMed

    Lim, Mi Young; Song, Eun-Ji; Kim, Sang Ho; Lee, Jangwon; Nam, Young-Do

    2018-03-01

    The human gut harbors a vast range of microbes that have significant impact on health and disease. Therefore, gut microbiome profiling holds promise for use in early diagnosis and precision medicine development. Accurate profiling of the highly complex gut microbiome requires DNA extraction methods that provide sufficient coverage of the original community as well as adequate quality and quantity. We tested nine different DNA extraction methods using three commercial kits (the TianLong Stool DNA/RNA Extraction Kit (TS), QIAamp DNA Stool Mini Kit (QS), and QIAamp PowerFecal DNA Kit (QP)), with or without an additional bead-beating step, using manual or automated methods, and compared their ability to extract DNA from a human fecal sample. All methods produced DNA of sufficient concentration and quality for use in sequencing, and the samples clustered according to the DNA extraction method. Inclusion of a bead-beating step in particular resulted in higher degrees of microbial diversity and had the greatest effect on gut microbiome composition. Among the samples subjected to bead-beating, TS kit samples were more similar to QP kit samples than to QS kit samples. Our results emphasize the importance of a mechanical disruption step for more comprehensive profiling of the human gut microbiome. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.

  19. Facial recognition techniques applied to the automated registration of patients in the emergency treatment of head injuries.

    PubMed

    Gooroochurn, M; Kerr, D; Bouazza-Marouf, K; Ovinis, M

    2011-02-01

    This paper describes the development of a registration framework for image-guided solutions to the automation of certain routine neurosurgical procedures. The registration process aligns the pose of the patient in the preoperative space to that of the intraoperative space. Computerized tomography images are used in the preoperative (planning) stage, whilst white light (TV camera) images are used to capture the intraoperative pose. Craniofacial landmarks, rather than artificial markers, are used as the registration basis for the alignment. To create further synergy between the user and the image-guided system, automated methods for extraction of these landmarks have been developed. The results obtained from the application of a polynomial neural network classifier based on Gabor features for the detection and localization of the selected craniofacial landmarks, namely the ear tragus and eye corners in the white light modality are presented. The robustness of the classifier to variations in intensity and noise is analysed. The results show that such a classifier gives good performance for the extraction of craniofacial landmarks.

  20. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, and segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the image in order to extract the long, thin protrusions on spiky cells. Then, the constraint factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed segmentation, our method achieves higher accuracy against the ground truth, that is, manual labelling of RNAi screening data by experts. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in the automatic image analysis of multi-channel image screening data.

  1. Iterative variational mode decomposition based automated detection of glaucoma using fundus images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Kanhangad, Vivek; Bhandary, Sulatha V; Acharya, U Rajendra

    2017-09-01

    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision; they are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for the automated diagnosis of glaucoma using digital fundus images. The variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features, namely Kapoor entropy, Renyi entropy, Yager entropy, and fractal dimensions, are extracted from the VMD components. The ReliefF algorithm is used to select the discriminatory features, which are then fed to a least-squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid ophthalmologists in confirming their manual reading of classes (glaucoma or normal) from fundus images. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS, and an automated software program was used to extract information relevant to exposure from the structured elements of the DICOM metadata in these files. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
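
    Reading exposure-related elements out of DICOM metadata is straightforward to sketch, here assuming the pydicom package (the study used its own automated program, not pydicom) and pydicom's bundled sample CT slice; tag availability varies by vendor, hence the defensive getattr calls.

        import pydicom
        from pydicom.data import get_testdata_file

        # pydicom's bundled sample CT slice stands in for a real dose-report object.
        ds = pydicom.dcmread(get_testdata_file("CT_small.dcm"))
        for field in ("KVP", "XRayTubeCurrent", "ExposureTime",
                      "CTDIvol", "SliceThickness"):
            print(field, getattr(ds, field, "not present"))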

  3. Automated classification of optical coherence tomography images of human atrial tissue

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-10-01

    Tissue composition of the atria plays a critical role in the pathology of cardiovascular disease, tissue remodeling, and arrhythmogenic substrates. Optical coherence tomography (OCT) has the ability to capture the tissue composition information of the human atria. In this study, we developed a region-based automated method to classify tissue compositions in OCT images of human atrial samples. We segmented regional information without prior knowledge of the tissue architecture and subsequently extracted features within each segmented region. A relevance vector machine model was used to perform automated classification. Segmentation of human atrial ex vivo datasets was correlated with trichrome histology, and our classification algorithm had an average accuracy of 80.41% for identifying adipose, myocardium, fibrotic myocardium, and collagen tissue compositions.

  4. AUTOMATED PRESSURIZED FLUID EXTRACTION WITH IN-LINE CLEAN-UP FOR DETERMINATION OF PESTICIDES IN COMPOSITE DIETS.

    EPA Science Inventory

    USEPA's National Exposure Research Laboratory conducts research to measure the exposure of individuals to chemical pollutants through the diet, as well as other media. In support of this research, methods are being evaluated for determination of organophosphate pesticides in co...

  5. Tapping into the Hexagon spy imagery database: A new automated pipeline for geomorphic change detection

    NASA Astrophysics Data System (ADS)

    Maurer, Joshua; Rupper, Summer

    2015-10-01

    Declassified historical imagery from the Hexagon spy satellite database has near-global coverage, yet remains a largely untapped resource for geomorphic change studies. Unavailable satellite ephemeris data make DEM (digital elevation model) extraction difficult in terms of time and accuracy. A new fully-automated pipeline for DEM extraction and image orthorectification is presented which yields accurate results and greatly increases efficiency over traditional photogrammetric methods, making the Hexagon image database much more appealing and accessible. A 1980 Hexagon DEM is extracted and geomorphic change computed for the Thistle Creek Landslide region in the Wasatch Range of North America to demonstrate an application of the new method. Surface elevation changes resulting from the landslide show an average elevation decrease of 14.4 ± 4.3 m in the source area, an increase of 17.6 ± 4.7 m in the deposition area, and a decrease of 30.2 ± 5.1 m resulting from a new roadcut. Two additional applications of the method include volume estimates of material excavated during the Mount St. Helens volcanic eruption and the volume of net ice loss over a 34-year period for glaciers in the Bhutanese Himalayas. These results show the value of Hexagon imagery in detecting and quantifying historical geomorphic change, especially in regions where other data sources are limited.
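
    The change-detection arithmetic itself is simple: difference the co-registered DEMs and propagate their independent vertical errors in quadrature. A minimal numpy sketch with hypothetical grids and error values:

        import numpy as np

        rng = np.random.default_rng(1)
        dem_1980 = rng.uniform(2000, 2500, (500, 500))             # stand-in Hexagon DEM (m)
        dem_2015 = dem_1980 + rng.normal(-14.4, 2.0, (500, 500))   # stand-in modern DEM (m)
        sigma_1980, sigma_2015 = 3.5, 2.5                          # hypothetical 1-sigma errors (m)

        change = dem_2015 - dem_1980                      # surface elevation change
        sigma_change = np.hypot(sigma_1980, sigma_2015)   # independent errors in quadrature

        mask = np.zeros_like(change, dtype=bool)          # hypothetical source-area polygon
        mask[100:200, 100:200] = True
        cell = 30.0                                       # hypothetical grid spacing (m)
        volume = change[mask].sum() * cell ** 2           # net volume change (m^3)
        print(f"mean change: {change[mask].mean():.1f} +/- {sigma_change:.1f} m, "
              f"net volume: {volume:.0f} m^3")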

  6. Intelligent MRTD testing for thermal imaging system using ANN

    NASA Astrophysics Data System (ADS)

    Sun, Junyue; Ma, Dongmei

    2006-01-01

    The minimum resolvable temperature difference (MRTD) is the most widely accepted figure of merit for describing the performance of a thermal imaging system, and many models have been proposed to predict it. MRTD testing is a psychophysical task for which observer biases are unavoidable. It requires laboratory conditions such as normal air conditioning and a constant temperature, needs expensive measuring equipment, and takes a considerable period of time; especially when measuring imagers of the same type, the test is time consuming. An automated, intelligent measurement method should therefore be considered. This paper adopts the concept of automated MRTD testing using a boundary contour system and fuzzy ARTMAP, but uses different methods: it describes an automated MRTD testing procedure based on a back-propagation network. First, a frame grabber captures the 4-bar target image data. Then, according to the image gray scale, the image is segmented to locate the 4-bar pattern, and a feature vector representing the image characteristics and human detection ability is extracted. These feature sets, along with known target visibility, are used to train the ANN (artificial neural network). In effect, this is a nonlinear classification (over the input dimensions) of the image series using the ANN, whose task is to judge whether an image is resolvable or uncertain. The trained ANN then emulates observer performance in determining the MRTD. This method can reduce the uncertainties between observers and long-term time-dependent factors through standardization. The paper introduces the feature extraction algorithm, demonstrates the feasibility of the whole process, and gives the accuracy of the MRTD measurement.
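
    The final classification step can be sketched, under stated assumptions, as training a small back-propagation network on per-image feature vectors labeled resolvable or not; the feature vectors below are random stand-ins for those extracted from 4-bar target images.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)
        X = rng.standard_normal((200, 12))               # stand-in 4-bar image features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in "resolvable" labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        # A small back-propagation network in the spirit of the BP-network approach
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        print("hold-out accuracy:", net.score(X_te, y_te))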

  7. Evaluating Unsupervised Methods to Size and Classify Suspended Particles Using Digital Holography

    NASA Astrophysics Data System (ADS)

    Davies, E. J.; Buscombe, D.; Graham, G.; Nimmo-Smith, A.

    2013-12-01

    The use of digital holography to image suspended particles in situ using submersible systems is in the ascendant. Such systems allow visualization of in-focus particles without the depth-of-field issues associated with conventional imaging. The size and concentration of all particles, and of each individual particle, can be rapidly and automatically assessed, and the automated methods used to extract these quantities can be readily evaluated against manual measurements. Such evaluation is not possible with instruments based on optical and acoustic (back- or forward-) scattering, the so-called 'sediment surrogate' methods, which are sensitive to the bulk quantities of all suspended particles in a sample volume and rely on mathematically inverting a measured signal to derive the property of interest. Depending on the intended application, the number of holograms required to elucidate a process could range from tens to millions, so manual particle extraction is not feasible for most data-sets. This has created a pressing need, among the growing community of holography users, for accurate automated processing whose output is comparable to more well-established in-situ sizing techniques such as laser diffraction. Here we discuss the computational considerations required to focus and segment individual particles from raw digital holograms, and then to size and classify these particles by type, all using unsupervised (automated) image processing. To do so, we draw upon imagery from controlled laboratory conditions as well as near-shore coastal environments, using different holographic system designs, and covering a significant variety of particle types, sizes, and shapes. We evaluate the success of these techniques and suggest directions for future developments.

  8. A data-driven multiplicative fault diagnosis approach for automation processes.

    PubMed

    Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo

    2014-09-01

    This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.

  9. Determination of 21 drugs in oral fluid using fully automated supported liquid extraction and UHPLC-MS/MS.

    PubMed

    Valen, Anja; Leere Øiestad, Åse Marit; Strand, Dag Helge; Skari, Ragnhild; Berg, Thomas

    2017-05-01

    Collection of oral fluid (OF) is easy and non-invasive compared to the collection of urine and blood, and interest in OF for drug screening and diagnostic purposes is increasing. A high-throughput ultra-high-performance liquid chromatography-tandem mass spectrometry method for determination of 21 drugs in OF using fully automated 96-well plate supported liquid extraction for sample preparation is presented. The method contains a selection of classic drugs of abuse, including amphetamines, cocaine, cannabis, opioids, and benzodiazepines. The method was fully validated for 200 μL OF/buffer mix using an Intercept OF sampling kit; validation included linearity, sensitivity, precision, accuracy, extraction recovery, matrix effects, stability, and carry-over. Inter-assay precision (RSD) and accuracy (relative error) were <15% and -13 to 5%, respectively, for all compounds at concentrations equal to or higher than the lower limit of quantification. Extraction recoveries were between 58 and 76% (RSD < 8%), except for tetrahydrocannabinol and three 7-amino benzodiazepine metabolites, with recoveries between 23 and 33% (RSD between 51 and 52% and 11 and 25%, respectively). Ion enhancement or ion suppression effects were observed for a few compounds; however, they were largely compensated for by the internal standards used. Deuterium-labelled and 13C-labelled internal standards were used for 8 and 11 of the compounds, respectively. In a comparison between Intercept and Quantisal OF kits, better recoveries and fewer matrix effects were observed for some compounds using Quantisal. The method is sensitive and robust for its purposes and has been used successfully since February 2015 for analysis of Intercept OF samples from 2600 cases in a 12-month period. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Rapid determination of six carcinogenic primary aromatic amines in mainstream cigarette smoke by two-dimensional online solid phase extraction combined with liquid chromatography tandem mass spectrometry.

    PubMed

    Bie, Zhenying; Lu, Wei; Zhu, You; Chen, Yusong; Ren, Hubo; Ji, Lishun

    2017-01-27

    A fully automated, rapid, and reliable method for simultaneous determination of six carcinogenic primary aromatic amines (AAs), including o-toluidine (o-TOL), 2,6-dimethylaniline (2,6-DMA), o-anisidine (o-ASD), 1-naphthylamine (1-ANP), 2-naphthylamine (2-ANP), and 4-aminobiphenyl (4-ABP), in mainstream cigarette smoke was established. The proposed method was based on two-dimensional online solid phase extraction combined with liquid chromatography tandem mass spectrometry (SPE/LC-MS/MS). The particulate phase of the mainstream cigarette smoke was collected on a Cambridge filter pad and pretreated via ultrasonic extraction with 2% formic acid (FA), while the gas phase was trapped in 2% FA and determined without further pretreatment. The two-dimensional online SPE, comprising two cartridges with different absorption characteristics, was applied for sample pretreatment. Analysis was performed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) under multiple reaction monitoring mode. Each sample required about 0.5 h for solid phase extraction and analysis. The limits of detection (LODs) for the six AAs ranged from 0.04 to 0.58 ng/cig, and recoveries were within 84.5%-122.9%. The relative standard deviations of intra- and inter-day tests were less than 6% and 7%, respectively, for the 3R4F reference cigarette, and no more than 7% and 8%, respectively, for a Virginia-type cigarette. The proposed method enabled minimal sample pretreatment, full automation, and high throughput with high selectivity, sensitivity, and accuracy. As part of the validation procedure, fifteen brands of cigarettes were tested by the designed method. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Automated Nucleic Acid Extraction Systems for Detecting Cytomegalovirus and Epstein-Barr Virus Using Real-Time PCR: A Comparison Study Between the QIAsymphony RGQ and QIAcube Systems.

    PubMed

    Kim, Hanah; Hur, Mina; Kim, Ji Young; Moon, Hee Won; Yun, Yeo Min; Cho, Hyun Chan

    2017-03-01

    Cytomegalovirus (CMV) and Epstein-Barr virus (EBV) are increasingly important in immunocompromised patients. Nucleic acid extraction methods could affect the results of viral nucleic acid amplification tests. We compared two automated nucleic acid extraction systems for detecting CMV and EBV using real-time PCR assays. One hundred and fifty-three whole blood (WB) samples were tested for CMV detection, and 117 WB samples were tested for EBV detection. Viral nucleic acid was extracted in parallel by using QIAsymphony RGQ and QIAcube (Qiagen GmbH, Germany), and real-time PCR assays for CMV and EBV were performed with a Rotor-Gene Q real-time PCR cycler (Qiagen). Detection rates for CMV and EBV were compared, and agreements between the two systems were analyzed. The detection rate of CMV and EBV differed significantly between the QIAsymphony RGQ and QIAcube systems (CMV, 59.5% [91/153] vs 43.8% [67/153], P=0.0005; EBV, 59.0% [69/117] vs 42.7% [50/117], P=0.0008). The two systems showed moderate agreement for CMV and EBV detection (kappa=0.43 and 0.52, respectively). QIAsymphony RGQ showed a negligible correlation with QIAcube for quantitative EBV detection. QIAcube exhibited EBV PCR inhibition in 23.9% (28/117) of samples. Automated nucleic acid extraction systems have different performances and significantly affect the detection of viral pathogens. The QIAsymphony RGQ system appears to be superior to the QIAcube system for detecting CMV and EBV. A suitable sample preparation system should be considered for optimized nucleic acid amplification in clinical laboratories.
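
    The reported agreement figures correspond to Cohen's kappa computed on the paired positive/negative calls of the two systems; a minimal sketch with hypothetical call vectors:

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical paired CMV calls (1 = detected) from the two extraction systems
        qiasymphony = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
        qiacube     = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
        print("kappa:", cohen_kappa_score(qiasymphony, qiacube))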

  12. High-volume extraction of nucleic acids by magnetic bead technology for ultrasensitive detection of bacteria in blood components.

    PubMed

    Störmer, Melanie; Kleesiek, Knut; Dreier, Jens

    2007-01-01

    Nucleic acid isolation, the most technically demanding and laborious procedure performed in molecular diagnostics, harbors the potential for improvements in automation. A recent development is the use of magnetic beads covered with nucleic acid-binding matrices. We adapted this technology with a broad-range 23S rRNA real-time reverse transcription (RT)-PCR assay for fast and sensitive detection of bacterial contamination of blood products. We investigated different protocols for an automated high-volume extraction method based on magnetic-separation technology for the extraction of bacterial nucleic acids from platelet concentrates (PCs). We added 2 model bacteria, Staphylococcus epidermidis and Escherichia coli, to a single pool of apheresis-derived, single-donor platelets and assayed the PCs by real-time RT-PCR analysis with an improved primer-probe system and locked nucleic acid technology. Co-amplification of human β2-microglobulin mRNA served as an internal control (IC). We used probit analysis to calculate the minimum concentration of bacteria that would be detected with 95% confidence. For automated magnetic bead-based extraction technology with the real-time RT-PCR, the 95% detection limit was 29 × 10³ colony-forming units (CFU)/L for S. epidermidis and 22 × 10³ CFU/L for E. coli. No false-positive results occurred, either due to nucleic acid contamination of reagents or externally during testing of 1030 PCs. High-volume nucleic acid extraction improved the detection limit of the assay. The improvement of the primer-probe system and the integration of an IC make the RT-PCR assay appropriate for bacteria screening of platelets.

  13. Simultaneous extraction of centerlines, stenosis, and thrombus detection in renal CT angiography

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Durgan, Jacob; Hodgkiss, Thomas D.; Chandra, Shalabh

    2004-05-01

    Renal artery stenosis (RAS) is the major cause of renovascular hypertension, and CT angiography has shown tremendous promise as a noninvasive method for reliably detecting it. The purpose of this study was to validate semi-automated methods that assist in the extraction of renal branches and the characterization of the associated renal artery stenosis. Automatically computed diagnostic images such as straight MIP, curved MPR, cross-sections, and diameters from multi-slice CT are presented and evaluated for acceptance. We used vessel-tracking image processing methods to extract the aortic-renal vessel tree from axial CT slice images. Next, from the topology and anatomy of the aortic vessel tree, the stenosis, thrombus section, and branching of the renal arteries were extracted. The results are presented in curved MPR and continuously variable MIP images. In this study, 15 patients were scanned with contrast on an Mx8000 CT scanner (Philips Medical Systems) at 1.0 mm slice thickness, 0.5 mm slice spacing, and 120 kVp, and stacks of 512x512x150 volume sets were reconstructed. The automated image processing took less than 50 seconds to compute the centerline and borders of the aortic/renal vessel tree. The overall assessment of manually and automatically generated stenosis yielded a weighted kappa statistic of 0.97 at the right renal arteries and 0.94 at the left renal branches. The thrombus regions contoured manually and semi-automatically agreed with a kappa of 0.93. The manual time to process each case is approximately 25 to 30 minutes.

  14. A study on automated anatomical labeling to arteries concerning with colon from 3D abdominal CT images

    NASA Astrophysics Data System (ADS)

    Hoang, Bui Huy; Oda, Masahiro; Jiang, Zhengang; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku

    2011-03-01

    This paper presents an automated anatomical labeling method for arteries extracted from contrasted 3D CT images, based on multi-class AdaBoost. In abdominal surgery, understanding the vasculature related to a target organ such as the colon is very important, so the anatomical structure of blood vessels needs to be understood by computers in a system supporting abdominal surgery. There are several studies on automated anatomical labeling, but none on automated anatomical labeling of arteries related to the colon. The proposed method obtains a tree structure of arteries from the artery region and calculates feature values for each branch. These feature values are the thickness, curvature, direction, and running vectors of the branch. Candidate arterial names are then computed by classifiers that are trained to output artery names. Finally, a global optimization process is applied to the candidate arterial names to determine the final names. The target arteries of this paper are nine lower abdominal arteries (AO, LCIA, RCIA, LEIA, REIA, SMA, IMA, LIIA, RIIA). We applied the proposed method to 14 cases of 3D abdominal contrasted CT images and evaluated the results by a leave-one-out scheme. The average precision and recall rates of the proposed method were 87.9% and 93.3%, respectively. The results of this method are applicable to anatomical name display in surgical simulation and computer aided surgery.
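
    A sketch of the branch-classification step, assuming each branch is summarized by a numeric feature vector (thickness, curvature, direction, and running-vector components) and labeled with one of the nine artery names; scikit-learn's multi-class AdaBoost is used here on random stand-in data, and the global optimization step is omitted:

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        NAMES = ['AO', 'LCIA', 'RCIA', 'LEIA', 'REIA', 'SMA', 'IMA', 'LIIA', 'RIIA']

        rng = np.random.default_rng(3)
        X = rng.standard_normal((270, 10))          # stand-in per-branch feature vectors
        y = rng.integers(0, len(NAMES), size=270)   # stand-in true artery labels

        clf = AdaBoostClassifier(n_estimators=200, random_state=0)
        clf.fit(X, y)
        # Candidate anatomical name for one branch, before global optimization
        print(NAMES[int(clf.predict(X[:1])[0])])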

  15. System and method for automated object detection in an image

    DOEpatents

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated according to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  16. Fast automated dual-syringe based dispersive liquid-liquid microextraction coupled with gas chromatography-mass spectrometry for the determination of polycyclic aromatic hydrocarbons in environmental water samples.

    PubMed

    Guo, Liang; Tan, Shufang; Li, Xiao; Lee, Hian Kee

    2016-03-18

    An automated procedure combining low-density-solvent-based solvent-demulsification dispersive liquid-liquid microextraction (DLLME) with gas chromatography-mass spectrometry was developed for the determination of polycyclic aromatic hydrocarbons (PAHs) in environmental water samples. Capitalizing on a two-rail commercial autosampler, the procedure enabled fast solvent transfer using a large-volume syringe dedicated to the DLLME process and convenient extract collection using a small-volume microsyringe for better GC performance. Extraction parameters, including the type and volume of extraction solvent, the type and volume of dispersive and demulsification solvents, extraction and demulsification time, and the speed of solvent injection, were investigated and optimized. Under the optimized conditions, the linearity ranged from 0.1 to 50 μg/L, 0.2 to 50 μg/L, or 0.5 to 50 μg/L, depending on the analytes. Limits of detection were determined to be between 0.023 and 0.058 μg/L. The method was applied to determine PAHs in environmental water samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Automated feature detection and identification in digital point-ordered signals

    DOEpatents

    Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.

    1998-01-01

    A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, the features are verified using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically, without initial operator set-up and without subjective operator feature judgement.
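
    The morphology-filter stage can be illustrated with a grey-scale top-hat, which subtracts a morphological opening from the signal, removing a slowly varying baseline so that candidate features stand out. A sketch on a synthetic point-ordered signal (the structuring-element width is an assumption):

        import numpy as np
        from scipy.ndimage import white_tophat

        x = np.linspace(0, 10, 1000)
        baseline = 0.5 * np.sin(0.3 * x)                     # slow drift
        signal = baseline + np.exp(-((x - 3) ** 2) / 0.01) \
                          + np.exp(-((x - 7) ** 2) / 0.01)   # two standard features

        # Grey-scale top-hat: signal minus its morphological opening. A structuring
        # element wider than any feature flattens the peaks, so the residue is the features.
        residue = white_tophat(signal, size=81)
        peaks = np.where(residue > 0.5)[0]
        print("feature samples detected between indices:", peaks.min(), "and", peaks.max())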

  18. Evaluation and comparison of FTA card and CTAB DNA extraction methods for non-agricultural taxa.

    PubMed

    Siegel, Chloe S; Stevenson, Florence O; Zimmer, Elizabeth A

    2017-02-01

    An efficient, effective DNA extraction method is necessary for comprehensive analysis of plant genomes. This study analyzed the quality of DNA obtained using paper FTA cards prepared directly in the field when compared to the more traditional cetyltrimethylammonium bromide (CTAB)-based extraction methods from silica-dried samples. DNA was extracted using FTA cards according to the manufacturer's protocol. In parallel, CTAB-based extractions were done using the automated AutoGen DNA isolation system. DNA quality for both methods was determined for 15 non-agricultural species collected in situ, by gel separation, spectrophotometry, fluorometry, and successful amplification and sequencing of nuclear and chloroplast gene markers. The FTA card extraction method yielded less concentrated, but also less fragmented samples than the CTAB-based technique. The card-extracted samples provided DNA that could be successfully amplified and sequenced. The FTA cards are also useful because the collected samples do not require refrigeration, extensive laboratory expertise, or as many hazardous chemicals as extractions using the CTAB-based technique. The relative success of the FTA card method in our study suggested that this method could be a valuable tool for studies in plant population genetics and conservation biology that may involve screening of hundreds of individual plants. The FTA cards, like the silica gel samples, do not contain plant material capable of propagation, and therefore do not require permits from the U.S. Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) for transportation.

  19. Rapid and automated analysis of aflatoxin M1 in milk and dairy products by online solid phase extraction coupled to ultra-high-pressure-liquid-chromatography tandem mass spectrometry.

    PubMed

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Pagano, Imma; Russo, Mariateresa; Rastrelli, Luca

    2016-01-08

    This study reports a fast and automated analytical procedure for the analysis of aflatoxin M1 (AFM1) in milk and dairy products. The method is based on simultaneous protein precipitation and AFM1 extraction by salt-induced liquid-liquid extraction (SI-LLE), followed by online solid-phase extraction (online SPE) coupled to ultra-high-pressure liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) for automatic pre-concentration, clean-up, and sensitive and selective determination of AFM1. The main parameters affecting the extraction efficiency and accuracy of the analytical method were studied in detail. Under the optimal conditions, acetonitrile and NaCl were used as the extraction/denaturant solvent and salting-out agent in SI-LLE, respectively. After centrifugation, the organic phase (acetonitrile) was diluted with water (1:9 v/v) and 1 mL was purified by an online C18 cartridge coupled with a UHPLC column. Finally, selected reaction monitoring (SRM) acquisition mode was applied to the detection of AFM1. Validation studies were carried out on different dairy products (whole and skimmed cow milk, yogurt, goat milk, and powder infant formula), providing method quantification limits about 25 times lower than the AFM1 maximum levels permitted by EU regulation 1881/2006 in milk and dairy products for direct human consumption. Recoveries (86-102%) and repeatability (RSD < 3%, n = 6) meet the performance criteria required by EU regulation No. 401/2006 for the determination of the levels of mycotoxins in foodstuffs. Moreover, no matrix effects were observed in the different milk and dairy products studied. The proposed method improves the performance of AFM1 analysis in milk samples, as AFM1 determination is performed with higher accuracy than conventional methods. Other advantages are the reduction of sample preparation, time, and cost of analysis, enabling the high sample throughput demanded by current concerns of food safety and public health protection. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. A fully automated liquid–liquid extraction system utilizing interface detection

    PubMed Central

    Maslana, Eugene; Schmitt, Robert; Pan, Jeffrey

    2000-01-01

    The development of the Abbott Liquid-Liquid Extraction Station was a result of the need for an automated system to perform aqueous extraction on large sets of newly synthesized organic compounds used for drug discovery. The system utilizes a cylindrical laboratory robot to shuttle sample vials between two loading racks, two identical extraction stations, and a centrifuge. Extraction is performed by detecting the phase interface (by the difference in refractive index) of the moving column of fluid drawn from the bottom of each vial containing a biphasic mixture. The integration of interface detection with fluid extraction maximizes sample throughput. Abbott-developed electronics process the detector signals. Sample mixing is performed by high-speed solvent injection, and centrifuging of the samples reduces interface emulsions. Operating software permits the user to program wash protocols with any one of six solvents per wash cycle, with as many cycle repeats as necessary. Station capacity is eighty 15 ml vials. This system has proven successful with a broad spectrum of both ethyl acetate and methylene chloride based chemistries. The development and characterization of this automated extraction system will be presented. PMID:18924693

  1. Total lipid extraction of homogenized and intact lean fish muscles using pressurized fluid extraction and batch extraction techniques.

    PubMed

    Isaac, Giorgis; Waldebäck, Monica; Eriksson, Ulla; Odham, Göran; Markides, Karin E

    2005-07-13

    The reliability and efficiency of the pressurized fluid extraction (PFE) technique for the extraction of total lipid content from cod, and the effect of sample treatment on the extraction efficiency, have been evaluated. The results were compared with two liquid-liquid extraction methods, the traditional and modified methods according to Jensen. Optimum conditions were found to be 2-propanol/n-hexane (65:35, v/v) as the first and n-hexane/diethyl ether (90:10, v/v) as the second solvent, 115 degrees C, and 10 min of static time. PFE extracts were cleaned up using the same procedure as in the methods according to Jensen. When total lipid yields obtained from homogenized cod muscle using PFE were compared with yields obtained with the original and modified Jensen methods, PFE gave significantly higher yields, approximately 10% higher (t test, P < 0.05). Infrared and NMR spectroscopy suggested that the additional material that inflates the gravimetric results is rather homogeneous and consists primarily of phospholipids with headgroups of inositidic and/or glycosidic nature. The comparative study demonstrated that PFE is a suitable alternative technique for extracting the total lipid content from homogenized cod (lean fish) and herring (fat fish) muscle, showing a precision comparable to that obtained with the traditional and modified Jensen methods. Despite the necessary cleanup step, PFE showed important advantages: solvent consumption was cut by approximately 50%, and automated extraction was possible.

  2. Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.

    PubMed

    Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D

    2018-01-01

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, though trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
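
    A minimal PyTorch sketch of a word-level CNN of the kind described, assuming reports are already tokenized to integer IDs; the vocabulary size, embedding width, filter windows, and the 12-way ICD-O-3 output layer are illustrative choices, not the authors' exact architecture:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ReportCNN(nn.Module):
            def __init__(self, vocab=5000, emb=64, n_filters=100, n_classes=12):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb)
                # Parallel convolutions over 3-, 4-, and 5-word windows
                self.convs = nn.ModuleList(
                    [nn.Conv1d(emb, n_filters, k) for k in (3, 4, 5)])
                self.fc = nn.Linear(3 * n_filters, n_classes)

            def forward(self, tokens):                    # tokens: (batch, seq_len)
                e = self.embed(tokens).transpose(1, 2)    # -> (batch, emb, seq_len)
                pooled = [F.relu(c(e)).max(dim=2).values for c in self.convs]
                return self.fc(torch.cat(pooled, dim=1))  # logits over topography codes

        model = ReportCNN()
        dummy = torch.randint(0, 5000, (2, 300))          # two stand-in tokenized reports
        print(model(dummy).shape)                         # torch.Size([2, 12])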

  3. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  4. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  5. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
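
    The headline finding, that neighborhood-aware features matter more than the classifier, can be sketched by augmenting each voxel's intensity with locally averaged versions of itself before fitting a simple model; the volume and lesion mask below are hypothetical stand-ins for the MRI data:

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        flair = rng.standard_normal((32, 32, 32))        # stand-in FLAIR volume
        labels = (flair > 1.5).astype(int)               # stand-in manual lesion mask

        # Voxel-wise features: raw intensity plus neighborhood means at two scales
        feats = np.stack([flair,
                          uniform_filter(flair, size=3),
                          uniform_filter(flair, size=7)], axis=-1)
        X = feats.reshape(-1, 3)
        y = labels.ravel()

        clf = LogisticRegression(max_iter=500).fit(X, y)
        print("training accuracy:", clf.score(X, y))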

  6. Simplified multiple headspace extraction gas chromatographic technique for determination of monomer solubility in water.

    PubMed

    Chai, X S; Schork, F J; DeCinque, Anthony

    2005-04-08

    This paper reports an improved headspace gas chromatographic (GC) technique for determination of monomer solubilities in water. The method is based on a multiple headspace extraction GC technique developed previously [X.S. Chai, Q.X. Hou, F.J. Schork, J. Appl. Polym. Sci., in press], but with the major modification in the method calibration technique. As a result, only a few iterations of headspace extraction and GC measurement are required, which avoids the "exhaustive" headspace extraction, and thus the experimental time for each analysis. For highly insoluble monomers, effort must be made to minimize adsorption in the headspace sampling channel, transportation conduit and capillary column by using higher operating temperature and a short capillary column in the headspace sampler and GC system. For highly water soluble monomers, a new calibration method is proposed. The combinations of these technique modifications results in a method that is simple, rapid and automated. While the current focus of the authors is on the determination of monomer solubility in aqueous solutions, the method should be applicable to determination of solubility of any organic in water.

  7. Spectral Analysis of Breast Cancer on Tissue Microarrays: Seeing Beyond Morphology

    DTIC Science & Technology

    2005-04-01

    [Abstract not present in this record; the excerpt consists of citation fragments, including: Harvey N., Szymanski J.J., Bloch J.J., Mitchell M., investigation of image feature extraction by a genetic algorithm, Proc. SPIE 1999;3812:24-31; automated feature extraction using multiple data sources, Proc. SPIE 2003;5099:190-200; Spectral-Spatial Analysis of Urine Cytology, Angeletti et al.; and, from the appendix contents, Harvey, N.R., Levenson, R.M., Rimm, D.L. (2003) Investigation of Automated Feature Extraction Techniques for Applications in...]

  8. Detection of cardiac activity using a 5.8 GHz radio frequency sensor.

    PubMed

    Vasu, V; Fox, N; Brabetz, T; Wren, M; Heneghan, C; Sezer, S

    2009-01-01

    A 5.8-GHz ISM-band radio-frequency sensor has been developed for non-contact measurement of respiration and heart rate from stationary and semi-stationary subjects at a distance of 0.5 to 1.5 meters. We report on the accuracy of the heart rate measurements obtained using two algorithmic approaches, as compared to a reference heart rate obtained using a pulse oximeter. Simultaneous photoplethysmograph (PPG) and non-contact sensor recordings were made over fifteen-minute periods for ten healthy subjects (8M/2F, ages 29.6 ± 5.6 yrs). One algorithm is based on automated detection of the individual peaks associated with each cardiac cycle; a second algorithm extracts a heart rate over a 60-second period using spectral analysis. Peaks were also extracted manually for comparison with the automated method. The peak-detection methods were less accurate than the spectral methods, but suggest the possibility of acquiring beat-by-beat data; the spectral algorithms measured heart rate to within ±10% for the ten subjects. Non-contact measurement of heart rate will be useful in chronic disease monitoring for conditions such as heart failure and cardiovascular disease.
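
    The spectral approach amounts to locating the dominant frequency of a 60-second window within a plausible cardiac band; a numpy sketch with a synthetic 1.2 Hz (72 bpm) signal and an assumed sampling rate:

        import numpy as np

        fs = 50.0                                  # hypothetical sensor sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)               # one 60-second analysis window
        noise = 0.5 * np.random.default_rng(5).standard_normal(t.size)
        x = np.sin(2 * np.pi * 1.2 * t) + noise    # stand-in cardiac displacement signal

        spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        band = (freqs >= 0.7) & (freqs <= 3.0)     # ~42-180 bpm cardiac band
        hr_hz = freqs[band][np.argmax(spec[band])]
        print(f"estimated heart rate: {60 * hr_hz:.1f} bpm")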

  9. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.

  10. Automated diagnosis of congestive heart failure using dual tree complex wavelet transform and statistical features extracted from 2s of ECG signals.

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San

    2017-04-01

    Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. Therefore, in this work, a dual-tree complex wavelet transform (DTCWT)-based methodology is proposed for automated identification of ECG signals exhibiting CHF versus normal. In the experiment, we performed a DTCWT on ECG segments of 2 s duration, up to six levels, to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver-operating characteristic (ROC), Wilcoxon, t-test, and reliefF methods. Ranked features are subjected to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity, and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method is able to detect CHF patients accurately using only 2 s of ECG signal length, thus providing sufficient time for clinicians to further investigate the severity of CHF and treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.
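
    A hedged sketch of the decomposition-features-classifier pipeline: PyWavelets' ordinary discrete wavelet transform stands in for the DTCWT (which requires a dedicated package), with six decomposition levels, simple statistical features per level, and a KNN classifier on stand-in ECG segments:

        import numpy as np
        import pywt
        from sklearn.neighbors import KNeighborsClassifier

        def dwt_stats(segment, wavelet='db4', level=6):
            # Statistical features (mean |c|, power, std) from each decomposition level
            coeffs = pywt.wavedec(segment, wavelet, level=level)
            return np.concatenate([[np.mean(np.abs(c)), np.mean(c ** 2), np.std(c)]
                                   for c in coeffs])

        rng = np.random.default_rng(6)
        segments = rng.standard_normal((100, 500))     # stand-in 2 s ECG segments
        labels = rng.integers(0, 2, size=100)          # stand-in CHF (1) / normal (0)

        X = np.array([dwt_stats(s) for s in segments])
        knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
        print("training accuracy:", knn.score(X, labels))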

  11. Single-trial event-related potential extraction through one-unit ICA-with-reference

    NASA Astrophysics Data System (ADS)

    Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potentials (ERPs) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of the ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used to guide the one-unit ICA-R in extracting the source signal of the desired ERP directly. Main results. Our results showed that, compared to traditional ICA, ICA-R is a more effective method for analysing ERPs because it avoids manual source selection and requires less computation, thus resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risk of subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.

  12. Single-trial event-related potential extraction through one-unit ICA-with-reference.

    PubMed

    Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    In recent years, ICA has been one of the more popular methods for extracting event-related potentials (ERPs) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of the ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used to guide the one-unit ICA-R in extracting the source signal of the desired ERP directly. Our results showed that, compared to traditional ICA, ICA-R is a more effective method for analysing ERPs because it avoids manual source selection and requires less computation, thus resulting in faster ERP extraction. In addition, since the method is automated, it reduces the risk of subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.
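
    One-unit ICA-R proper requires a constrained optimizer, but its effect, pulling out the single component closest to a reference, can be approximated by running ordinary FastICA and keeping the component best correlated with a reference built over the known ERP time-region. A sketch on synthetic single-trial data:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(7)
        n_ch, n_t = 8, 600
        erp = np.exp(-((np.arange(n_t) - 300) ** 2) / 800.0)    # hidden ERP-like source
        sources = np.vstack([erp, rng.standard_normal((2, n_t))])
        mixing = rng.standard_normal((n_ch, 3))
        eeg = mixing @ sources                                  # stand-in single-trial EEG

        # Reference: a crude square pulse over the a-priori-known ERP time-region
        ref = np.zeros(n_t)
        ref[260:340] = 1.0

        ica = FastICA(n_components=3, random_state=0)
        comps = ica.fit_transform(eeg.T).T                      # (components, time)
        corr = [abs(np.corrcoef(c, ref)[0, 1]) for c in comps]
        est_erp = comps[int(np.argmax(corr))]                   # best-matching component
        print("selected component correlation with reference:", max(corr))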

  13. A novel method of measuring the melting point of animal fats.

    PubMed

    Lloyd, S S; Dawkins, S T; Dawkins, R L

    2014-10-01

    The melting point (TM) of fat is relevant to health, but available methods of determining TM are cumbersome. One of the standard methods of measuring TM for animal and vegetable fats is the slip point, also known as the open capillary method. This method is imprecise and not amenable to automation or mass testing. We have developed a technique for measuring TM of animal fat using the Rotor-Gene Q (Qiagen, Hilden, Germany). The assay has an intra-assay SD of 0.08°C. A single operator can extract and assay up to 250 samples of animal fat in 24 h, including the time to extract the fat from the adipose tissue. This technique will improve the quality of research into genetic and environmental contributions to fat composition of meat.

  14. On chip preconcentration and fluorescence labeling of model proteins by use of monolithic columns: device fabrication, optimization, and automation.

    PubMed

    Yang, Rui; Pagaduan, Jayson V; Yu, Ming; Woolley, Adam T

    2015-01-01

    Microfluidic systems with monolithic columns have been developed for preconcentration and on-chip labeling of model proteins. Monoliths were prepared in microchannels by photopolymerization, and their properties were optimized by varying the composition and concentration of the monomers to improve flow and extraction. On-chip labeling of proteins was achieved by driving solutions through the monolith by use of voltage and then incubating fluorescent dye with the protein retained on the monolith. Subsequently, the labeled proteins were eluted by applying voltages to reservoirs on the microdevice and then detected by monitoring laser-induced fluorescence. Monoliths prepared from octyl methacrylate combined the best protein retention with the possibility of separately eluting unattached fluorescent label with 50% acetonitrile. Finally, automated on-chip extraction and fluorescence labeling of a model protein were successfully demonstrated. This method involves facile sample pretreatment and therefore has potential for the production of integrated bioanalysis microchips.

  15. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required for understanding the physics of a steady flow field. This was because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information, at the expense of a great deal of interaction. For unsteady flow fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not impose a heavy compute burden (the visualization should not significantly slow down the solution procedure in co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  16. Testing the event witnessing status of micro-bloggers from evidence in their micro-blogs

    PubMed Central

    2017-01-01

    This paper demonstrates a framework of processes for identifying potential witnesses of events from evidence they post to social media. The research defines original evidence models for micro-blog content sources, the relative uncertainty of different evidence types, and models for testing evidence by combination. Methods to filter and extract evidence using automated and semi-automated means are demonstrated using a Twitter case study event. Further, an implementation for testing extracted evidence using the Dempster-Shafer theory of evidence is presented. The results indicate that the inclusion of evidence from micro-blog text and linked image content can increase the number of micro-bloggers identified at events, in comparison to the number identified from geotags alone. Additionally, the number of micro-bloggers that can be tested for evidence corroboration or conflict is increased by incorporating evidence identified in their posting history. PMID:29232395
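
    The combination step rests on Dempster's rule: multiply the masses of every pair of focal elements, accumulate the products with empty intersections as conflict K, and renormalize by 1 - K. A self-contained sketch with a hypothetical witness/not-witness frame:

        from itertools import product

        def dempster_combine(m1, m2):
            # Combine two mass functions whose focal elements are frozensets;
            # K is the conflict mass assigned to the empty intersection.
            combined, K = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    K += wa * wb
            return {s: w / (1.0 - K) for s, w in combined.items()}

        W, N = frozenset({'witness'}), frozenset({'not_witness'})
        theta = W | N                                 # the full frame of discernment
        geotag_evidence = {W: 0.6, theta: 0.4}        # hypothetical mass assignments
        image_evidence = {W: 0.5, N: 0.2, theta: 0.3}
        print(dempster_combine(geotag_evidence, image_evidence))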

  17. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-05-01

    We report a new headspace analytical method in which multiple headspace extraction is incorporated into the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
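
    In multiple headspace extraction, consecutive peak areas decay geometrically, so ln(A_i) is linear in the extraction index and the total analyte response follows from a geometric-series sum; a sketch under that standard MHE model with hypothetical areas:

        import numpy as np

        # Hypothetical consecutive headspace-extraction peak areas for one sample
        areas = np.array([1000.0, 720.0, 518.0, 373.0])
        i = np.arange(1, areas.size + 1)

        # Fit ln(A_i) = ln(A_1) - q * (i - 1); the per-step decay ratio is exp(-q)
        slope, intercept = np.polyfit(i - 1, np.log(areas), 1)
        ratio = np.exp(slope)

        # Geometric-series sum over all (including unmeasured) extractions
        total_area = np.exp(intercept) / (1.0 - ratio)
        print(f"decay ratio {ratio:.3f}, extrapolated total area {total_area:.0f}")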

  18. A solvent-extraction module for cyclotron production of high-purity technetium-99m.

    PubMed

    Martini, Petra; Boschi, Alessandra; Cicoria, Gianfranco; Uccelli, Licia; Pasquali, Micòl; Duatti, Adriano; Pupillo, Gaia; Marengo, Mario; Loriggiola, Massimo; Esposito, Juan

    2016-12-01

    The design and fabrication of a fully-automated, remotely controlled module for the extraction and purification of technetium-99m (Tc-99m), produced by proton bombardment of enriched Mo-100 metallic targets in a low-energy medical cyclotron, is described here. After dissolution of the irradiated solid target in hydrogen peroxide, Tc-99m was obtained in the chemical form of pertechnetate (99mTcO4-), in high radionuclidic and radiochemical purity, by solvent extraction with methyl ethyl ketone (MEK). The extraction process was accomplished inside a glass column-shaped vial especially designed to allow easy automation of the whole procedure. Recovery yields were always >90% of the loaded activity. The final pertechnetate saline solution (Na99mTcO4), purified using the automated module described here, is within the Pharmacopoeia quality control parameters and is therefore a valid alternative to generator-produced 99mTc. The resulting automated module is cost-effective and easily replicable for in-house production of high-purity Tc-99m by cyclotrons. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. What Does Corpus Linguistics Have to Offer to Language Assessment?

    ERIC Educational Resources Information Center

    Xi, Xiaoming

    2017-01-01

    In recent years, continuing advances in technology have increased the capacity to automate the extraction of a range of linguistic features of texts and thus have provided the impetus for the substantial growth of corpus linguistics. While corpus linguistic tools and methods have been used extensively in second language learning research, they…

  20. Automated GC-MS analysis of free amino acids in biological fluids.

    PubMed

    Kaspar, Hannelore; Dettmer, Katja; Gronwald, Wolfram; Oefner, Peter J

    2008-07-15

    A gas chromatography-mass spectrometry (GC-MS) method was developed for the quantitative analysis of free amino acids as their propyl chloroformate derivatives in biological fluids. Derivatization with propyl chloroformate is carried out directly in the biological samples without prior protein precipitation or solid-phase extraction of the amino acids, thereby allowing automation of the entire procedure, including addition of reagents, extraction and injection into the GC-MS. The total analysis time was 30 min and 30 amino acids could be reliably quantified using 19 stable isotope-labeled amino acids as internal standards. Limits of detection (LOD) and lower limits of quantification (LLOQ) were in the range of 0.03-12 microM and 0.3-30 microM, respectively. The method was validated using a certified amino acid standard and reference plasma, and its applicability to different biological fluids was shown. Intra-day precision for the analysis of human urine, blood plasma, and cell culture medium was 2.0-8.8%, 0.9-8.3%, and 2.0-14.3%, respectively, while the inter-day precision for human urine was 1.5-14.1%.

  1. Automated tracking of a figure skater by using PTZ cameras

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    2009-08-01

    In this paper, a system for automated real-time tracking of a figure skater moving on an ice rink by using PTZ cameras is presented. The system is intended to support skating training, for example, as a tool for recording and evaluating motion performances. In the processing procedure of the system, the ice rink region is first extracted from a video image by a region growing method; then one of the hole components in the obtained rink region is extracted as the skater region. If no hole component exists, the skater region is estimated from horizontal and vertical intensity projections of the rink region. Each camera is automatically panned and/or tilted so as to keep the skater region almost at the center of the image, and also zoomed so as to keep the height of the skater region within an appropriate range. In experiments using 5 practical videos of skating, the extraction rate of the skater region was almost 90%, and tracking with camera control was successful in almost all of the cases.
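
    When the skater leaves no hole in the rink mask, the row and column intensity projections still dip where the (darker) skater sits, so the skater region can be localized from the projection extrema. A toy numpy sketch with hypothetical values:

        import numpy as np

        rng = np.random.default_rng(8)
        rink = np.full((240, 320), 200.0) + rng.normal(0, 2, (240, 320))  # bright ice
        rink[100:140, 180:200] = 60.0          # darker skater region

        # Horizontal and vertical intensity projections of the rink region
        col_proj = rink.mean(axis=0)
        row_proj = rink.mean(axis=1)

        # The skater shows up as the minimum of each projection
        cx, cy = int(np.argmin(col_proj)), int(np.argmin(row_proj))
        print("estimated skater center (x, y):", cx, cy)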

  2. Increasing productivity for the analysis of trace contaminants in food by gas chromatography-mass spectrometry using automated liner exchange, backflushing and heart-cutting.

    PubMed

    David, Frank; Tienpont, Bart; Devos, Christophe; Lerch, Oliver; Sandra, Pat

    2013-10-25

    Laboratories focusing on residue analysis in food are continuously seeking to increase sample throughput by minimizing sample preparation. Generic sample extraction methods such as QuEChERS lack selectivity and consequently extracts are not free from non-volatile material that contaminates the analytical system. Co-extracted matrix constituents interfere with target analytes, even if highly sensitive and selective GC-MS/MS is used. A number of GC approaches are described that can be used to increase laboratory productivity. These techniques include automated inlet liner exchange and column backflushing for preservation of the performance of the analytical system and heart-cutting two-dimensional GC for increasing sensitivity and selectivity. The application of these tools is illustrated by the analysis of pesticides in vegetables and fruits, PCBs in milk powder and coplanar PCBs in fish. It is demonstrated that considerable increase in productivity can be achieved by decreasing instrument down-time, while analytical performance is equal or better compared to conventional trace contaminant analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. A novel method for automated tracking and quantification of adult zebrafish behaviour during anxiety.

    PubMed

    Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh

    2016-09-15

    Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open-source software for behavioural analysis. Here we propose a novel method, which we call ZebraTrack. It comprises a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using the open-source ImageJ software, and a workflow for extracting behavioural endpoints. Our ImageJ algorithm gives users control at key steps while keeping the tracking automated, without requiring the installation of external plugins. We validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. In agreement with established findings, our results showed that during state anxiety zebrafish travelled a reduced distance and exhibited increased thigmotaxis and more freezing events. Furthermore, we propose a representation of both the spatial and temporal distribution of choice-based behaviour, which simple videograms currently cannot capture. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software such as EthoVision XT. We have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.
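
    The endpoints reported above (distance travelled, thigmotaxis, freezing) are all derivable from the centroid trajectory that the tracking step produces. The following sketch shows one plausible way to compute them; the speed threshold and the wall-margin definition are illustrative assumptions, not ZebraTrack's actual parameters.

        import numpy as np

        def behavioural_endpoints(xy, fps, tank_wh, wall_margin=0.1, freeze_speed=0.5):
            """Derive simple endpoints from a centroid trajectory.

            xy: (N, 2) array of centroid positions in cm, one row per frame.
            tank_wh: (width, height) of the arena in cm.
            wall_margin: fraction of the arena treated as the thigmotaxis zone.
            freeze_speed: speed (cm/s) below which a frame counts as freezing.
            """
            steps = np.diff(xy, axis=0)
            dist = np.hypot(steps[:, 0], steps[:, 1])
            total_distance = dist.sum()
            speed = dist * fps
            freezing_fraction = np.mean(speed < freeze_speed)
            w, h = tank_wh
            mx, my = wall_margin * w, wall_margin * h
            near_wall = ((xy[:, 0] < mx) | (xy[:, 0] > w - mx) |
                         (xy[:, 1] < my) | (xy[:, 1] > h - my))
            thigmotaxis_fraction = near_wall.mean()
            return total_distance, freezing_fraction, thigmotaxis_fraction

        rng = np.random.default_rng(0)
        track = np.cumsum(rng.normal(0, 0.2, size=(300, 2)), axis=0) + 15.0
        print(behavioural_endpoints(track, fps=30, tank_wh=(30.0, 30.0)))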

  4. Merging Dietary Assessment with the Adolescent Lifestyle

    PubMed Central

    Schap, TusaRebecca E; Zhu, Fengqing M; Delp, Edward J; Boushey, Carol J

    2013-01-01

    The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users would take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera (e.g., Apple iPhone, Google Nexus One, Apple iPod Touch). Once the images are taken, they are transferred to a back-end server for automated analysis. The first step in this process, image analysis (i.e., segmentation, feature extraction, and classification), allows for automated food identification. Portion size estimation is also automated via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies (FNDDS) to provide a detailed diet analysis for use in epidemiologic or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarizes the system design and the evidence-based development of image-based methods for dietary assessment among children. PMID:23489518

  5. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying automated breast ultrasound in addition to mammography in women with dense breasts can lead to the additional detection of small, early-stage breast cancers that are occult in the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from automated breast ultrasound systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach, and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.
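
    As a rough illustration of the pipeline above (per-voxel multi-scale features plus a boosting classifier), here is a sketch using scikit-learn. GentleBoost itself is not in scikit-learn, so AdaBoost stands in for it; the toy volume, the scales, and the labels are invented for the example.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.ensemble import AdaBoostClassifier

        def multiscale_features(volume, scales=(1, 2, 4)):
            """Per-voxel features: intensity smoothed at several scales."""
            feats = [gaussian_filter(volume.astype(float), s).ravel() for s in scales]
            return np.stack(feats, axis=1)

        # Toy volume with a bright blob standing in for the nipple region.
        rng = np.random.default_rng(1)
        vol = rng.normal(0, 1, (32, 32, 32))
        vol[14:18, 14:18, 14:18] += 4.0
        X = multiscale_features(vol)
        labels = np.zeros(vol.shape, dtype=int)
        labels[14:18, 14:18, 14:18] = 1
        y = labels.ravel()

        clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
        prob = clf.predict_proba(X)[:, 1].reshape(vol.shape)
        print("most probable voxel:", np.unravel_index(prob.argmax(), vol.shape))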

  6. Computer-Aided Diagnosis of Acute Lymphoblastic Leukaemia

    PubMed Central

    2018-01-01

    Leukaemia is a form of blood cancer which affects the white blood cells and damages the bone marrow. Usually a complete blood count (CBC) and bone marrow aspiration are used to diagnose acute lymphoblastic leukaemia. It can be a fatal disease if not diagnosed at an early stage. In practice, manual microscopic evaluation of stained sample slides is used for the diagnosis of leukaemia. But manual diagnostic methods are time-consuming, less accurate, and prone to errors due to various human factors like stress and fatigue. Therefore, different automated systems have been proposed to overcome the shortcomings of the manual diagnostic methods. In the recent past, several computer-aided leukaemia diagnosis methods have been presented. These automated systems are fast, reliable, and accurate compared to manual diagnosis. This paper presents a review of computer-aided diagnosis systems with regard to their methodologies, including enhancement, segmentation, feature extraction, classification, and accuracy. PMID:29681996

  7. Evaluation and comparison of FTA card and CTAB DNA extraction methods for non-agricultural taxa

    PubMed Central

    Siegel, Chloe S.; Stevenson, Florence O.; Zimmer, Elizabeth A.

    2017-01-01

    Premise of the study: An efficient, effective DNA extraction method is necessary for comprehensive analysis of plant genomes. This study analyzed the quality of DNA obtained using paper FTA cards prepared directly in the field when compared to the more traditional cetyltrimethylammonium bromide (CTAB)–based extraction methods from silica-dried samples. Methods: DNA was extracted using FTA cards according to the manufacturer’s protocol. In parallel, CTAB-based extractions were done using the automated AutoGen DNA isolation system. DNA quality for both methods was determined for 15 non-agricultural species collected in situ, by gel separation, spectrophotometry, fluorometry, and successful amplification and sequencing of nuclear and chloroplast gene markers. Results: The FTA card extraction method yielded less concentrated, but also less fragmented samples than the CTAB-based technique. The card-extracted samples provided DNA that could be successfully amplified and sequenced. The FTA cards are also useful because the collected samples do not require refrigeration, extensive laboratory expertise, or as many hazardous chemicals as extractions using the CTAB-based technique. Discussion: The relative success of the FTA card method in our study suggested that this method could be a valuable tool for studies in plant population genetics and conservation biology that may involve screening of hundreds of individual plants. The FTA cards, like the silica gel samples, do not contain plant material capable of propagation, and therefore do not require permits from the U.S. Department of Agriculture (USDA) Animal and Plant Health Inspection Service (APHIS) for transportation. PMID:28224056

  8. Extracting Characteristics of the Study Subjects from Full-Text Articles

    PubMed Central

    Demner-Fushman, Dina; Mork, James G

    2015-01-01

    Characteristics of the subjects of biomedical research are important in determining if a publication describing the research is relevant to a search. To facilitate finding relevant publications, MEDLINE citations provide Medical Subject Headings that describe the subjects’ characteristics, such as their species, gender, and age. We seek to improve the recommendation of these headings by the Medical Text Indexer (MTI) that supports manual indexing of MEDLINE. To that end, we explore the potential of the full text of the publications. Using simple recall-oriented rule-based methods we determined that adding sentences extracted from the methods sections and captions to the abstracts prior to MTI processing significantly improved recall and F1 score with only a slight drop in precision. Improvements were also achieved in directly assigning several headings extracted from the full text. These results indicate the need for further development of automated methods capable of leveraging the full text for indexing. PMID:26958181

  9. A new method for stable lead isotope extraction from seawater.

    PubMed

    Zurbrick, Cheryl M; Gallon, Céline; Flegal, A Russell

    2013-10-24

    A new technique for stable lead (Pb) isotope extraction from seawater is established using Toyopearl AF-Chelate 650M® resin (Tosoh Bioscience LLC). This new method is advantageous because it is semi-automated and relatively fast; in addition it introduces a relatively low blank by minimizing the volume of chemicals used in the extraction. Subsequent analyses by HR ICP-MS have a good relative external precision (2σ) of 3.5‰ for ²⁰⁶Pb/²⁰⁷Pb, while analyses by MC-ICP-MS have a better relative external precision of 0.6‰. However, Pb sample concentrations limit MC-ICP-MS analyses to ²⁰⁶Pb, ²⁰⁷Pb, and ²⁰⁸Pb. The method was validated by processing the common Pb isotope reference material NIST SRM-981 and several GEOTRACES intercalibration samples, followed by analyses by HR ICP-MS, all of which showed good agreement with previously reported values. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Comparison of Boiling and Robotics Automation Method in DNA Extraction for Metagenomic Sequencing of Human Oral Microbes.

    PubMed

    Yamagishi, Junya; Sato, Yukuto; Shinozaki, Natsuko; Ye, Bin; Tsuboi, Akito; Nagasaki, Masao; Yamashita, Riu

    2016-01-01

    The rapid improvement of next-generation sequencing performance now enables us to analyze huge sample sets with more than ten thousand specimens. However, DNA extraction can still be a limiting step in such metagenomic approaches. In this study, we analyzed human oral microbes to compare the performance of three DNA extraction methods: PowerSoil (a method widely used in this field), QIAsymphony (a robotic method), and a simple boiling method. Dental plaque was initially collected from three volunteers in the pilot study and then from 12 volunteers in the follow-up study. Bacterial flora was estimated by sequencing the V4 region of the 16S rRNA gene, followed by species-level profiling. Our results indicate that the efficiency of PowerSoil and QIAsymphony was comparable to that of the boiling method. Therefore, the boiling method may be a promising alternative because of its simplicity, cost-effectiveness, and short handling time. Moreover, this method was reliable for estimating bacterial species and could be used in the future to examine the correlation between oral flora and health status. Nevertheless, differences in DNA extraction efficiency for various bacterial species were observed among the three methods. Based on these findings, there is no "gold standard" for DNA extraction; we suggest that the DNA extraction method be selected on a case-by-case basis, considering the aims and specimens of the study.

  11. Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.

    PubMed

    Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke

    2015-04-01

    Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment-guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The blurring can adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study was to reduce dMRI blurring using image postprocessing. To enhance image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames acquired with a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the sequence. Deblurring was performed on the whole image sequence by deconvolution with the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on classification of Hessian-filtered images, with manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component outperformed the other kernel-estimation options. LDDL consistently improved image contrast and fine-feature conspicuity of the original MRI without introducing artifacts, and the accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can thus be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
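
    The decomposition at the heart of LDDL splits the frame matrix into a low-rank background plus a sparse component. A basic robust-PCA scheme of that kind (inexact augmented Lagrangian with singular-value thresholding) can be sketched as follows; the parameter choices follow common RPCA defaults rather than the paper's implementation, and the dictionary-learning deblurring step is omitted.

        import numpy as np

        def rpca(D, lam=None, n_iter=200, tol=1e-7):
            """Split a frame matrix D (pixels x frames) into low-rank L + sparse S
            by iterative singular-value thresholding (a basic RPCA scheme)."""
            m, n = D.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            norm_D = np.linalg.norm(D)
            mu = 1.25 / np.linalg.norm(D, 2)          # spectral norm
            Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
            L = np.zeros_like(D); S = np.zeros_like(D)
            for _ in range(n_iter):
                # Low-rank update: shrink singular values of (D - S + Y/mu).
                U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
                # Sparse update: soft-threshold the residual.
                R = D - L + Y / mu
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
                Z = D - L - S
                Y += mu * Z
                if np.linalg.norm(Z) / norm_D < tol:
                    break
            return L, S

        # Toy data: rank-1 "background" plus sparse vessel-like detail.
        rng = np.random.default_rng(0)
        base = np.outer(rng.normal(size=100), np.ones(40))
        detail = (rng.random((100, 40)) < 0.02) * 2.0
        L, S = rpca(base + detail)
        print(np.linalg.matrix_rank(np.round(L, 6)), np.abs(S).max())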

  12. Automated Ontology Generation Using Spatial Reasoning

    NASA Astrophysics Data System (ADS)

    Coalter, Alton; Leopold, Jennifer L.

    Recently there has been much interest in using ontologies to facilitate knowledge representation, integration, and reasoning. Correspondingly, the extent of the information embodied by an ontology is increasing beyond the conventional is_a and part_of relationships. To address these requirements, a vast amount of digitally available information may need to be considered when building ontologies, prompting a desire for software tools that automate at least part of the process. The main efforts in this direction have involved textual information retrieval and extraction methods. For some domains, the basic relationships can be enriched further by the analysis of 2D and/or 3D images, for which image processing algorithms are more appropriate than textual analysis methods. Herein we present an algorithm that, given a collection of 3D image files, utilizes Qualitative Spatial Reasoning (QSR) to automate the creation of an ontology for the objects represented by the images, relating the objects in terms of is_a and part_of relationships and also through unambiguous Region Connection Calculus (RCC) relations.
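
    As a toy illustration of the idea, spatial containment between segmented 3D objects can already suggest part_of candidates before full RCC reasoning is applied. The sketch below uses axis-aligned bounding boxes as a crude stand-in for the qualitative spatial relations the paper computes; all names are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Box:
            """Axis-aligned 3D bounding box of a segmented object."""
            name: str
            lo: tuple  # (x, y, z) minimum corner
            hi: tuple  # (x, y, z) maximum corner

        def contains(a: Box, b: Box) -> bool:
            """True if box b lies entirely inside box a."""
            return all(al <= bl and bh <= ah
                       for al, ah, bl, bh in zip(a.lo, a.hi, b.lo, b.hi))

        def part_of_pairs(boxes):
            """Propose part_of(b, a) whenever b is spatially contained in a."""
            return [(b.name, a.name) for a in boxes for b in boxes
                    if a is not b and contains(a, b)]

        skull = Box("skull", (0, 0, 0), (10, 10, 10))
        orbit = Box("orbit", (2, 2, 2), (4, 4, 4))
        print(part_of_pairs([skull, orbit]))  # [('orbit', 'skull')]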

  13. Ingenious Snake: An Adaptive Multi-Class Contours Extraction

    NASA Astrophysics Data System (ADS)

    Li, Baolin; Zhou, Shoujun

    2018-04-01

    Active contour models (ACMs) play an important role in computer vision and medical image applications. Traditional ACMs extract a single class of object contours; the simultaneous extraction of multiple classes of contours of interest (i.e., both closed and open-ended contours) has so far remained unsolved. Therefore, a novel ACM named “Ingenious Snake” is proposed to adaptively extract these contours. First, ridge points are extracted based on local phase measurement of the gradient vector flow field, so that the subsequent ridgeline initialization is automated and fast. Second, contour deformation and evolution are implemented with the Ingenious Snake. In the experiments, the initialization, deformation and evolution results are compared with those of existing methods; the quantitative evaluation of the structure extraction is satisfactory with respect to effectiveness and accuracy.
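
    The Ingenious Snake itself is not publicly available as code, but the general ACM workflow it extends (initialize a contour, then let it deform toward object boundaries) can be demonstrated with the classic Kass snake in scikit-image; the parameters here are illustrative only.

        import numpy as np
        from skimage import draw, filters, segmentation

        # Synthetic image: a bright disc whose boundary we want to recover.
        img = np.zeros((128, 128))
        rr, cc = draw.disk((64, 64), 30)
        img[rr, cc] = 1.0
        img = filters.gaussian(img, 3)

        # Circular initial contour placed around the object.
        t = np.linspace(0, 2 * np.pi, 200)
        init = np.column_stack([64 + 45 * np.sin(t), 64 + 45 * np.cos(t)])

        # Deform the contour toward the disc boundary (classic ACM energy).
        snake = segmentation.active_contour(img, init, alpha=0.015, beta=10,
                                            gamma=0.001)
        print(snake.shape)  # (200, 2) refined contour points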

  14. Extraction of Total Nucleic Acids From Ticks for the Detection of Bacterial and Viral Pathogens

    PubMed Central

    Crowder, Chris D.; Rounds, Megan A.; Phillipson, Curtis A.; Picuri, John M.; Matthews, Heather E.; Halverson, Justina; Schutzer, Steven E.; Ecker, David J.; Eshoo, Mark W.

    2010-01-01

    Ticks harbor numerous bacterial, protozoal, and viral pathogens that can cause serious infections in humans and domestic animals. Active surveillance of the tick vector can provide insight into the frequency and distribution of important pathogens in the environment. Nucleic acid-based detection of tick-borne bacterial, protozoan, and viral pathogens requires the extraction of both DNA and RNA (total nucleic acids) from ticks. Traditional methods for nucleic acid extraction are limited to extracting either DNA or RNA from a sample. Here we present a simple bead-beating-based protocol for the extraction of DNA and RNA from a single tick and show detection of Borrelia burgdorferi and Powassan virus from individual, infected Ixodes scapularis ticks. We determined the expected yields of total nucleic acids by this protocol for a variety of adult tick species. The method is applicable to a variety of arthropod vectors, including fleas and mosquitoes, and was partially automated on a liquid handling robot. PMID:20180313

  15. Automated identification of Monogeneans using digital image processing and K-nearest neighbour approaches.

    PubMed

    Yousef Kalafi, Elham; Tan, Wooi Boon; Town, Christopher; Dhillon, Sarinder Kaur

    2016-12-22

    Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on the morphological characters of their haptoral bars, anchors, marginal hooks, reproductive parts (male and female copulatory organs) and soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012), (ISDA; 457-462, 2011), (J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of the hard parts of the haptoral organs, such as bars and anchors, were used to develop a fully automated identification technique for monogenean species by implementing image processing techniques and machine learning methods. Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features; 50% of the dataset was used for training and the other 50% for testing in the system evaluation. Our approach demonstrated an overall classification accuracy of 90%, and leave-one-out (LOO) cross-validation gave an accuracy of 91.25%. The methods presented in this study enable fast and accurate fully automated classification of monogeneans at the species level. In future studies, more classes will be included in the model, the time to capture the monogenean images will be reduced, and improvements in feature extraction and selection will be implemented.
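
    The classification stage described above (KNN on extracted shape features, a 50/50 split, and leave-one-out validation) maps directly onto a few lines of scikit-learn. The sketch below uses the iris data purely as a stand-in feature matrix; in the study, each row would instead be shape descriptors measured from the haptoral hard parts.

        from sklearn.datasets import load_iris
        from sklearn.model_selection import (LeaveOneOut, cross_val_score,
                                             train_test_split)
        from sklearn.neighbors import KNeighborsClassifier

        # Stand-in feature matrix: one row per specimen, one column per feature.
        X, y = load_iris(return_X_y=True)

        # 50/50 train/test split, mirroring the evaluation in the abstract.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, random_state=0, stratify=y)
        knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
        print("hold-out accuracy:", knn.score(X_te, y_te))

        # Leave-one-out cross-validation, as used to validate the system.
        loo_acc = cross_val_score(KNeighborsClassifier(3), X, y,
                                  cv=LeaveOneOut()).mean()
        print("LOO accuracy:", loo_acc)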

  16. BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities.

    PubMed

    Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark

    2016-11-01

    Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in healthy elderly subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information and for local spatial intensity averaging, as well as for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible, so that users can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks, and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts, and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test to evaluate the robustness of BIANCA and compared its performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Fully-automated identification of fish species based on otolith contour: using short-time Fourier transform and discriminant analysis (STFT-DA).

    PubMed

    Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching

    2016-01-01

    Background. Fish species may be identified based on their unique otolith shape or contour, and several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, there has been no fully-automated species identification model with an accuracy higher than 80%. The purpose of the current study was to develop a fully-automated model, based on otolith contours, that identifies fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. The short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant analysis (DA) was used as the classification technique to train and test the model on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% in all cases. In addition, the effects of the STFT variables on the performance of the identification model were explored. Conclusions. The short-time Fourier transform could determine important features of the otolith outlines, and the fully-automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model codes can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The model is flexible enough to accommodate more species and families in future studies.
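
    A plausible reading of the STFT-DA pipeline is: turn each otolith outline into a 1-D contour signature, take the magnitude of its short-time Fourier transform as the feature vector, and train a discriminant classifier. The sketch below follows that reading with synthetic outlines; the window length and the two-class toy data are assumptions, not the study's settings.

        import numpy as np
        from scipy.signal import stft
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def stft_contour_features(radii, nperseg=64):
            """radii: 1-D contour signature (centroid distance vs. arc position).
            Returns the flattened STFT magnitude as a feature vector."""
            _, _, Z = stft(radii, nperseg=nperseg)
            return np.abs(Z).ravel()

        # Toy data: two "species" whose outlines differ in lobe frequency.
        rng = np.random.default_rng(2)
        theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        def outline(lobes):
            return 1.0 + 0.1 * np.cos(lobes * theta) + rng.normal(0, 0.01, theta.size)

        X = np.array([stft_contour_features(outline(k)) for k in [5] * 20 + [8] * 20])
        y = np.array([0] * 20 + [1] * 20)

        lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on half
        print("test accuracy:", lda.score(X[1::2], y[1::2]))     # test on the rest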

  18. Discovering Indicators of Successful Collaboration Using Tense: Automated Extraction of Patterns in Discourse

    ERIC Educational Resources Information Center

    Thompson, Kate; Kennedy-Clark, Shannon; Wheeler, Penny; Kelly, Nick

    2014-01-01

    This paper describes a technique for locating indicators of success within the data collected from complex learning environments, proposing an application of e-research to access learner processes and measure and track group progress. The technique combines automated extraction of tense and modality via parts-of-speech tagging with a visualisation…
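
    The record is truncated in the source, but the core technique it names (extracting tense and modality via part-of-speech tagging) is straightforward to sketch with NLTK's Penn Treebank tags; the tag groupings below are our own simplification.

        import nltk

        # One-off downloads if missing:
        # nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

        def tense_modality_profile(utterance):
            """Count past/present-tense verbs and modal auxiliaries in one
            discourse turn, using Penn Treebank tags from NLTK's POS tagger."""
            tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(utterance))]
            return {
                "past": sum(t in ("VBD", "VBN") for t in tags),
                "present": sum(t in ("VBP", "VBZ", "VBG") for t in tags),
                "modal": sum(t == "MD" for t in tags),  # would, could, should, ...
            }

        print(tense_modality_profile(
            "We should move the probe before the water rises."))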

  19. Automating concept identification in the electronic medical record: an experiment in extracting dosage information.

    PubMed Central

    Evans, D. A.; Brownlow, N. D.; Hersh, W. R.; Campbell, E. M.

    1996-01-01

    We discuss the development and evaluation of an automated procedure for extracting drug-dosage information from clinical narratives. The process was developed rapidly using existing technology and resources, including categories of terms from UMLS96. Evaluations over a large training set and a smaller test set of medical records demonstrate an approximately 80% rate of exact and partial matches on target phrases, with few false positives and a modest rate of false negatives. The results suggest a strategy for automating general concept identification in electronic medical records. PMID:8947694
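
    A recall-oriented extractor of this kind is typically a layered regular expression over drug, dose, unit, and frequency. The pattern below is a hypothetical illustration of the approach, not the UMLS-backed procedure the paper evaluated.

        import re

        # Hypothetical pattern: drug name followed by a dose, unit, and frequency.
        DOSAGE = re.compile(
            r"(?P<drug>[A-Za-z]+)\s+"
            r"(?P<dose>\d+(?:\.\d+)?)\s*"
            r"(?P<unit>mg|mcg|g|mL|units?)\s*"
            r"(?P<freq>q\.?\s*\d+\s*h|[bt]\.?i\.?d\.?|daily)?",
            re.IGNORECASE,
        )

        note = "Continue metoprolol 25 mg b.i.d. and start aspirin 81 mg daily."
        for m in DOSAGE.finditer(note):
            print({k: v for k, v in m.groupdict().items() if v})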

  20. Automated genomic DNA purification options in agricultural applications using MagneSil paramagnetic particles

    NASA Astrophysics Data System (ADS)

    Bitner, Rex M.; Koller, Susan C.

    2002-06-01

    The automated high-throughput purification of genomic DNA from plant material can be performed using MagneSil paramagnetic particles on the Beckman-Coulter FX, BioMek 2000, and Tecan Genesis robots. Similar automated methods are available for DNA purification from animal blood. These methods eliminate organic extractions, lengthy incubations and cumbersome filter plates, and the DNA is suitable for applications such as PCR and RAPD analysis. Methods are described for processing traditionally difficult samples, such as those containing large amounts of polyphenolics or oils, while still maintaining a high level of DNA purity. The robotic protocols have been optimized for agricultural applications such as marker-assisted breeding, seed-quality testing, and SNP discovery and scoring. In addition to high-yield purification of DNA from plant samples or animal blood, the use of Promega's DNA-IQ purification system is also described. This method allows for the purification of a narrow range of DNA regardless of the amount of additional DNA present in the initial sample. This simultaneous Isolation and Quantification of DNA allows the DNA to be used directly in applications such as PCR, SNP analysis, and RAPD, without the need for separate quantitation of the DNA.

  1. Fast fringe pattern phase demodulation using FIR Hilbert transformers

    NASA Astrophysics Data System (ADS)

    Gdeisat, Munther; Burton, David; Lilley, Francis; Arevalillo-Herráez, Miguel

    2016-01-01

    This paper suggests the use of FIR Hilbert transformers to extract the phase of fringe patterns. The method is computationally faster than any known spatial method that produces wrapped phase maps. Moreover, the algorithm does not require the adjustment of any parameters that depend on the specific fringe pattern being processed or on the particular setup of the optical fringe projection system, which makes it particularly suitable for full algorithmic automation. The accuracy and validity of the suggested method have been tested using both computer-generated and real fringe patterns. The algorithm is proposed for its advantages in computational processing speed, as it is the fastest available method for extracting the wrapped phase information from a fringe pattern.
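
    In outline, the method filters each fringe line with an FIR Hilbert transformer to obtain the quadrature signal, then takes the four-quadrant arctangent to get the wrapped phase. The sketch below follows that outline using SciPy's Parks-McClellan design; the tap count, passband, and toy fringe are our own choices, not the paper's.

        import numpy as np
        from scipy.signal import lfilter, remez

        # Design a 101-tap FIR Hilbert transformer (passband 0.05..0.45 of fs).
        taps = remez(101, [0.05, 0.45], [1.0], type="hilbert", fs=1.0)
        delay = (len(taps) - 1) // 2  # group delay of a linear-phase FIR filter

        # One row of a fringe pattern: a carrier whose phase encodes height.
        x = np.arange(1024)
        phase_true = 0.3 * np.sin(2 * np.pi * x / 400)
        fringe = np.cos(2 * np.pi * 0.1 * x + phase_true)

        # Quadrature component via the Hilbert filter; align for group delay.
        quad = lfilter(taps, 1.0, fringe)[delay:]
        inphase = fringe[: len(quad)]
        wrapped = np.arctan2(quad, inphase)  # wrapped phase (one fringe line)
        print(wrapped[:5])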

  2. Electrically evoked compound action potentials artefact rejection by independent component analysis: procedure automation.

    PubMed

    Akhoun, Idrick; McKay, Colette; El-Deredy, Wael

    2015-01-15

    Independent component analysis (ICA) successfully separates electrically evoked compound action potentials (ECAPs) from the stimulation artefact and noise (ECAP-ICA; Akhoun et al., 2013). This paper shows how to automate the ECAP-ICA artefact cancellation process. Raw ECAPs without artefact rejection were consecutively recorded for each stimulation condition from at least 8 intra-cochlear electrodes. First, amplifier-saturated recordings were discarded, and the data from different stimulus conditions (different current levels) were concatenated temporally. The key aspect of the automation procedure was the sequential deductive source categorisation after ICA was applied with a restriction to 4 sources. The stereotypical aspect of the 4 sources enables their automatic classification, on theoretical and empirical grounds, as two artefact components, one noise component and the sought ECAP. The automatic procedure was tested using 8 cochlear implant (CI) users and one to four stimulus electrodes. The artefact and noise sources were successively identified and discarded, leaving the ECAP as the remaining source. The automated ECAP-ICA procedure extracted the correct ECAPs, compared to the standard clinical forward-masking paradigm, in 22 out of 26 cases. ECAP-ICA does not require extracting the ECAP from a combination of distinct buffers, as is the case with regular methods, and it avoids the possible bias of traditional artefact rejections such as alternate-polarity or forward-masking paradigms. The ECAP-ICA procedure bears clinical relevance, for example as the artefact-rejection sub-module of automated ECAP-threshold detection techniques, which are common features of CI clinical fitting software. Copyright © 2014. Published by Elsevier B.V.
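
    The separation step itself (ICA restricted to four sources over concatenated multi-electrode buffers) can be sketched with scikit-learn's FastICA as below; the synthetic sources and mixing are invented, and the automated source-categorisation rules that follow ICA in the paper are not reproduced.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Toy stand-ins for the concatenated raw buffers: 8 intra-cochlear
        # electrodes observing 4 underlying sources.
        rng = np.random.default_rng(3)
        t = np.linspace(0, 1, 2000)
        sources = np.array([
            np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t),   # ECAP-like response
            np.sign(np.sin(2 * np.pi * 200 * t)),          # stimulation artefact
            np.exp(-20 * t),                               # decaying artefact
            rng.normal(0, 1, t.size),                      # noise
        ])
        mixing = rng.normal(0, 1, (8, 4))
        recordings = mixing @ sources                      # (8, n_samples)

        # Restrict ICA to 4 sources, as in the automated ECAP-ICA procedure.
        ica = FastICA(n_components=4, random_state=0)
        est_sources = ica.fit_transform(recordings.T).T    # (4, n_samples)
        print(est_sources.shape)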

  3. Automated processing of whole blood samples for the determination of immunosuppressants by liquid chromatography tandem-mass spectrometry.

    PubMed

    Vogeser, Michael; Spöhrer, Ute

    2006-01-01

    Liquid chromatography tandem-mass spectrometry (LC-MS/MS) is an efficient technology for routine determination of immunosuppressants in whole blood; however, time-consuming manual sample preparation remains a significant limitation of this technique. Using a commercially available robotic pipetting system (Tecan Freedom EVO), we developed an automated sample-preparation protocol for quantification of tacrolimus in whole blood by LC-MS/MS. Barcode reading, sample resuspension, transfer of whole blood aliquots into a deep-well plate, addition of internal standard solution, mixing, and protein precipitation by addition of an organic solvent is performed by the robotic system. After centrifugation of the plate, the deproteinized supernatants are submitted to on-line solid phase extraction, using column switching prior to LC-MS/MS analysis. The only manual actions within the entire process are decapping of the tubes, and transfer of the deep-well plate from the robotic system to a centrifuge and finally to the HPLC autosampler. Whole blood pools were used to assess the reproducibility of the entire analytical system for measuring tacrolimus concentrations. A total coefficient of variation of 1.7% was found for the entire automated analytical process (n=40; mean tacrolimus concentration, 5.3 μg/L). Close agreement between tacrolimus results obtained after manual and automated sample preparation was observed. The analytical system described here, comprising automated protein precipitation, on-line solid phase extraction and LC-MS/MS analysis, is convenient and precise, and minimizes hands-on time and the risk of mistakes in the quantification of whole blood immunosuppressant concentrations compared to conventional methods.

  4. Automated multi-plug filtration cleanup for liquid chromatographic-tandem mass spectrometric pesticide multi-residue analysis in representative crop commodities.

    PubMed

    Qin, Yuhong; Zhang, Jingru; Zhang, Yuan; Li, Fangbing; Han, Yongtao; Zou, Nan; Xu, Haowei; Qian, Meiyuan; Pan, Canping

    2016-09-02

    An automated multi-plug filtration cleanup (m-PFC) method for modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) extracts was developed. The automated device was designed to reduce the labor-intensive manual workload of the cleanup steps; it can accurately control the volume and speed of the pulling and pushing cycles. In this work, m-PFC was based on multi-walled carbon nanotubes (MWCNTs) mixed with other sorbents and anhydrous magnesium sulfate (MgSO4) in a packed tip, for the analysis of pesticide multi-residues in crop commodities followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) detection. It was validated by analyzing 25 pesticides in six representative matrices spiked at two concentration levels, 10 and 100 μg/kg. Salts, sorbents, the m-PFC procedure, the automated pulling and pushing volumes, and the pulling and pushing speeds were optimized for each matrix. After optimization, two general automated m-PFC methods were introduced for relatively simple (apple, citrus fruit, peanut) and relatively complex (spinach, leek, green tea) matrices. Spike recoveries were between 83% and 108%, with 1-14% RSD, for most analytes in the tested matrices. Matrix-matched calibrations were performed, with coefficients of determination >0.997 between concentration levels of 10 and 1000 μg/kg. The developed method was successfully applied to the determination of pesticide residues in market samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Information Fusion for Feature Extraction and the Development of Geospatial Information

    DTIC Science & Technology

    2004-07-01

    ...of automated processing. 2. Requirements for Geospatial Information: Accurate, timely geospatial information is critical for many military... this evaluation illustrates some of the difficulties in comparing manual and automated processing results (figure 5). The automated delineation of...

  6. Validation of an automated colony counting system for group A Streptococcus.

    PubMed

    Frost, H R; Tsoi, S K; Baker, C A; Laho, D; Sanderson-Smith, M L; Steer, A C; Smeesters, P R

    2016-02-08

    The practice of counting bacterial colony-forming units on agar plates has long been used to estimate the concentration of live bacteria in culture. However, because this measurement technique is laborious and potentially error-prone, an alternative method is desirable. Recent technological advancements have facilitated the development of automated colony counting systems, which reduce the errors introduced during manual counting and recording of information, with the additional benefit of significantly reducing the time taken to analyse colony counting data. Whilst automated counting procedures have been validated for a number of microorganisms, the process has not been successful for all bacteria, owing to the requirement for a relatively high contrast between bacterial colonies and the growth medium. The purpose of this study was to validate an automated counting system for use with group A Streptococcus (GAS). Twenty-one different GAS strains, representative of major emm-types, were selected for assessment. To introduce the required contrast for automated counting, 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) dye was added to Todd-Hewitt broth with yeast extract (THY) agar. Growth on THY agar with TTC was compared with growth on blood agar and plain THY agar to ensure the dye was not detrimental to bacterial growth. Automated colony counts using a ProtoCOL 3 instrument were compared with manual counts to confirm accuracy over the stages of the growth cycle (latent, mid-log and stationary phases) and in a number of different assays. The average percentage differences between the plating and counting methods were analysed using the Bland-Altman method, with a percentage difference of ±10% set as the cut-off for a critical difference between methods. All strains measured had an average difference of less than 10% when plated on THY agar with TTC. This consistency was also observed over all phases of the growth cycle and when plated in blood following bactericidal assays. The agreement between these methods suggests that an automated colony counting technique for GAS will significantly reduce the time spent counting bacteria, enabling more efficient and accurate measurement of bacterial concentration in culture.
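
    The agreement statistic used above is easy to reproduce: express each automated-minus-manual difference as a percentage of the pairwise mean and compare against the ±10% criterion. A small sketch, with made-up counts:

        import numpy as np

        def bland_altman_pct(manual, automated):
            """Percentage difference between paired counts, Bland-Altman style:
            difference as a percentage of the pairwise mean, with 95% limits
            of agreement."""
            manual = np.asarray(manual, float)
            automated = np.asarray(automated, float)
            pct = 100.0 * (automated - manual) / ((automated + manual) / 2.0)
            bias = pct.mean()
            loa = 1.96 * pct.std(ddof=1)
            return bias, (bias - loa, bias + loa)

        manual = [112, 98, 143, 77, 120, 105]
        auto = [115, 95, 140, 80, 118, 108]
        bias, (lo, hi) = bland_altman_pct(manual, auto)
        print(f"bias {bias:.1f}%, limits of agreement [{lo:.1f}%, {hi:.1f}%]")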

  7. TEES 2.2: Biomedical Event Extraction for Diverse Corpora

    PubMed Central

    2015-01-01

    Background The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. Results The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. Conclusions The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction are presented. PMID:26551925

  8. Determination of perfluorinated chemicals in food and drinking water using high-flow solid-phase extraction and ultra-high performance liquid chromatography/tandem mass spectrometry.

    PubMed

    Chang, Ying-Chia; Chen, Wen-Ling; Bai, Fang-Yu; Chen, Pau-Chung; Wang, Gen-Shuh; Chen, Chia-Yang

    2012-01-01

    For this study, we developed methods for determining ten perfluorinated chemicals in drinking water, milk, fish, beef, and pig liver using high-flow automated solid-phase extraction (SPE) and ultra-high performance liquid chromatography/tandem mass spectrometry. The analytes were separated on a core-shell Kinetex C18 column. The mobile phase was composed of methanol and 10 mM N-methylmorpholine. Milk was digested with 0.5 N potassium hydroxide in Milli-Q water and extracted with an Atlantic HLB disk by automated SPE at flow rates ranging from 70 to 86 mL/min. Drinking water was directly extracted by SPE. Solid food samples were digested in alkaline methanol and their supernatants were diluted and also processed by SPE. The disks were washed with 40% methanol/60% water and then eluted with 0.1% ammonium hydroxide in methanol. Suppression of the signal intensity of most analytes by matrices was lower than 50%; it was generally lower in fish and drinking water and higher in liver. Most quantitative biases and relative standard deviations were lower than 15%. The limits of detection for most analytes were sub-nanograms per liter for drinking water and sub-nanograms per gram for solid food samples. This method greatly shortened the time and labor needed for digestion, SPE, and liquid chromatography, and it has been applied to the analysis of 14 types of food samples. Perfluorooctanoic acid was the most abundant analyte (median 3.2-64 ng/g wet weight), followed by perfluorodecanoic acid (0.7-25 ng/g) and perfluorododecanoic acid (0.6-15 ng/g).

  10. Follicular unit extraction hair transplant automation: options in overcoming challenges of the latest technology in hair restoration with the goal of avoiding the line scar.

    PubMed

    Rashid, Rashid M; Morgan Bicknell, Lindsay T

    2012-09-15

    Follicular unit extraction (FUE) provides many advantages over the strip surgical method of harvesting hair grafts for hair restoration. However, FUE also has its shortcomings: it is a more time-intensive approach, which increases costs, and it is a technically more challenging transplantation technique. In this manuscript, we share approaches used at our center to help minimize or overcome some of the challenges of FUE.

  11. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video-microscopic visualization system, together with an image processing and data extraction method, for detailed in situ quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data include (a) the surface concentration and flux of deposited, attached and detached colloids, (b) the surface concentration and flux of arriving and departing colloids, (c) the distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) the spatial and temporal distributions of deposited colloids.

  12. Automated Methods to Extract Patient New Information from Clinical Notes in Electronic Health Record Systems

    ERIC Educational Resources Information Center

    Zhang, Rui

    2013-01-01

    The widespread adoption of Electronic Health Record (EHR) has resulted in rapid text proliferation within clinical care. Clinicians' use of copying and pasting functions in EHR systems further compounds this by creating a large amount of redundant clinical information in clinical documents. A mixture of redundant information (especially outdated…

  13. Aerial Imagery and LIDAR Data Fusion for Unambiguous Extraction of Adjacent Level-Buildings Footprints

    NASA Astrophysics Data System (ADS)

    Mola Ebrahimi, S.; Arefi, H.; Rasti Veis, H.

    2017-09-01

    This paper presents a new approach to identifying and extracting building footprints from aerial images and LiDAR data. Employing an edge-detection algorithm, our method first extracts the outer boundaries of buildings; then, by taking advantage of the Hough transform and extracting the boundaries of connected buildings within a building block, it extracts the footprints of the buildings in each block. The method first recognizes the predominant orientation of a building block using the Hough transform and rotates the block by the complement of the dominant line's angle, so that the block lies horizontally. Afterwards, a second Hough transform extracts the vertical lines that are candidate building boundaries, yielding the final building footprints within the block. The proposed algorithm was implemented and tested on the urban area of Zeebruges, Belgium (IEEE Contest, 2015). The areas of the extracted footprints were compared with the corresponding areas in the reference data, giving a mean error of 7.43 m². Qualitative and quantitative evaluations suggest that the proposed algorithm yields acceptable results for the automated, precise extraction of building footprints.
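
    The orientation-normalisation step can be sketched with scikit-image's Hough transform: find the dominant line angle, then rotate the block so that orientation becomes horizontal. Sign conventions for the rotation depend on the library's angle and image-coordinate definitions, so the sketch below is indicative rather than a reproduction of the authors' code.

        import numpy as np
        from skimage.transform import hough_line, hough_line_peaks, rotate

        def level_building_block(edge_map):
            """Find the dominant line orientation of a building block with the
            Hough transform and rotate the image so that orientation is
            horizontal (sign may need flipping for y-down image coordinates)."""
            tested = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
            h, angles, dists = hough_line(edge_map, theta=tested)
            _, peak_angles, _ = hough_line_peaks(h, angles, dists, num_peaks=1)
            dominant = peak_angles[0]  # angle of the dominant line's normal
            # Rotate by the complement so the dominant edges become horizontal.
            return rotate(edge_map.astype(float), np.degrees(dominant) + 90.0)

        # Toy edge map: one slanted "roof line".
        img = np.zeros((100, 100))
        rr = np.arange(20, 80)
        img[rr, (0.5 * rr + 10).astype(int)] = 1
        levelled = level_building_block(img)
        print(levelled.shape)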

  14. Three-Dimensional Reconstruction of the Virtual Plant Branching Structure Based on Terrestrial LIDAR Technologies and L-System

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yang, Y.; Yang, X.

    2018-04-01

    To extract the productions of specific branching plants effectively and realize their 3D reconstruction, terrestrial LiDAR data was used as the source for production extraction, and a 3D reconstruction method based on terrestrial LiDAR technologies combined with the L-system is proposed in this article. The topological structure of the plant architecture was extracted from the point cloud data of the target plant with a space-level segmentation mechanism. Subsequently, L-system productions were obtained, and the structural parameters and production rules of the branches fitting the given plant were generated. Finally, a three-dimensional simulation model of the target plant was established using a computer visualization algorithm. The results suggest that the method can effectively extract the topology of a given branching plant and describe its productions, realizing algorithmic extraction of the topological structure and simplifying the extraction of branching-plant productions, which would otherwise be complex and time-consuming with a manual L-system approach. It improves the degree of automation in the L-system extraction of productions of specific branching plants, providing a new way to extract branching-plant production rules.
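
    Once productions have been extracted, the L-system side of the reconstruction is a plain string-rewriting loop. A minimal sketch, with a hypothetical bracketed production of the kind such a method might output:

        def expand(axiom, rules, depth):
            """Apply L-system production rules `depth` times to the axiom."""
            s = axiom
            for _ in range(depth):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        # Hypothetical production: F = grow a segment, [ ] = push/pop a branch,
        # + / - = turn. A turtle interpreter would render the resulting string.
        rules = {"F": "F[+F]F[-F]F"}
        print(expand("F", rules, 2))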

  15. The LabTube - a novel microfluidic platform for assay automation in laboratory centrifuges.

    PubMed

    Kloke, A; Fiebach, A R; Zhang, S; Drechsel, L; Niekrawietz, S; Hoehl, M M; Kneusel, R; Panthel, K; Steigert, J; von Stetten, F; Zengerle, R; Paust, N

    2014-05-07

    Assay automation is the key for successful transformation of modern biotechnology into routine workflows. Yet, it requires considerable investment in processing devices and auxiliary infrastructure, which is not cost-efficient for laboratories with low or medium sample throughput or point-of-care testing. To close this gap, we present the LabTube platform, which is based on assay specific disposable cartridges for processing in laboratory centrifuges. LabTube cartridges comprise interfaces for sample loading and downstream applications and fluidic unit operations for release of prestored reagents, mixing, and solid phase extraction. Process control is achieved by a centrifugally-actuated ballpen mechanism. To demonstrate the workflow and functionality of the LabTube platform, we show two LabTube automated sample preparation assays from laboratory routines: DNA extractions from whole blood and purification of His-tagged proteins. Equal DNA and protein yields were observed compared to manual reference runs, while LabTube automation could significantly reduce the hands-on-time to one minute per extraction.

  16. Automating Frame Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Franklin, Lyndsey; Tratz, Stephen C.

    2008-04-01

    Frame Analysis has come to play an increasingly strong role in the study of social movements in Sociology and Political Science. While significant steps have been made in providing a theory of frames and framing, a systematic characterization of the frame concept is still largely lacking, and there are no recognized criteria and methods that can be used to identify and marshal frame evidence reliably and in a time- and cost-effective manner. Consequently, current Frame Analysis work is still too reliant on manual annotation and subjective interpretation. The goal of this paper is to present an approach to the representation, acquisition and analysis of frame evidence which leverages Content Analysis, Information Extraction and Semantic Search methods to provide a systematic treatment of Frame Analysis and automate frame annotation.

  17. ASE extraction method for simultaneous carbon and nitrogen stable isotope analysis in soft tissues of aquatic organisms.

    PubMed

    Bodin, Nathalie; Budzinski, Hélène; Le Ménach, Karyn; Tapie, Nathalie

    2009-06-08

    Since lipids are depleted in ¹³C relative to proteins and carbohydrates, variations in lipid composition among species and within individuals significantly influence δ¹³C and may result in misleading ecological interpretations. Whereas lipid extraction before IRMS analysis constitutes a way of lipid-normalising stable isotope results, such procedures were abandoned because of the uncontrolled effects of the methods used (i.e., "Bligh & Dyer", Soxhlet, etc.) on δ¹⁵N. The aim of this work was to develop a simple, rapid and efficient lipid extraction method allowing simultaneous C and N stable isotope analysis in the biological soft tissues of aquatic organisms; the goal was to be free of the lipid influence on δ¹³C values without interfering with δ¹⁵N values. For that purpose, the modern automated pressurized liquid extraction technique ASE (accelerated solvent extraction) was selected. Eel muscles representative of a broad range of fat contents were extracted via ASE using different semi-polar solvents (100% dichloromethane and 80% n-hexane/20% acetone) and different temperature (ambient and 100 °C) and pressure (750 and 1900 psi) conditions. The results were discussed in terms of lipid extraction efficiency as well as δ¹³C and δ¹⁵N variability.

  18. Automated detection system of single nucleotide polymorphisms using two kinds of functional magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Liu, Hongna; Li, Song; Wang, Zhifei; Li, Zhiyang; Deng, Yan; Wang, Hua; Shi, Zhiyang; He, Nongyue

    2008-11-01

    Single nucleotide polymorphisms (SNPs) comprise the most abundant source of genetic variation in the human genome. Large-scale codominant SNP identification, especially for SNPs associated with complex diseases, has created the need for a completely high-throughput and automated genotyping method. Herein, we present an automated SNP detection system based on two kinds of functional magnetic nanoparticles (MNPs) and dual-color hybridization. The amido-modified MNPs (NH2-MNPs), prepared with APTES, were successfully used for direct DNA extraction from whole blood by electrostatic interaction, followed by PCR. Furthermore, biotinylated PCR products were captured on streptavidin-coated MNPs (SA-MNPs) and interrogated by hybridization with a pair of dual-color probes to determine the SNP, and the genotype of each sample was identified by scanning the microarray printed with the denatured fluorescent probes. This system provides a rapid, sensitive and highly versatile automated procedure that will greatly facilitate the analysis of different known SNPs in the human genome.

  19. On the Automation of the MarkIII Data Analysis System.

    NASA Astrophysics Data System (ADS)

    Schwegmann, W.; Schuh, H.

    1999-03-01

    Faster, semiautomatic data analysis is an important contribution to the acceleration of the VLBI procedure. A concept for the automation of one of the most widely used VLBI software packages, the MarkIII Data Analysis System, was developed. The program PWXCB, which extracts weather and cable calibration data from the station log files, was then automated by supplementing the existing Fortran77 code. The new program XLOG and its results are presented. Most tasks in VLBI data analysis are very complex, and their automation requires knowledge-based techniques. Thus, a knowledge-based system (KBS) for the support and guidance of the analyst is being developed using the AI workbench BABYLON, which is based on methods of artificial intelligence (AI). The advantages of a KBS for the MarkIII Data Analysis System and the steps required to build one are demonstrated, together with examples of the current status of the project.

  20. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  1. Extracting the Essential Cartographic Functionality of Programs on the Web

    NASA Astrophysics Data System (ADS)

    Ledermann, Florian

    2018-05-01

    Following Aristotle, F. P. Brooks (1987) emphasizes the distinction between "essential difficulties" and "accidental difficulties" as a key challenge in software engineering. From the point of view of cartography, it would be desirable to identify the cartographic essence of a program and subject it to additional scrutiny, while its accidental properties, again from the point of view of cartography, are usually of lesser relevance to cartographic analysis. In this paper, two methods that facilitate extracting the cartographic essence of programs are presented: close reading of their source code, and automated analysis of their runtime behavior. The advantages and shortcomings of both methods are discussed, followed by an outlook on future developments and potential applications.

  2. Low-Cost 3D Printers Enable High-Quality and Automated Sample Preparation and Molecular Detection

    PubMed Central

    Chan, Kamfai; Coen, Mauricio; Hardick, Justin; Gaydos, Charlotte A.; Wong, Kah-Yat; Smith, Clayton; Wilson, Scott A.; Vayugundla, Siva Praneeth; Wong, Season

    2016-01-01

    Most molecular diagnostic assays require upfront sample preparation steps to isolate the target’s nucleic acids, followed by its amplification and detection using various nucleic acid amplification techniques. Because molecular diagnostic methods are generally rather difficult to perform manually without highly trained users, automated and integrated systems are highly desirable but too costly for use at point-of-care or low-resource settings. Here, we showcase the development of a low-cost and rapid nucleic acid isolation and amplification platform by modifying entry-level 3D printers that cost between $400 and $750. Our modifications consisted of replacing the extruder with a tip-comb attachment that houses magnets to conduct magnetic particle-based nucleic acid extraction. We then programmed the 3D printer to conduct motions that can perform high-quality extraction protocols. Up to 12 samples can be processed simultaneously in under 13 minutes and the efficiency of nucleic acid isolation matches well against gold-standard spin-column-based extraction technology. Additionally, we used the 3D printer’s heated bed to supply heat to perform water bath-based polymerase chain reactions (PCRs). Using another attachment to hold PCR tubes, the 3D printer was programmed to automate the process of shuttling PCR tubes between water baths. By eliminating the temperature ramping needed in most commercial thermal cyclers, the run time of a 35-cycle PCR protocol was shortened by 33%. This article demonstrates that for applications in resource-limited settings, expensive nucleic acid extraction devices and thermal cyclers that are used in many central laboratories can be potentially replaced by a device modified from inexpensive entry-level 3D printers. PMID:27362424

  3. Automated extraction of ejection fraction for quality measurement using regular expressions in Unstructured Information Management Architecture (UIMA) for heart failure

    PubMed Central

    DuVall, Scott L; South, Brett R; Bray, Bruce E; Bolton, Daniel; Heavirland, Julia; Pickard, Steve; Heidenreich, Paul; Shen, Shuying; Weir, Charlene; Samore, Matthew; Goldstein, Mary K

    2012-01-01

    Objectives Left ventricular ejection fraction (EF) is a key component of heart failure quality measures used within the Department of Veteran Affairs (VA). Our goals were to build a natural language processing system to extract the EF from free-text echocardiogram reports to automate measurement reporting and to validate the accuracy of the system using a comparison reference standard developed through human review. This project was a Translational Use Case Project within the VA Consortium for Healthcare Informatics. Materials and methods We created a set of regular expressions and rules to capture the EF using a random sample of 765 echocardiograms from seven VA medical centers. The documents were randomly assigned to two sets: a set of 275 used for training and a second set of 490 used for testing and validation. To establish the reference standard, two independent reviewers annotated all documents in both sets; a third reviewer adjudicated disagreements. Results System test results for document-level classification of EF of <40% had a sensitivity (recall) of 98.41%, a specificity of 100%, a positive predictive value (precision) of 100%, and an F measure of 99.2%. System test results at the concept level had a sensitivity of 88.9% (95% CI 87.7% to 90.0%), a positive predictive value of 95% (95% CI 94.2% to 95.9%), and an F measure of 91.9% (95% CI 91.2% to 92.7%). Discussion An EF value of <40% can be accurately identified in VA echocardiogram reports. Conclusions An automated information extraction system can be used to accurately extract EF for quality measurement. PMID:22437073
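
    To illustrate the flavor of a regular-expression approach to EF capture, here is a toy Python pattern that finds EF mentions and flags document-level EF < 40%. It is an invented example, not the VA system's actual rule set.

        # Toy pattern, not the VA system's rules: capture EF mentions and
        # flag values (or range midpoints) below 40%.
        import re

        EF_PATTERN = re.compile(
            r"(?:ejection\s+fraction|LVEF|EF)[^0-9<>]{0,20}"   # the concept
            r"<?\s*(\d{1,2})(?:\s*-\s*(\d{1,2}))?\s*%",        # value or range
            re.IGNORECASE)

        def ef_below_40(report_text):
            for match in EF_PATTERN.finditer(report_text):
                low = int(match.group(1))
                high = int(match.group(2)) if match.group(2) else low
                if (low + high) / 2 < 40:
                    return True
            return False

        print(ef_below_40("The left ventricular ejection fraction is 30-35%."))  # True
        print(ef_below_40("LVEF estimated at 55 %."))                            # False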

  4. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports

    DOE PAGES

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...

    2017-05-03

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topography codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
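
    For readers unfamiliar with text CNNs, the following PyTorch sketch shows the general shape of such a classifier: embedded tokens, parallel convolutions over several window widths, max-pooling, and a linear layer over the 12 topography codes. The layer sizes are illustrative assumptions; the paper's exact architecture is not specified here.

        # Minimal PyTorch text-CNN sketch; vocabulary size, filter counts,
        # and the 12-class output layer are illustrative assumptions.
        import torch
        import torch.nn as nn

        class TextCNN(nn.Module):
            def __init__(self, vocab_size, embed_dim=128, n_classes=12):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                # Parallel convolutions over 3-, 4-, and 5-token windows.
                self.convs = nn.ModuleList(
                    nn.Conv1d(embed_dim, 100, kernel_size=k) for k in (3, 4, 5))
                self.classifier = nn.Linear(300, n_classes)

            def forward(self, token_ids):                   # (batch, seq_len)
                x = self.embed(token_ids).transpose(1, 2)   # (batch, embed, seq)
                pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
                return self.classifier(torch.cat(pooled, dim=1))

        model = TextCNN(vocab_size=20000)
        logits = model(torch.randint(0, 20000, (8, 400)))   # 8 reports, 400 tokens
        print(logits.shape)                                 # torch.Size([8, 12])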

  5. Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topography codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.

  6. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

    Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can be used to detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with a Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis function kernel. All experiments were carried out on the benchmark epilepsy EEG dataset, which consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification were conducted. The classification accuracy was evaluated using tenfold cross-validation. The classification results of the proposed approaches were compared with those of several existing techniques from the literature to substantiate the claim.
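
    A simplified stand-in for this pipeline can be put together with scikit-learn: plain PCA feature extraction (in place of the subpattern PCA variants) feeding an RBF-kernel SVM, scored by tenfold cross-validation. The synthetic data, component count, and SVM settings below are assumptions.

        # Simplified stand-in for the paper's pipeline: PCA features plus
        # an RBF-kernel SVM, evaluated with 10-fold cross-validation on
        # synthetic "EEG" segments (subpattern PCA variants would replace
        # the plain PCA step).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(42)
        X = rng.normal(size=(500, 4097))          # 500 signals, 4097 samples each
        y = rng.integers(0, 2, size=500)          # seizure / non-seizure labels

        pipeline = make_pipeline(StandardScaler(),
                                 PCA(n_components=40),
                                 SVC(kernel="rbf", C=10.0, gamma="scale"))
        scores = cross_val_score(pipeline, X, y, cv=10)
        print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")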

  7. An Automated High-Throughput System to Fractionate Plant Natural Products for Drug Discovery

    PubMed Central

    Tu, Ying; Jeffries, Cynthia; Ruan, Hong; Nelson, Cynthia; Smithson, David; Shelat, Anang A.; Brown, Kristin M.; Li, Xing-Cong; Hester, John P.; Smillie, Troy; Khan, Ikhlas A.; Walker, Larry; Guy, Kip; Yan, Bing

    2010-01-01

    The development of an automated, high-throughput fractionation procedure to prepare and analyze natural product libraries for drug discovery screening is described. Natural products obtained from plant materials worldwide were extracted and first prefractionated on polyamide solid-phase extraction cartridges to remove polyphenols, followed by high-throughput automated fractionation, drying, weighing, and reformatting for screening and storage. The analysis of fractions with UPLC coupled with MS, PDA and ELSD detectors provides information that facilitates characterization of compounds in active fractions. Screening of a portion of fractions yielded multiple assay-specific hits in several high-throughput cellular screening assays. This procedure modernizes the traditional natural product fractionation paradigm by seamlessly integrating automation, informatics, and multimodal analytical interrogation capabilities. PMID:20232897

  8. A semi-automated magnetic capture probe based DNA extraction and real-time PCR method applied in the Swedish surveillance of Echinococcus multilocularis in red fox (Vulpes vulpes) faecal samples.

    PubMed

    Isaksson, Mats; Hagström, Åsa; Armua-Fernandez, Maria Teresa; Wahlström, Helene; Ågren, Erik Olof; Miller, Andrea; Holmberg, Anders; Lukacs, Morten; Casulli, Adriano; Deplazes, Peter; Juremalm, Mikael

    2014-12-19

    Following the first finding of Echinococcus multilocularis in Sweden in 2011, 2985 red foxes (Vulpes vulpes) were analysed by the segmental sedimentation and counting technique. This is a labour-intensive method that requires handling of the whole carcass of the fox, resulting in a costly analysis. In an effort to reduce the cost of labour and sample handling, an alternative method has been developed. The method is sensitive and partially automated for detection of E. multilocularis in faecal samples. The method has been used in the Swedish E. multilocularis monitoring program for 2012-2013 on more than 2000 faecal samples. We describe a new semi-automated magnetic capture probe DNA extraction method and real-time hydrolysis probe polymerase chain reaction assay (MC-PCR) for the detection of E. multilocularis DNA in faecal samples from red fox. The diagnostic sensitivity was determined by validating the new method against the sedimentation and counting technique in fox samples collected in Switzerland, where E. multilocularis is highly endemic. Of 177 foxes analysed by the sedimentation and counting technique, E. multilocularis was detected in 93 animals. Eighty-two (88%, 95% CI 79.8-93.9) of these were positive in the MC-PCR. In foxes with more than 100 worms, the MC-PCR was positive in 44 out of 46 (95.7%) cases. The two MC-PCR-negative samples originated from foxes with only immature E. multilocularis worms. In foxes with 100 worms or less (n = 47), 38 (80.9%) were positive in the MC-PCR. The diagnostic specificity of the MC-PCR was evaluated using fox scats collected within the Swedish screening. Of 2158 samples analysed, two were positive. This implies that the specificity is at least 99.9% (CI 99.7-100). The MC-PCR proved to have a high sensitivity and a very high specificity. The test is partially automated but can also be performed manually if desired. It is well suited for nationwide E. multilocularis surveillance programs in which fox scats are sampled to reduce sampling costs and where a test with high sensitivity and very high specificity is needed.
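
    The reported sensitivity can be reproduced from the stated counts (82 of 93 sedimentation-positive foxes detected). The short check below uses a Wilson score interval, so its endpoints differ slightly from the paper's, which were presumably computed with a different (e.g., exact binomial) method.

        # Recompute the reported diagnostic sensitivity with a Wilson
        # score interval from the counts given in the abstract.
        from math import sqrt

        def wilson_ci(successes, n, z=1.96):
            """95% Wilson score confidence interval for a proportion."""
            p = successes / n
            centre = p + z * z / (2 * n)
            margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
            denom = 1 + z * z / n
            return (centre - margin) / denom, (centre + margin) / denom

        lo, hi = wilson_ci(82, 93)
        print(f"sensitivity = {82/93:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")
        # sensitivity = 88.2%, 95% CI = (80.1%, 93.3%)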

  9. Automated carotid artery intima layer regional segmentation.

    PubMed

    Meiburger, Kristen M; Molinari, Filippo; Acharya, U Rajendra; Saba, Luca; Rodrigues, Paulo; Liboni, William; Nicolaides, Andrew; Suri, Jasjit S

    2011-07-07

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid artery in ultrasound images based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed an IMT bias of -0.035 ± 0.186 mm, while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT but showed uniform behavior over the entire database. CAILRS outperformed all four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%) but higher than that of the other automated techniques.

  10. Automated carotid artery intima layer regional segmentation

    NASA Astrophysics Data System (ADS)

    Meiburger, Kristen M.; Molinari, Filippo; Rajendra Acharya, U.; Saba, Luca; Rodrigues, Paulo; Liboni, William; Nicolaides, Andrew; Suri, Jasjit S.

    2011-07-01

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid artery in ultrasound images based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed an IMT bias of -0.035 ± 0.186 mm, while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT but showed uniform behavior over the entire database. CAILRS outperformed all four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%) but higher than that of the other automated techniques.

  11. Automated solid-phase extraction hyphenated to voltammetry for the determination of quercetin using magnetic nanoparticles and sequential injection lab-on-valve approach.

    PubMed

    Wang, Yang; Wang, Lu; Tian, Tian; Hu, Xiaoya; Yang, Chun; Xu, Qin

    2012-05-21

    In this study, an automated sequential injection lab-on-valve (SI-LOV) system was designed for the on-line matrix removal and preconcentration of quercetin. Octadecyl-functionalized magnetic silica nanoparticles were prepared and packed into the microcolumn of the LOV as adsorbents. After being adsorbed through hydrophobic interaction, the analyte was eluted and subsequently introduced into the electrochemical flow cell for voltammetric quantification. The main parameters affecting the performance of solid-phase extraction, such as sample pH and flow rate, eluent solution and volume, and accumulation potential and time, were investigated in detail. Under the optimum experimental conditions, a linear calibration curve was obtained in the range of 1.0 × 10(-8) to 1.0 × 10(-5) mol L(-1) with R(2) = 0.9979. The limit of detection (LOD) and limit of quantitation (LOQ) were 1.3 × 10(-9) and 4.3 × 10(-9) mol L(-1), respectively. The relative standard deviation (RSD) for the determination of 1.0 × 10(-6) mol L(-1) quercetin was found to be 2.9% (n = 11), along with a sampling frequency of 40 h(-1). The automated method was applied to the determination of quercetin in human urine and red wine samples through recovery experiments, and the obtained results were in good agreement with those obtained by the HPLC method.

  12. ICECAP: an integrated, general-purpose, automation-assisted IC50/EC50 assay platform.

    PubMed

    Li, Ming; Chou, Judy; King, Kristopher W; Jing, Jing; Wei, Dong; Yang, Liyu

    2015-02-01

    IC50 and EC50 values are commonly used to evaluate drug potency. Mass spectrometry (MS)-centric bioanalytical and biomarker labs are now conducting IC50/EC50 assays, which, if done manually, are tedious and error-prone. Existing bioanalytical sample preparation automation systems cannot meet IC50/EC50 assay throughput demand. A general-purpose, automation-assisted IC50/EC50 assay platform was developed to automate the calculation of spiking- and matrix-solution preparation schemes, the actual preparation of spiking and matrix solutions, and the flexible sample extraction procedures after incubation. In addition, the platform automates the data extraction, nonlinear regression curve fitting, computation of IC50/EC50 values, graphing, and reporting. The automation-assisted IC50/EC50 assay platform can handle a whole class of assays with varying conditions. In each run, the system can process up to 32 compounds and up to 10 concentration levels per compound, and it greatly improves IC50/EC50 assay experimental productivity and data processing efficiency. © 2014 Society for Laboratory Automation and Screening.
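
    The nonlinear regression step mentioned above is conventionally a four-parameter logistic fit; a minimal SciPy version is sketched below. The sample concentrations and responses are invented, and nothing here reflects the ICECAP platform's internal code.

        # Sketch of the curve-fitting stage: a four-parameter logistic
        # dose-response model fitted with SciPy to recover an IC50.
        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(conc, bottom, top, ic50, hill):
            """Four-parameter logistic dose-response model."""
            return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # uM
        resp = np.array([98.0, 95.0, 88.0, 70.0, 48.0, 25.0, 10.0, 5.0])   # % activity

        popt, _ = curve_fit(four_pl, conc, resp,
                            p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
        print(f"IC50 = {popt[2]:.2f} uM, Hill slope = {popt[3]:.2f}")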

  13. NMRNet: A deep learning approach to automated peak picking of protein NMR spectra.

    PubMed

    Klukowski, Piotr; Augoff, Michal; Zieba, Maciej; Drwal, Maciej; Gonczarek, Adam; Walczak, Michal J

    2018-03-14

    Automated selection of signals in protein NMR spectra, known as peak picking, has been studied for over 20 years; nevertheless, existing peak picking methods remain largely deficient. Accurate and precise automated peak picking would accelerate structure calculation and the analysis of dynamics and interactions of macromolecules. Recent advances in handling big data, together with an outburst of machine learning techniques, offer an opportunity to tackle the peak picking problem substantially faster than manual picking and on par with human accuracy. In particular, deep learning has proven to systematically achieve human-level performance in various recognition tasks, and thus emerges as an ideal tool to address automated identification of NMR signals. We have applied a convolutional neural network for visual analysis of multidimensional NMR spectra. A comprehensive test on 31 manually annotated spectra has demonstrated top-tier average precision (AP) of 0.9596, 0.9058 and 0.8271 for backbone, side-chain and NOESY spectra, respectively. Furthermore, a combination of extracted peak lists with the automated assignment routine FLYA outperformed other methods, including the manual one, and led to correct resonance assignment at the levels of 90.40%, 89.90% and 90.20% for three benchmark proteins. The proposed model is part of the Dumpling software (a platform for protein NMR data analysis) and is available at https://dumpling.bio/. Contact: michaljerzywalczak@gmail.com; piotr.klukowski@pwr.edu.pl. Supplementary data are available at Bioinformatics online.

  14. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
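
    A bare-bones version of kernel interpolation in this setting is a Nadaraya-Watson estimate with a Gaussian kernel: irregular (depth, activity) samples are smoothed onto a regular depth grid. The kernel width, units, and synthetic data below are assumptions, not the paper's parameters.

        # Minimal Gaussian-kernel interpolation: turn irregularly sampled
        # feature activity (recorded against electrode depth) into a
        # smooth spatial profile. Kernel width sigma is a tunable assumption.
        import numpy as np

        def kernel_interpolate(depths, activity, query_depths, sigma=0.3):
            """Nadaraya-Watson estimate of activity at each query depth (mm)."""
            diffs = query_depths[:, None] - depths[None, :]
            weights = np.exp(-0.5 * (diffs / sigma) ** 2)
            return (weights @ activity) / weights.sum(axis=1)

        rng = np.random.default_rng(1)
        depths = np.sort(rng.uniform(-10, 5, 80))      # mm relative to target
        activity = np.exp(-0.5 * ((depths + 2) / 1.5) ** 2) + rng.normal(0, 0.05, 80)
        grid = np.linspace(-10, 5, 151)
        profile = kernel_interpolate(depths, activity, grid)
        print(grid[np.argmax(profile)])                # depth of peak activity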

  15. Coupling solid-phase extraction and enzyme-linked immunosorbent assay for ultratrace determination of herbicides in pristine water

    USGS Publications Warehouse

    Aga, D.S.; Thurman, E.M.

    1993-01-01

    Solid-phase extraction (SPE) and enzyme-linked immunosorbent assay (ELISA) were coupled for automated trace analysis of pristine water samples containing 2-chloro-4-(ethylamino)-6-(isopropylamino)-s-triazine (atrazine) and 2-chloro-2′,6′-diethyl-N-(methoxymethyl)acetanilide (alachlor). The isolation of the two herbicides on a C18 resin involved the selection of an elution solvent that both removes interfering substances and is compatible with ELISA. Ethyl acetate was selected as the elution solvent, followed by a solvent exchange with methanol/water (20/80, % v/v). The SPE-ELISA method has a detection limit of 5.0 ng/L (5 ppt), >90% recovery, and a relative standard deviation of ±10%. The performance of a microtiter plate-based ELISA and a magnetic particle-based ELISA coupled to SPE was also evaluated. Although the sensitivity of the two ELISA methods was comparable, the precision using magnetic particles was improved considerably (±10% versus ±20%) because of the faster reaction kinetics provided by the magnetic particles. Finally, SPE-ELISA and isotope dilution gas chromatography/mass spectrometry correlated well (correlation coefficient of 0.96) for lake-water samples. The SPE-ELISA method is simple and may have broader applications for the inexpensive automated analysis of other contaminants in water at trace levels.

  16. An automated leaching method for the determination of opal in sediments and particulate matter

    NASA Astrophysics Data System (ADS)

    Müller, Peter J.; Schneider, Ralph

    1993-03-01

    An automated leaching method for the analysis of biogenic silica (opal) in sediments and particulate matter is described. The opaline material is extracted with 1 M NaOH at 85°C in a stainless steel vessel under constant stirring, and the increase in dissolved silica is continuously monitored. For this purpose, a minor portion of the leaching solution is cycled to an autoanalyzer and analyzed for dissolved silicon by molybdate-blue spectrophotometry. The resulting absorbance versus time plot is then evaluated according to the extrapolation procedure of DeMaster (1981). The method has been tested on sponge spicules, radiolarian tests, Recent and Pliocene diatomaceous ooze samples, clay minerals and quartz, artificial sediment mixtures, and various plankton, sediment trap and sediment samples. The results show that the relevant forms of biogenic opal in Quaternary sediments are quantitatively recovered. The time required for an analysis depends on the sample type, ranging from 10 to 20 min for plankton and sediment trap material and up to 40-60 min for Quaternary sediments. The silica co-extracted from silicate minerals is largely compensated for by the applied extrapolation technique. The remaining degree of uncertainty is on the order of 0.4 wt% SiO2 or less, depending on the clay mineral composition and content.

  17. Retinal Microaneurysms Detection Using Gradient Vector Analysis and Class Imbalance Classification.

    PubMed

    Dai, Baisheng; Wu, Xiangqian; Bu, Wei

    2016-01-01

    Retinal microaneurysms (MAs) are the earliest clinically observable lesions of diabetic retinopathy. Reliable automated MA detection is thus critical for early diagnosis of diabetic retinopathy. This paper proposes a novel method for automated MA detection in color fundus images based on gradient vector analysis and class-imbalance classification, which is composed of two stages, i.e., candidate MA extraction and classification. In the first stage, a candidate MA extraction algorithm is devised by analyzing the gradient field of the image, in which a multi-scale log condition number map is computed based on the gradient vectors for vessel removal, and the candidate MAs are then localized according to the second-order directional derivatives computed in different directions. Due to the complexity of fundus images, besides a small number of true MAs, there are also a large number of non-MAs in the extracted candidates. Classifying the true MAs and the non-MAs is an extremely class-imbalanced classification problem. Therefore, in the second stage, several types of features, including geometry, contrast, intensity, edge, texture, region descriptors and other features, are extracted from the candidate MAs, and a class-imbalance classifier, i.e., RUSBoost, is trained for MA classification. With the Retinopathy Online Challenge (ROC) criterion, the proposed method achieves an average sensitivity of 0.433 at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per image on the ROC database, which is comparable with state-of-the-art approaches, and 0.321 on the DiaRetDB1 V2.1 database, which outperforms the state-of-the-art approaches.
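
    The second-stage classifier can be sketched with the RUSBoost implementation in imbalanced-learn; the synthetic features below merely stand in for the geometry, contrast, and texture descriptors described above, and the class ratio and hyperparameters are assumptions.

        # Class-imbalance classification sketch with RUSBoost, the family
        # of classifier the paper uses to separate true MAs from spurious
        # candidates. Synthetic features stand in for the real descriptors.
        from imblearn.ensemble import RUSBoostClassifier
        from sklearn.datasets import make_classification
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        # ~2% positives, mimicking the rarity of true microaneurysms.
        X, y = make_classification(n_samples=5000, n_features=20,
                                   weights=[0.98, 0.02], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        clf = RUSBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te), digits=3))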

  18. Merging dietary assessment with the adolescent lifestyle.

    PubMed

    Schap, T E; Zhu, F; Delp, E J; Boushey, C J

    2014-01-01

    The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users would take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera [e.g. Apple iPhone, Apple iPod Touch (Apple Inc., Cupertino, CA, USA); Nexus One (Google, Mountain View, CA, USA)]. Once the images are taken, the images are transferred to a back-end server for automated analysis. The first step in this process is image analysis (i.e. segmentation, feature extraction and classification), which allows for automated food identification. Portion size estimation is also automated via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies to provide a detailed diet analysis for use in epidemiological or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarises the system design and the evidence-based development of image-based methods for dietary assessment among children. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.

  19. Semi-automation of Doppler Spectrum Image Analysis for Grading Aortic Valve Stenosis Severity.

    PubMed

    Niakšu, O; Balčiunaitė, G; Kizlaitis, R J; Treigys, P

    2016-01-01

    Doppler echocardiography analysis has become a gold standard in the modern diagnosis of heart diseases. In this paper, we propose a set of techniques for semi-automated parameter extraction for aortic valve stenosis severity grading. The main objective of the study is to create echocardiography image processing techniques that minimize the manual image processing work of clinicians and reduce human error rates. Aortic valve and left ventricular outflow tract spectrogram images have been processed and analyzed. A novel method was developed to trace systoles and to extract diagnostically relevant features. The results of the introduced method have been compared to the findings of the participating cardiologists. The experimental results showed that the accuracy of the proposed method is comparable to manual measurement performed by medical professionals. Linear regression analysis of the calculated parameters and the measurements manually obtained by the cardiologists resulted in strongly correlated values: R2 of 0.99 for both peak systolic velocity and mean pressure gradient, with mean differences of 0.02 m/s and 4.09 mmHg, respectively, and R2 of 0.89 for aortic valve area, with a mean difference of 0.19 mm between the two methods. The introduced Doppler echocardiography image processing method can be used as computer-aided assistance in aortic valve stenosis diagnostics. In our future work, we intend to improve the precision of left ventricular outflow tract spectrogram measurements and apply data mining methods to propose a clinical decision support system for diagnosing aortic valve stenosis.

  20. A displacement pump procedure to load extracts for automated gel permeation chromatography.

    PubMed

    Daft, J; Hopper, M; Hensley, D; Sisk, R

    1990-01-01

    Automated gel permeation chromatography (GPC) effectively separates lipids from pesticides in sample extracts that contain fat. Using a large syringe to manually load sample extracts onto GPC models having 5 mL holding loops is awkward, slow, and potentially hazardous. Loading with a small-volume displacement pump, however, is convenient and fast (ca 1 loop every 20 s). More importantly, the analyst is not exposed to toxic organic vapors, because the loading pump and its connecting lines do not leak the way a syringe does.

  1. Drug side effect extraction from clinical narratives of psychiatry and psychology patients

    PubMed Central

    Kocher, Jean-Pierre A; Chute, Christopher G; Savova, Guergana K

    2011-01-01

    Objective To extract physician-asserted drug side effects from electronic medical record clinical narratives. Materials and methods Pattern matching rules were manually developed through examining keywords and expression patterns of side effects to discover an individual side effect and causative drug relationship. A combination of machine learning (C4.5) using side effect keyword features and pattern matching rules was used to extract sentences that contain side effect and causative drug pairs, enabling the system to discover most side effect occurrences. Our system was implemented as a module within the clinical Text Analysis and Knowledge Extraction System. Results The system was tested in the domain of psychiatry and psychology. The rule-based system extracting side effects and causative drugs produced an F score of 0.80 (0.55 excluding allergy section). The hybrid system identifying side effect sentences had an F score of 0.75 (0.56 excluding allergy section) but covered more side effect and causative drug pairs than individual side effect extraction. Discussion The rule-based system was able to identify most side effects expressed by clear indication words. More sophisticated semantic processing is required to handle complex side effect descriptions in the narrative. We demonstrated that our system can be trained to identify sentences with complex side effect descriptions that can be submitted to a human expert for further abstraction. Conclusion Our system was able to extract most physician-asserted drug side effects. It can be used in either an automated mode for side effect extraction or semi-automated mode to identify side effect sentences that can significantly simplify abstraction by a human expert. PMID:21946242

  2. An online peak extraction algorithm for ion mobility spectrometry data.

    PubMed

    Kopczynski, Dominik; Rahmann, Sven

    2015-01-01

    Ion mobility (IM) spectrometry (IMS), coupled with multi-capillary columns (MCCs), has been gaining importance for biotechnological and medical applications because of its ability to detect and quantify volatile organic compounds (VOC) at low concentrations in the air or in exhaled breath at ambient pressure and temperature. Ongoing miniaturization of spectrometers creates the need for reliable data analysis on-the-fly in small embedded low-power devices. We present the first fully automated online peak extraction method for MCC/IMS measurements consisting of several thousand individual spectra. Each individual spectrum is processed as it arrives, removing the need to store the measurement before starting the analysis, as is currently the state of the art. Thus the analysis device can be an inexpensive low-power system such as the Raspberry Pi. The key idea is to extract one-dimensional peak models (with four parameters) from each spectrum and then merge these into peak chains and finally two-dimensional peak models. We describe the different algorithmic steps in detail and evaluate the online method against state-of-the-art peak extraction methods.

  3. Comparison of two methods for measuring γ-H2AX nuclear fluorescence as a marker of DNA damage in cultured human cells: applications for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Anderson, D.; Andrais, B.; Mirzayans, R.; Siegbahn, E. A.; Fallone, B. G.; Warkentin, B.

    2013-06-01

    Microbeam radiation therapy (MRT) delivers single fractions of very high doses of synchrotron x-rays using arrays of microbeams. In animal experiments, MRT has achieved higher tumour control and less normal tissue toxicity compared to single-fraction broad beam irradiations of much lower dose. The mechanism behind the normal tissue sparing of MRT has yet to be fully explained. An accurate method for evaluating DNA damage, such as the γ-H2AX immunofluorescence assay, will be important for understanding the role of cellular communication in the radiobiological response of normal and cancerous cell types to MRT. We compare two methods of quantifying γ-H2AX nuclear fluorescence for uniformly irradiated cell cultures: manual counting of γ-H2AX foci by eye, and an automated, MATLAB-based fluorescence intensity measurement. We also demonstrate the automated analysis of cell cultures irradiated with an array of microbeams. In addition to offering a relatively high dynamic range of γ-H2AX signal versus irradiation dose ( > 10 Gy), our automated method provides speed, robustness, and objectivity when examining a series of images. Our in-house analysis facilitates the automated extraction of the spatial distribution of the γ-H2AX intensity with respect to the microbeam array — for example, the intensities in the peak (high dose area) and valley (area between two microbeams) regions. The automated analysis is particularly beneficial when processing a large number of samples, as is needed to systematically study the relationship between the numerous dosimetric and geometric parameters involved with MRT (e.g., microbeam width, microbeam spacing, microbeam array dimensions, peak dose, valley dose, and geometric arrangement of multiple arrays) and the resulting DNA damage.
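
    An automated intensity measurement of this kind reduces, at its core, to segmenting nuclei and averaging the damage-marker signal inside each label. The Python sketch below shows that skeleton (the original analysis was MATLAB-based); the threshold, morphology step, and synthetic images are assumptions.

        # Bare-bones automated intensity measurement: segment nuclei by
        # thresholding a nuclear (e.g., DAPI) channel, then report mean
        # gamma-H2AX fluorescence per nucleus. Settings are assumptions.
        import numpy as np
        from scipy import ndimage

        def mean_intensity_per_nucleus(dapi, gamma_h2ax, threshold=0.5):
            """Label nuclei in the DAPI channel; average signal inside each."""
            mask = dapi > threshold * dapi.max()
            mask = ndimage.binary_opening(mask, iterations=2)   # drop speckle
            labels, n = ndimage.label(mask)
            return ndimage.mean(gamma_h2ax, labels, index=range(1, n + 1))

        rng = np.random.default_rng(7)
        dapi = rng.random((512, 512)) * 0.2
        gamma = rng.random((512, 512)) * 0.1
        dapi[100:140, 100:140] = 1.0         # one synthetic nucleus
        gamma[100:140, 100:140] += 0.8       # elevated damage signal inside it
        print(mean_intensity_per_nucleus(dapi, gamma))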

  4. Rapid Automated Sample Preparation for Biological Assays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shusteff, M

    Our technology utilizes acoustic, thermal, and electric fields to separate out contaminants such as debris or pollen from environmental samples, lyse open cells, and extract the DNA from the lysate. The objective of the project is to optimize the system described for a forensic sample, and demonstrate its performance for integration with downstream assay platforms (e.g. MIT-LL's ANDE). We intend to increase the quantity of DNA recovered from the sample beyond the approximately 80% currently achieved using solid phase extraction methods. Task 1: Develop and test an acoustic filter for cell extraction. Task 2: Develop and test lysis chip. Task 3: Develop and test DNA extraction chip. All chips have been fabricated based on the designs laid out in last month's report.

  5. A brief review of machine vision in the context of automated wood identification systems

    Treesearch

    John C. Hermanson; Alex C. Wiedenhoeft

    2011-01-01

    The need for accurate and rapid field identification of wood to combat illegal logging around the world is outpacing the ability to train personnel to perform this task. Despite increased interest in non-anatomical (DNA, spectroscopic, chemical) methods for wood identification, anatomical characteristics are the least labile data that can be extracted from solid wood...

  6. Skeletal maturity determination from hand radiograph by model-based analysis

    NASA Astrophysics Data System (ADS)

    Vogelsang, Frank; Kohnen, Michael; Schneider, Hansgerd; Weiler, Frank; Kilbinger, Markus W.; Wein, Berthold B.; Guenther, Rolf W.

    2000-06-01

    Building on the model-based segmentation algorithm for hand radiographs proposed in our previous work, we now present a method to determine skeletal maturity by automated analysis of regions of interest (ROIs). These ROIs, which include the epiphyseal and carpal bones most important for skeletal maturity determination, can be extracted from the radiograph by knowledge-based algorithms.

  7. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. Video images of figure skating contain irregular trajectories, various postures, rapid movements, and varied costume colors, so it is difficult to define features useful for image tracking. On the other hand, an ice rink has a limited area and uniformly high intensity, and skating is always performed on the ice. In the proposed system, the ice rink region is first extracted from a video image by the region growing method, and a skater region is then extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed on 10 training scenes show a skater extraction rate of approximately 98%. It was therefore concluded that tracking with camera control was successful for almost all the cases considered in the study.
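
    The camera control step is essentially closed-loop centering; a toy proportional controller illustrates the idea. The gain, image size, and sign conventions below are hypothetical, not the system's actual parameters.

        # Toy proportional controller: nudge pan/tilt so the tracked
        # skater centroid drifts toward image centre. All values hypothetical.
        def pan_tilt_step(centroid, image_size=(1920, 1080), gain=0.02):
            """Return (pan, tilt) increments in degrees from pixel error."""
            cx, cy = image_size[0] / 2, image_size[1] / 2
            err_x, err_y = centroid[0] - cx, centroid[1] - cy
            return gain * err_x, -gain * err_y   # image y grows downward

        # Skater detected left of and below centre: pan left, tilt down.
        print(pan_tilt_step((700, 800)))   # (-5.2, -5.2)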

  8. Regenerable immuno-biochip for screening ochratoxin A in green coffee extract using an automated microarray chip reader with chemiluminescence detection.

    PubMed

    Sauceda-Friebe, Jimena C; Karsunke, Xaver Y Z; Vazac, Susanna; Biselli, Scarlett; Niessner, Reinhard; Knopp, Dietmar

    2011-03-18

    Ochratoxin A (OTA) can contaminate foodstuffs in the ppb to ppm range and, once formed, is difficult to remove. Because of its toxicity and potential risks to human health, rapid, efficient detection methods that comply with legal maximum residue limits are needed. In this work we synthesized an OTA conjugate functionalized with a water-soluble peptide for covalent immobilization on a glass biochip by means of contact spotting. The chip was used for OTA determination in an indirect competitive immunoassay format with flow-through reagent addition and chemiluminescence detection, carried out with the stand-alone automated Munich Chip Reader 3 (MCR 3) platform. A buffer model and real green coffee extracts were used for this purpose. At present, covalent conjugate immobilization allows for at least 20 assay-regeneration cycles of the biochip surface. The total analysis time for a single sample, including measurement and surface regeneration, was 12 min, and the LOQ of OTA in green coffee extract was 0.3 μg L(-1), which corresponds to 7 μg kg(-1). Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    PubMed

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  10. Automation of static and dynamic non-dispersive liquid phase microextraction. Part 1: Approaches based on extractant drop-, plug-, film- and microflow-formation.

    PubMed

    Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján

    2016-02-04

    Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for state-of-the-art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of the solvent and sample required by classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can all be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review comprehensively surveys, in two parts, developments in the automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies and techniques, their operational advantages, and their potentials are further described and discussed. In this first part, an introduction to LPME, its static and dynamic operation modes, and its automation methodologies is given. The LPME techniques are classified according to the different approaches to protecting the extraction solvent, using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent, such as membranes and porous media, are reviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Automation of static and dynamic non-dispersive liquid phase microextraction. Part 2: Approaches based on impregnated membranes and porous supports.

    PubMed

    Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján

    2016-02-11

    A critical overview of the automation of modern liquid phase microextraction (LPME) approaches based on the liquid impregnation of porous sorbents and membranes is presented. It is the continuation of part 1, in which non-dispersive LPME techniques based on the use of the extraction phase (EP) in the form of a drop, plug, film, or microflow were surveyed. Compared to the approaches described in part 1, porous materials provide an improved support for the EP while enlarging its contact surface and reducing the risk of loss through incident flow or components of the surrounding matrix. Solvent-impregnated membranes or hollow fibres are furthermore ideally suited for analyte extraction with simultaneous or subsequent back-extraction. Their use can therefore improve procedural robustness and reproducibility, and it "opens the door" to new operation modes and fields of application. However, additional work and time are required for membrane replacement and renewed impregnation. Automation of porous support-based and membrane-based approaches plays an important role in achieving better reliability, rapidity, and reproducibility compared to manual assays; automated renewal of the extraction solvent and the coupling of sample pretreatment with the detection instrumentation can be named as examples. The different LPME methodologies using impregnated membranes and porous supports for the extraction phase, the different strategies for their automation, and their analytical applications are comprehensively described and discussed in this part. Finally, an outlook on future demands and perspectives of the LPME techniques from both parts, as a promising area in the field of sample pretreatment, is given. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Extracting Lane Geometry and Topology Information from Vehicle Fleet Trajectories in Complex Urban Scenarios Using a Reversible Jump Mcmc Method

    NASA Astrophysics Data System (ADS)

    Roeth, O.; Zaum, D.; Brenner, C.

    2017-05-01

    Highly automated driving (HAD) requires maps not only of high spatial precision but also of unprecedented currency. Traditionally, small, highly specialized fleets of measurement vehicles are used to generate such maps. Nevertheless, for achieving city-wide or even nation-wide coverage, automated map update mechanisms based on very large vehicle fleet data gain importance, since highly frequent measurements are only to be obtained using such an approach. Furthermore, the processing of imprecise mass data, in contrast to few dedicated highly accurate measurements, calls for a high degree of automation. We present a method for the generation of lane-accurate road network maps from vehicle trajectory data (GPS or better). Our approach therefore allows for exploiting today's connected vehicle fleets for the generation of HAD maps. The presented algorithm is based on elementary building blocks, which guarantee useful lane models, and uses a Reversible Jump Markov chain Monte Carlo method to explore the model parameters in order to reconstruct the model most likely to have emitted the input data. The approach is applied to a challenging urban real-world scenario with different trajectory accuracy levels and is evaluated against a LIDAR-based ground truth map.

  13. Advantages of automation in plasma sample preparation prior to HPLC/MS/MS quantification: application to the determination of cilazapril and cilazaprilat in a bioequivalence study.

    PubMed

    Kolocouri, Filomila; Dotsikas, Yannis; Apostolou, Constantinos; Kousoulos, Constantinos; Soumelas, Georgios-Stefanos; Loukas, Yannis L

    2011-01-01

    An HPLC/MS/MS method characterized by complete automation and high throughput was developed for the determination of cilazapril and its active metabolite cilazaprilat in human plasma. All sample preparation and analysis steps were performed by using 2.2 mL 96 deep-well plates, while robotic liquid handling workstations were utilized for all liquid transfer steps, including liquid-liquid extraction. The whole procedure was very fast compared to a manual procedure with vials and no automation. The method also had a very short chromatographic run time of 1.5 min. Sample analysis was performed by RP-HPLC/MS/MS with positive electrospray ionization using multiple reaction monitoring. The calibration curve was linear in the range of 0.500-300 and 0.250-150 ng/mL for cilazapril and cilazaprilat, respectively. The proposed method was fully validated and proved to be selective, accurate, precise, reproducible, and suitable for the determination of cilazapril and cilazaprilat in human plasma. Therefore, it was applied to a bioequivalence study after per os administration of 2.5 mg tablet formulations of cilazapril.

  14. Towards a method of rapid extraction of strontium-90 from urine: urine pretreatment and alkali metal removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawkins, C.; Dietz, M.; Kaminski, M.

    2016-03-01

    A technical program to support the Centers for Disease Control and Prevention is being developed to provide an analytical method for rapid extraction of Sr-90 from urine, with the intent of assessing the general population's exposure during an emergency response to a radiological terrorist event. Results are presented on the progress in urine sample preparation and chemical separation steps that provide accurate and quantitative detection of Sr-90, based upon an automated column separation sequence and a liquid scintillation assay. Batch extractions were used to evaluate the urine pretreatment and the column separation efficiency and loading capacity based upon commercial extractant-loaded resins. An efficient pretreatment process for decolorizing and removing organics from urine without measurable loss of radiostrontium from the sample was demonstrated. In addition, the Diphonix® resin shows promise for the removal of high concentrations of common strontium interferents in urine as a first separation step for Sr-90 analysis.

  15. The effects of automated scatter feeders on captive grizzly bear activity budgets.

    PubMed

    Andrews, Nathan L P; Ha, James C

    2014-01-01

    Although captive bears are popular zoo attractions, they are known to exhibit high levels of repetitive behaviors (RBs). These behaviors have also made them particularly popular subjects for welfare research. To date, most research on ursid welfare has focused on various feeding methods that seek to increase time spent searching for, extracting, or consuming food. Prior research indicates an average 50% reduction in RBs when attempts are successful and, roughly, a 50% success rate across studies. This research focused on decreasing time spent in RBs while increasing time spent active, by increasing time spent searching for, extracting, and consuming food. The utility of timed, automated scatter feeders was examined for use with captive grizzly bears (Ursus arctos horribilis). Findings include a significant decrease in time spent in RBs and a significant increase in time spent active while the feeders were in use. Further, the bears exhibited a wider range of behaviors and greater use of their enclosure.

  16. Automated extraction and validation of children's gait parameters with the Kinect.

    PubMed

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This work therefore develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age analyzes the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. The spatial and temporal gait parameters, estimated automatically, also exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, over a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.

  17. Expert and crowd-sourced validation of an individualized sleep spindle detection method employing complex demodulation and individualized normalization

    PubMed Central

    Ray, Laura B.; Sockeel, Stéphane; Soon, Melissa; Bore, Arnaud; Myhr, Ayako; Stojanoski, Bobby; Cusack, Rhodri; Owen, Adrian M.; Doyon, Julien; Fogel, Stuart M.

    2015-01-01

    A spindle detection method was developed that: (1) extracts the signal of interest (i.e., spindle-related phasic changes in sigma) relative to ongoing “background” sigma activity using complex demodulation, (2) accounts for variations of spindle characteristics across the night, scalp derivations and between individuals, and (3) employs a minimum number of sometimes arbitrary, user-defined parameters. Complex demodulation was used to extract instantaneous power in the spindle band. To account for intra- and inter-individual differences, the signal was z-score transformed using a 60 s sliding window, per channel, over the course of the recording. Spindle events were detected with a z-score threshold corresponding to a low probability (e.g., 99th percentile). Spindle characteristics, such as amplitude, duration and oscillatory frequency, were derived for each individual spindle following detection, which permits spindles to be subsequently and flexibly categorized as slow or fast spindles from a single detection pass. Spindles were automatically detected in 15 young healthy subjects. Two experts manually identified spindles from C3 during Stage 2 sleep, from each recording; one employing conventional guidelines, and the other, identifying spindles with the aid of a sigma (11–16 Hz) filtered channel. These spindles were then compared between raters and to the automated detection to identify the presence of true positives, true negatives, false positives and false negatives. This method of automated spindle detection resolves or avoids many of the limitations that complicate automated spindle detection, and performs well compared to a group of non-experts, and importantly, has good external validity with respect to the extant literature in terms of the characteristics of automatically detected spindles. PMID:26441604
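
    A condensed sketch of the described detection chain is given below: complex demodulation to obtain instantaneous sigma-band power, a sliding z-score normalization over a 60 s window, and a threshold at a low-probability z value. The filter order, bandwidth, and synthetic signal are assumptions.

        # Sketch of the pipeline: complex demodulation -> instantaneous
        # sigma power -> 60 s sliding z-score -> low-probability threshold.
        import numpy as np
        from scipy.signal import butter, filtfilt
        from scipy.stats import norm

        def detect_spindles(eeg, fs, f0=13.5, bw=2.5, win_s=60, pctl=0.99):
            t = np.arange(eeg.size) / fs
            demod = eeg * np.exp(-2j * np.pi * f0 * t)      # shift sigma band to 0 Hz
            b, a = butter(4, bw / (fs / 2))                 # low-pass sets the bandwidth
            lp = filtfilt(b, a, demod.real) + 1j * filtfilt(b, a, demod.imag)
            power = np.abs(lp) ** 2                         # instantaneous sigma power
            half = int(win_s * fs / 2)
            z = np.empty_like(power)
            for i in range(power.size):                     # sliding-window z-score
                seg = power[max(0, i - half):i + half]
                z[i] = (power[i] - seg.mean()) / seg.std()
            return z > norm.ppf(pctl)                       # boolean spindle mask

        fs = 256
        t = np.arange(0, 30, 1 / fs)
        eeg = np.random.default_rng(3).normal(0, 10, t.size)
        eeg[2560:2816] += 30 * np.sin(2 * np.pi * 13.5 * t[2560:2816])  # 1 s burst
        print(detect_spindles(eeg, fs).sum(), "samples above threshold")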

  18. A fully automatable enzymatic method for DNA extraction from plant tissues

    PubMed Central

    Manen, Jean-François; Sinitsyna, Olga; Aeschbach, Lorène; Markov, Alexander V; Sinitsyn, Arkady

    2005-01-01

    Background DNA extraction from plant tissues, unlike DNA isolation from mammalian tissues, remains difficult due to the presence of a rigid cell wall around the plant cells. Currently used methods inevitably require a laborious mechanical grinding step, necessary to disrupt the cell wall for the release of DNA. Results Using a cocktail of different carbohydrases, a method was developed that enables a complete digestion of the plant cell walls and subsequent DNA release. Optimized conditions for the digestion reaction minimize DNA shearing and digestion, and maximize DNA release from the plant cell. The method gave good results in 125 of the 156 tested species. Conclusion In combination with conventional DNA isolation techniques, the new enzymatic method allows high-yield, high-molecular-weight DNA to be obtained, which can be used for many applications, including genome characterization by AFLP, RAPD and SSR. Automation of the protocol (from leaf disks to DNA) is possible with existing workstations. PMID:16269076

  19. Protein extraction from methanol fixed paraffin embedded tissue blocks: A new possibility using cell blocks

    PubMed Central

    Kokkat, Theresa J.; McGarvey, Diane; Patel, Miral S.; Tieniber, Andrew D.; LiVolsi, Virginia A.; Baloch, Zubair W.

    2013-01-01

    Background: Methanol fixed and paraffin embedded (MFPE) cellblocks are an essential cytology preparation. However, MFPE cellblocks often contain limited material, and their relatively small size has caused them to be overlooked in biomarker discovery. Advances in the field of molecular biotechnology have made it possible to extract proteins from formalin fixed and paraffin embedded (FFPE) tissue blocks. In contrast, there are no established methods for extracting proteins from MFPE cellblocks. We investigated a commonly available CHAPS (3-[(3-cholamidopropyl) dimethylammonio]-1-propanesulfonate) buffer, as well as two commercially available Qiagen® kits, and compared their effectiveness at extracting protein from MFPE tissue. Materials and Methods: MFPE blocks were made by the Cellient™ automated system using human tissue specimens from normal and malignant specimens collected in ThinPrep™ vials. Protein was extracted from Cellient methanol fixed and paraffin embedded blocks with the CHAPS buffer method as well as the FFPE and Mammalian Qiagen® kits. Results: Comparison of protein yields demonstrated the effectiveness of the various protein extraction methods on MFPE cellblocks. Conclusion: In the current era of minimally invasive techniques that obtain minimal amounts of tissue for diagnostic and prognostic purposes, the use of commercial and lab-made buffers on low-weight MFPE scrapings obtained by the Cellient® processor opens new possibilities for protein biomarker research. PMID:24403950

  20. Parameters optimization using experimental design for headspace solid phase micro-extraction analysis of short-chain chlorinated paraffins in waters under the European water framework directive.

    PubMed

    Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P

    2015-08-07

    The water framework directives (WFD 2000/60/EC and 2013/39/EU) require European countries to monitor the quality of their aquatic environment. Among the priority hazardous substances targeted by the WFD, short chain chlorinated paraffins C10-C13 (SCCPs) still represent an analytical challenge, because few laboratories are currently able to analyze them. Moreover, an annual average quality standard as low as 0.4 μg/L was set for SCCPs in surface water. Therefore, to test for compliance, sensitive and reliable methods for analyzing SCCPs in water are required. The aim of this work was to address this issue by evaluating automated solid phase micro-extraction (SPME) combined on line with gas chromatography-electron capture negative ionization mass spectrometry (GC/ECNI-MS). Fiber polymer, extraction mode, ionic strength, extraction temperature and time were the most significant thermodynamic and kinetic parameters studied. To determine suitable working ranges for the factors, the extraction conditions were first studied using a classical one-factor-at-a-time approach. Then a mixed-level factorial 3×2³ design was performed, in order to identify the most influential parameters and to estimate potential interaction effects between them. The most influential factors, i.e. extraction temperature and duration, were optimized by using a second experimental design, in order to maximize the chromatographic response. At the close of the study, a method involving headspace SPME (HS-SPME) coupled to GC/ECNI-MS is proposed. The optimum extraction conditions were a sample temperature of 90 °C, an extraction time of 80 min, the 100 μm PDMS fiber, and desorption at 250 °C for 2 min. A linear response from 0.2 ng/mL to 10 ng/mL with r² = 0.99, and limits of detection and quantification of 4 pg/mL and 120 pg/mL, respectively, in MilliQ water were achieved. The method proved to be applicable to different types of waters and shows key advantages, such as simplicity, automation and sensitivity, required for the monitoring programs linked to the WFD. Copyright © 2015 Elsevier B.V. All rights reserved.
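
    The mixed-level 3×2³ screening design mentioned above enumerates every combination of one three-level factor and three two-level factors, giving 24 runs. A sketch with illustrative, assumed factor names and levels (not the study's exact settings):

        from itertools import product

        temperature = [40, 60, 90]                      # three-level factor (°C), assumed
        two_level = {"salt": ("no", "yes"),             # assumed two-level factors
                     "mode": ("direct", "headspace"),
                     "fiber": ("PDMS", "PDMS/DVB")}

        runs = [dict(temperature=t, **dict(zip(two_level, levels)))
                for t, levels in product(temperature, product(*two_level.values()))]
        print(len(runs))  # 3 * 2**3 = 24 experimental runs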

  1. Automated extraction of radiation dose information for CT examinations.

    PubMed

    Cook, Tessa S; Zimmerman, Stefan; Maidment, Andrew D A; Kim, Woojin; Boonn, William W

    2010-11-01

    Exposure to radiation as a result of medical imaging is currently in the spotlight, receiving attention from Congress as well as the lay press. Although scanner manufacturers are moving toward including effective dose information in the Digital Imaging and Communications in Medicine headers of imaging studies, there is a vast repository of retrospective CT data at every imaging center that stores dose information in an image-based dose sheet. As such, it is difficult for imaging centers to participate in the ACR's Dose Index Registry. The authors have designed an automated extraction system to query their PACS archive and parse CT examinations to extract the dose information stored in each dose sheet. First, an open-source optical character recognition program processes each dose sheet and converts the information to American Standard Code for Information Interchange (ASCII) text. Each text file is parsed, and radiation dose information is extracted and stored in a database which can be queried using an existing pathology and radiology enterprise search tool. Using this automated extraction pipeline, it is possible to perform dose analysis on the >800,000 CT examinations in the PACS archive and generate dose reports for all of these patients. It is also possible to more effectively educate technologists, radiologists, and referring physicians about exposure to radiation from CT by generating report cards for interpreted and performed studies. The automated extraction pipeline enables compliance with the ACR's reporting guidelines and greater awareness of radiation dose to patients, thus resulting in improved patient care and management. Copyright © 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
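
    A minimal sketch of such a pipeline, with pytesseract standing in for the unnamed open-source OCR program, and a hypothetical regular expression and SQLite schema for the parsing and storage steps (real dose-sheet layouts vary by scanner vendor):

        import re
        import sqlite3
        import pytesseract
        from PIL import Image

        # Hypothetical dose-sheet row: series number, CTDIvol, DLP.
        DOSE_ROW = re.compile(r"(?P<series>\d+)\s+(?P<ctdi>\d+\.\d+)\s+(?P<dlp>\d+\.\d+)")

        def extract_dose(image_path, db_path="dose.db"):
            text = pytesseract.image_to_string(Image.open(image_path))  # dose sheet -> ASCII
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS dose (series INT, ctdi REAL, dlp REAL)")
            for m in DOSE_ROW.finditer(text):
                con.execute("INSERT INTO dose VALUES (?, ?, ?)",
                            (int(m["series"]), float(m["ctdi"]), float(m["dlp"])))
            con.commit()
            con.close()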

  2. Automated extraction of direct, reactive, and vat dyes from cellulosic fibers for forensic analysis by capillary electrophoresis.

    PubMed

    Dockery, C R; Stefan, A R; Nieuwland, A A; Roberson, S N; Baguley, B M; Hendrix, J E; Morgan, S L

    2009-08-01

    Systematic designed experiments were employed to find the optimum conditions for extraction of direct, reactive, and vat dyes from cotton fibers prior to forensic characterization. Automated microextractions were coupled with measurements of extraction efficiencies on a microplate reader UV-visible spectrophotometer to enable rapid screening of extraction efficiency as a function of solvent composition. Solvent extraction conditions were also developed to be compatible with subsequent forensic characterization of extracted dyes by capillary electrophoresis with UV-visible diode array detection. The capillary electrophoresis electrolyte successfully used in this work consists of 5 mM ammonium acetate in 40:60 acetonitrile-water at pH 9.3, with the addition of sodium dithionite reducing agent to facilitate analysis of vat dyes. The ultimate goal of these research efforts is enhanced discrimination of trace fiber evidence by analysis of extracted dyes.

  3. Fully 3D-Printed Preconcentrator for Selective Extraction of Trace Elements in Seawater.

    PubMed

    Su, Cheng-Kuan; Peng, Pei-Jin; Sun, Yuh-Chang

    2015-07-07

    In this study, we used a stereolithographic 3D printing technique and polyacrylate polymers to manufacture a solid phase extraction preconcentrator for the selective extraction of trace elements and the removal of unwanted salt matrices, enabling accurate and rapid analyses of trace elements in seawater samples when combined with a quadrupole-based inductively coupled plasma mass spectrometer. To maximize the extraction efficiency, we evaluated the effect of filling the extraction channel with ordered cuboids to improve liquid mixing. Upon automation of the system and optimization of the method, the device allowed highly sensitive and interference-free determination of Mn, Ni, Zn, Cu, Cd, and Pb, with detection limits comparable with those of most conventional methods. The system's analytical reliability was further confirmed through analyses of reference materials and spike analyses of real seawater samples. This study suggests that 3D printing can be a powerful tool for building multilayer fluidic manipulation devices, simplifying the construction of complex experimental components, and facilitating the operation of sophisticated analytical procedures for most sample pretreatment applications.

  4. Object-based classification of semi-arid wetlands

    NASA Astrophysics Data System (ADS)

    Halabisky, Meghan; Moskal, L. Monika; Hall, Sonia A.

    2011-01-01

    Wetlands are valuable ecosystems that benefit society. However, throughout history wetlands have been converted to other land uses. For this reason, timely wetland maps are necessary for developing strategies to protect wetland habitat. The goal of this research was to develop a time-efficient, automated, low-cost method to map wetlands in a semi-arid landscape that could be scaled up for use at a county or state level, and could lay the groundwork for expanding to forested areas. Therefore, it was critical that the research project contain two components: accurate automated feature extraction and the use of low-cost imagery. For that reason, we tested the effectiveness of geographic object-based image analysis (GEOBIA) to delineate and classify wetlands using freely available true color aerial photographs provided through the National Agriculture Inventory Program. The GEOBIA method produced an overall accuracy of 89% (κ̂ = 0.81), despite the absence of infrared spectral data. GEOBIA provides the automation that can save significant resources when scaled up while still providing sufficient spatial resolution and accuracy to be useful to state and local resource managers and policymakers.

  5. Unsupervised Pathological Area Extraction using 3D T2 and FLAIR MR Images

    NASA Astrophysics Data System (ADS)

    Dvořák, Pavel; Bartušek, Karel; Smékal, Zdeněk

    2014-12-01

    This work discusses fully automated extraction of brain tumor and edema in 3D MR volumes. The goal of this work is the extraction of the whole pathological area using an algorithm that requires no human intervention. For good visibility of these kinds of tissues, both T2-weighted and FLAIR images were used. The proposed method was tested on 80 MR volumes of the publicly available BRATS database, which contains high- and low-grade gliomas, both real and simulated. The performance was evaluated by the Dice coefficient, with the results differentiated between high- and low-grade and between real and simulated gliomas. The method reached promising results for all of the combinations of images: real high grade (0.73 ± 0.20), real low grade (0.81 ± 0.06), simulated high grade (0.81 ± 0.14), and simulated low grade (0.81 ± 0.04).
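
    The Dice coefficient used for evaluation above is twice the overlap between the automated and reference segmentations divided by their total size; a minimal implementation:

        import numpy as np

        def dice(seg, ref):
            seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
            inter = np.logical_and(seg, ref).sum()
            return 2.0 * inter / (seg.sum() + ref.sum())  # 1.0 = perfect overlap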

  6. Automated reverse engineering of nonlinear dynamical systems

    PubMed Central

    Bongard, Josh; Lipson, Hod

    2007-01-01

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated “reverse engineering” approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future. PMID:17553966
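
    The paper's evolutionary search is too involved to sketch here, but the same goal, recovering ODE right-hand sides from time series, can be illustrated with a different, later technique: sparse linear regression over a library of candidate terms (SINDy-style). This is offered only as an illustration of the "equations from data" idea; the library and threshold below are assumptions.

        import numpy as np

        def sparse_ode_fit(X, dXdt, thresh=0.05, n_iter=10):
            """X: (time, vars) samples; dXdt: estimated derivatives, same shape."""
            # Candidate library: constant, linear, quadratic, and cross terms with x1.
            lib = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
            coefs = np.linalg.lstsq(lib, dXdt, rcond=None)[0]
            for _ in range(n_iter):                        # sequential thresholding
                small = np.abs(coefs) < thresh
                coefs[small] = 0.0
                for k in range(dXdt.shape[1]):             # refit surviving terms
                    big = ~small[:, k]
                    coefs[big, k] = np.linalg.lstsq(lib[:, big], dXdt[:, k], rcond=None)[0]
            return coefs  # one sparse column of library coefficients per variable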

  7. Automated choroidal segmentation method in human eye with 1050nm optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Cindy; Wang, Ruikang K.

    2014-02-01

    Choroidal thickness (ChT), defined as the distance between the retinal pigment epithelium (RPE) and the choroid-sclera interface (CSI), is highly correlated with various ocular disorders such as high myopia, diabetic retinopathy, and central serous chorioretinopathy. Long-wavelength optical coherence tomography (OCT) can penetrate to the CSI, making measurement of the ChT possible. The ability to accurately segment the CSI and RPE is important in extracting clinical information. However, automated CSI segmentation is challenging due to the weak boundary in the lower choroid and inconsistent texture with varied blood vessels. We propose an automated algorithm based on K-means clustering, which is effective in segmenting the CSI and RPE. The performance of the method was evaluated using 531 frames from 4 normal subjects. The RPE and CSI segmentation time was about 0.3 seconds per frame, and the average time was around 0.5 seconds per frame with correction among frames, which is faster than previously reported algorithms. The results from the proposed method are consistent with manual segmentation results. Further investigation includes optimizing the algorithm to cover more OCT images captured from patients and increasing the processing speed and robustness of the segmentation method.
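
    A minimal sketch of K-means clustering applied to an OCT B-scan, as a stand-in for the paper's algorithm; the feature choice (pixel intensity plus normalized depth) and the cluster count are assumptions, since the abstract does not specify them:

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_bscan(bscan, k=3):
            # bscan: 2D OCT intensity frame (depth x lateral). Cluster pixels on
            # intensity plus normalized depth so clusters stay roughly layered.
            depth = np.repeat(np.arange(bscan.shape[0])[:, None], bscan.shape[1], axis=1)
            feats = np.column_stack([bscan.ravel(), depth.ravel() / bscan.shape[0]])
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
            return labels.reshape(bscan.shape)  # boundaries between the choroid and
                                                # sclera clusters approximate the CSI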

  8. Automated reverse engineering of nonlinear dynamical systems.

    PubMed

    Bongard, Josh; Lipson, Hod

    2007-06-12

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated "reverse engineering" approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future.

  9. Assessment of commercial NLP engines for medication information extraction from dictated clinical notes.

    PubMed

    Jagannathan, V; Mullett, Charles J; Arbogast, James G; Halbritter, Kevin A; Yellapragada, Deepthi; Regulapati, Sushmitha; Bandaru, Pavani

    2009-04-01

    We assessed the current state of commercial natural language processing (NLP) engines for their ability to extract medication information from textual clinical documents. Two thousand de-identified discharge summaries and family practice notes were submitted to four commercial NLP engines with the request to extract all medication information. The four sets of returned results were combined to create a comparison standard which was validated against a manual, physician-derived gold standard created from a subset of 100 reports. Once validated, the individual vendor results for medication names, strengths, route, and frequency were compared against this automated standard with precision, recall, and F measures calculated. Compared with the manual, physician-derived gold standard, the automated standard was successful at accurately capturing medication names (F measure=93.2%), but performed less well with strength (85.3%) and route (80.3%), and relatively poorly with dosing frequency (48.3%). Moderate variability was seen in the strengths of the four vendors. The vendors performed better with the structured discharge summaries than with the clinic notes in an analysis comparing the two document types. Although automated extraction may serve as the foundation for a manual review process, it is not ready to automate medication lists without human intervention.
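
    The precision, recall, and F measures quoted above combine true-positive (tp), false-positive (fp), and false-negative (fn) counts in the standard way:

        def precision_recall_f(tp, fp, fn):
            precision = tp / (tp + fp)   # fraction of extracted items that are correct
            recall = tp / (tp + fn)      # fraction of true items that were extracted
            f = 2 * precision * recall / (precision + recall)
            return precision, recall, f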

  10. A semi-automated Raman micro-spectroscopy method for morphological and chemical characterizations of microplastic litter.

    PubMed

    L, Frère; I, Paul-Pont; J, Moreau; P, Soudant; C, Lambert; A, Huvet; E, Rinnert

    2016-12-15

    Every step of microplastic analysis (collection, extraction and characterization) is time-consuming, representing an obstacle to the implementation of large scale monitoring. This study proposes a semi-automated Raman micro-spectroscopy method coupled to static image analysis that allows the screening of a large quantity of microplastic in a time-effective way with minimal machine operator intervention. The method was validated using 103 particles collected at the sea surface spiked with 7 standard plastics: morphological and chemical characterization of particles was performed in < 3 h. The method was then applied to a larger environmental sample (n = 962 particles). The identification rate was 75% and significantly decreased as a function of particle size. Microplastics represented 71% of the identified particles and significant size differences were observed: polystyrene was mainly found in the 2-5 mm range (59%), polyethylene in the 1-2 mm range (40%) and polypropylene in the 0.335-1 mm range (42%). Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Automated Identification of the Heart Wall Throughout the Entire Cardiac Cycle Using Optimal Cardiac Phase for Extracted Features

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.

  12. Automated detection of neovascularization for proliferative diabetic retinopathy screening.

    PubMed

    Roychowdhury, Sohini; Koozekanani, Dara D; Parhi, Keshab K

    2016-08-01

    Neovascularization is the primary manifestation of proliferative diabetic retinopathy (PDR) that can lead to acquired blindness. This paper presents a novel method that classifies neovascularizations in the 1-optic disc (OD) diameter region (NVD) and elsewhere (NVE) separately to achieve low false positive rates of neovascularization classification. First, the OD region and blood vessels are extracted. Next, the major blood vessel segments in the 1-OD diameter region are classified for NVD, and minor blood vessel segments elsewhere are classified for NVE. For NVD and NVE classifications, optimal region-based feature sets of 10 and 6 features, respectively, are used. The proposed method achieves classification sensitivity, specificity and accuracy for NVD and NVE of 74%, 98.2%, 87.6%, and 61%, 97.5%, 92.1%, respectively. Also, the proposed method achieves 86.4% sensitivity and 76% specificity for screening images with PDR from public and local data sets. Thus, the proposed NVD and NVE detection methods can play a key role in automated screening and prioritization of patients with diabetic retinopathy.

  13. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  14. Localization-based super-resolution imaging meets high-content screening.

    PubMed

    Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste

    2017-12-01

    Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.

  15. Metabolite profiling on apple volatile content based on solid phase microextraction and gas-chromatography time of flight mass spectrometry.

    PubMed

    Aprea, Eugenio; Gika, Helen; Carlin, Silvia; Theodoridis, Georgios; Vrhovsek, Urska; Mattivi, Fulvio

    2011-07-15

    A headspace SPME GC-TOF-MS method was developed for the acquisition of metabolite profiles of apple volatiles. As a first step, an experimental design was applied to find out the most appropriate conditions for the extraction of apple volatile compounds by SPME. The selected SPME method was applied in profiling of four different apple varieties by GC-EI-TOF-MS. Full scan GC-MS data were processed by MarkerLynx software for peak picking, normalisation, alignment and feature extraction. Advanced chemometric/statistical techniques (PCA and PLS-DA) were used to explore data and extract useful information. Characteristic markers of each variety were successively identified using the NIST library thus providing useful information for variety classification. The developed HS-SPME sampling method is fully automated and proved useful in obtaining the fingerprint of the volatile content of the fruit. The described analytical protocol can aid in further studies of the apple metabolome. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Improving treatment plan evaluation with automation.

    PubMed

    Covington, Elizabeth L; Chen, Xiaoping; Younge, Kelly C; Lee, Choonik; Matuszak, Martha M; Kessler, Marc L; Keranen, Wayne; Acosta, Eduardo; Dougherty, Ashley M; Filpansick, Stephanie E; Moran, Jean M

    2016-11-08

    The goal of this work is to evaluate the effectiveness of the Plan-Checker Tool (PCT), which was created to improve first-time plan quality, reduce patient delays, increase the efficiency of our electronic workflow, and standardize and automate the physics plan review in the treatment planning system (TPS). PCT uses an application programming interface to check and compare data from the TPS and treatment management system (TMS). PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user as part of a plan readiness check for treatment. Prior to and during PCT development, errors identified during the physics review and causes of patient treatment start delays were tracked to prioritize which checks should be automated. Nineteen of 33 checklist items were automated, with data extracted by PCT. There was a 60% reduction in the number of patient delays in the six months after PCT release. PCT was successfully implemented for use on all external beam treatment plans in our clinic. While the number of errors found during the physics check did not decrease, automation of checks increased the visibility of errors during the physics check, which led to decreased patient delays. The methods used here can be applied to any TMS and TPS that allows queries of the database. © 2016 The Authors.

  17. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) shocks, (2) vortex cores, (3) regions of recirculation, (4) boundary layers, (5) wakes. Three papers and an initial specification for the FX (Fluid eXtraction) tool kit Programmer's guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.
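
    As one concrete example of the velocity-gradient-tensor machinery referenced in the third paper, the classical Q-criterion flags rotation-dominated regions (candidate vortex cores) where Q > 0. This is a standard construction offered for illustration, not the FX tool kit's code:

        import numpy as np

        def q_criterion(u, v, w, dx=1.0):
            """u, v, w: 3D velocity components on a uniform grid with spacing dx."""
            grads = [np.gradient(c, dx) for c in (u, v, w)]          # per-component gradients
            J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)  # velocity Jacobian
            S = 0.5 * (J + np.swapaxes(J, -1, -2))                   # strain-rate tensor
            O = 0.5 * (J - np.swapaxes(J, -1, -2))                   # rotation tensor
            # Q > 0 where rotation dominates strain: candidate vortex-core regions.
            return 0.5 * (np.sum(O**2, axis=(-2, -1)) - np.sum(S**2, axis=(-2, -1)))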

  18. ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning.

    PubMed

    Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta

    2016-05-01

    Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in Bright field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest from background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to the recognition between five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Automated Passive Capillary Lysimeters for Estimating Water Drainage in the Vadose Zone

    NASA Astrophysics Data System (ADS)

    Jabro, J.; Evans, R.

    2009-04-01

    In this study, we demonstrated and evaluated the performance and accuracy of automated PCAP lysimeters that we designed for in-situ continuous measurement and estimation of drainage water below the rootzone of a sugarbeet-potato-barley rotation under two irrigation frequencies. Twelve automated PCAPs, each with a sampling surface 31 cm wide by 91 cm long and a height of 87 cm, were placed 90 cm below the soil surface in a Lihen sandy loam. Our state-of-the-art design incorporated Bluetooth wireless technology to enable an automated datalogger to transmit drainage water data to a remote host every 15 minutes, and had a greater efficiency than other types of lysimeters. It also offered a significantly larger coverage area (2700 cm2) than similarly designed vadose zone lysimeters. The cumulative manually extracted drainage water was compared with the cumulative volume of drainage water recorded by the datalogger from the tipping bucket using several statistical methods. Our results indicated that our automated PCAPs are accurate and provide a convenient means of estimating water drainage in the vadose zone without the need for costly and time-consuming manual support systems.

  20. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579

  1. Crowdsourcing the Measurement of Interstate Conflict

    PubMed Central

    2016-01-01

    Much of the data used to measure conflict is extracted from news reports. This is typically accomplished using either expert coders to quantify the relevant information or machine coders to automatically extract data from documents. Although expert coding is costly, it produces quality data. Machine coding is fast and inexpensive, but the data are noisy. To diminish the severity of this tradeoff, we introduce a method for analyzing news documents that uses crowdsourcing, supplemented with computational approaches. The new method is tested on documents about Militarized Interstate Disputes, and its accuracy ranges between about 68 and 76 percent. This is shown to be a considerable improvement over automated coding, and to cost less and be much faster than expert coding. PMID:27310427

  2. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  3. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  4. An Investigation of the "e-rater"® Automated Scoring Engine's Grammar, Usage, Mechanics, and Style Microfeatures and Their Aggregation Model. Research Report. ETS RR-17-04

    ERIC Educational Resources Information Center

    Chen, Jing; Zhang, Mo; Bejar, Isaac I.

    2017-01-01

    Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…

  5. Critical comparison of the on-line and off-line molecularly imprinted solid-phase extraction of patulin coupled with liquid chromatography.

    PubMed

    Lhotská, Ivona; Holznerová, Anežka; Solich, Petr; Šatínský, Dalibor

    2017-12-01

    Detecting trace amounts of mycotoxin contamination requires sensitive and selective analytical tools. Improving the selectivity of sample pretreatment steps, including new and modern extraction techniques, is one way to achieve this. Molecularly imprinted polymers as selective extraction sorbents undoubtedly meet these criteria. The presented work focuses on the hyphenation of on-line molecularly imprinted solid-phase extraction with a chromatography system using a column-switching approach. A critical comparison with a simultaneously developed off-line extraction procedure was made, the pros and cons of each method were evaluated, and the reliability of both methods was determined on real sample analysis. Both high-performance liquid chromatography methods, using off-line extraction on a molecularly imprinted polymer and an on-line column-switching approach, were validated, and the validation results were compared against each other. Although automation leads to significant time savings and fewer human errors, and requires no handling of toxic solvents, it reached worse detection limits (15 versus 6 μg/L), worse recovery values (68.3-123.5 versus 81.2-109.9%), and worse efficiency throughout the entire clean-up process in comparison with the off-line extraction method. The difficulties encountered, the compromises made during the optimization of on-line coupling, and their critical evaluation are presented in detail. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Design of automated oil sludge treatment unit

    NASA Astrophysics Data System (ADS)

    Chukhareva, N.; Korotchenko, T.; Yurkin, A.

    2015-11-01

    The article provides a feasibility study of contemporary oil sludge treatment methods. The basic parameters of a new resource-efficient oil sludge treatment unit that allows extracting as much oil as possible and disposing of the other components efficiently have been outlined. Based on the calculation results, it has been revealed that, in order to reduce the cost of the treatment unit and the expenses related to sludge disposal, it is essential to apply various combinations of the existing treatment methods.

  7. Determination of phosphonoformate (foscarnet) in calf and human serum by automated solid-phase extraction and high-performance liquid chromatography with amperometric detection.

    PubMed

    Ba, B B; Corniot, A G; Ducint, D; Breilh, D; Grellet, J; Saux, M C

    1999-03-05

    An isocratic high-performance liquid chromatographic method with automated solid-phase extraction has been developed to determine foscarnet in calf and human serum. Extraction was performed with an anion exchanger, SAX, from which the analyte was eluted with a 50 mM potassium pyrophosphate buffer, pH 8.4. The mobile phase consisted of methanol-40 mM disodium hydrogenphosphate, pH 7.6, containing 0.25 mM tetrahexylammonium hydrogensulphate (25:75, v/v). The analyte was separated on a polyether ether ketone (PEEK) column, 150x4.6 mm I.D., packed with Kromasil 100 C18, 5 μm. Amperometric detection allowed a quantification limit of 15 μM. The assay was linear from 15 to 240 μM. The recovery of foscarnet from calf serum ranged from 60.65 ± 1.89% at 15 μM to 67.45 ± 1.24% at 200 μM. The coefficient of variation was ≤ 3.73% for intra-assay precision and ≤ 7.24% for inter-assay precision for calf serum concentrations ranging from 15 to 800 μM. For the same samples, the deviation from the nominal value ranged from -8.97% to +5.40% for same-day accuracy and from -4.50% to +2.77% for day-to-day accuracy. Selectivity was satisfactory towards potential co-medications. Replacement of human serum by calf serum for calibration standards and quality control samples was validated. Automation brought greater protection against biohazards and increased productivity for routine monitoring and pharmacokinetic studies.

  8. Automated multisyringe stir bar sorptive extraction using robust montmorillonite/epoxy-coated stir bars.

    PubMed

    Ghani, Milad; Saraji, Mohammad; Maya, Fernando; Cerdà, Víctor

    2016-05-06

    Herein we present a simple, rapid and low cost strategy for the preparation of robust stir bar coatings based on the combination of montmorillonite with epoxy resin. The composite stir bar was implemented in a novel automated multisyringe stir bar sorptive extraction system (MS-SBSE), and applied to the extraction of four chlorophenols (4-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol and pentachlorophenol) as model compounds, followed by high performance liquid chromatography-diode array detection. The different experimental parameters of the MS-SBSE, such as sample volume, selection of the desorption solvent, desorption volume, desorption time, sample solution pH, salt effect and extraction time, were studied. Under the optimum conditions, the detection limits were between 0.02 and 0.34 μg/L. Relative standard deviations (RSD) of the method for the analytes at the 10 μg/L concentration level ranged from 3.5% to 4.1% (intra-day RSD) and from 3.9% to 4.3% (inter-day RSD at the 50 μg/L concentration level). Batch-to-batch reproducibility for three different stir bars was 4.6-5.1%. The enrichment factors were between 30 and 49. In order to investigate the capability of the developed technique for real sample analysis, well water, wastewater and leachates from a solid waste treatment plant were satisfactorily analyzed. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Automated segmentation of middle hepatic vein in non-contrast x-ray CT images based on an atlas-driven approach

    NASA Astrophysics Data System (ADS)

    Kitagawa, Teruhiko; Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kondo, Hiroshi; Kanematsu, Masayuki; Hoshi, Hiroaki

    2008-03-01

    In order to support the diagnosis of hepatic diseases, understanding the anatomical structures of hepatic lobes and hepatic vessels is necessary. Although viewing and understanding the hepatic vessels in contrast-enhanced CT images is easy, observing the hepatic vessels in non-contrast X-ray CT images, which are widely used for screening purposes, is difficult. We are developing a computer-aided diagnosis (CAD) system to support liver diagnosis based on non-contrast X-ray CT images. This paper proposes a new approach to segment the middle hepatic vein (MHV), a key structure (landmark) for separating the liver region into left and right lobes. Extraction and classification of hepatic vessels are difficult in non-contrast X-ray CT images because the contrast between hepatic vessels and other liver tissues is low. Our approach uses an atlas-driven method with the following three stages: (1) construction of liver atlases of left and right hepatic lobes using a learning dataset; (2) fully automated enhancement and extraction of hepatic vessels in liver regions; (3) extraction of the MHV based on the results of (1) and (2). The proposed approach was applied to 22 normal liver cases of non-contrast X-ray CT images. The preliminary results show that the proposed approach succeeded in extracting the MHV in 14 of the 22 cases.

  10. Evaluation of three automated nucleic acid extraction systems for identification of respiratory viruses in clinical specimens by multiplex real-time PCR.

    PubMed

    Kim, Yoonjung; Han, Mi-Soon; Kim, Juwon; Kwon, Aerin; Lee, Kyung-A

    2014-01-01

    A total of 84 nasopharyngeal swab specimens were collected from 84 patients. Viral nucleic acid was extracted by three automated extraction systems: QIAcube (Qiagen, Germany), EZ1 Advanced XL (Qiagen), and MICROLAB Nimbus IVD (Hamilton, USA). Fourteen RNA viruses and two DNA viruses were detected using the Anyplex II RV16 Detection kit (Seegene, Republic of Korea). The EZ1 Advanced XL system demonstrated the best analytical sensitivity for all three viral strains. The nucleic acids extracted by EZ1 Advanced XL showed higher positive rates for virus detection than the others. Meanwhile, the MICROLAB Nimbus IVD system comprised fully automated steps from nucleic acid extraction to PCR setup, which could reduce human error. For the nucleic acids recovered from nasopharyngeal swab specimens, the QIAcube system showed the fewest false negative results and the best concordance rate, and it may be more suitable for detecting various viruses including RNA and DNA virus strains. Each system showed different sensitivity and specificity for detection of certain viral pathogens and demonstrated different characteristics such as turnaround time and sample capacity. Therefore, these factors should be considered when new nucleic acid extraction systems are introduced to the laboratory.

  11. Efficient quantification of water content in edible oils by headspace gas chromatography with vapour phase calibration.

    PubMed

    Xie, Wei-Qi; Gong, Yi-Xian; Yu, Kong-Xian

    2018-06-01

    An automated and accurate headspace gas chromatographic (HS-GC) technique was investigated for rapidly quantifying water content in edible oils. In this method, multiple headspace extraction (MHE) procedures were used to analyse the integrated water content from the edible oil sample. A simple vapour phase calibration technique with an external vapour standard was used to calibrate both the water content in the gas phase and the total weight of water in the edible oil sample, after which the water in edible oils can be quantified. The data showed that the relative standard deviation of the present HS-GC method in the precision test was less than 1.13%, and the relative differences between the new method and a reference method (i.e. the oven-drying method) were no more than 1.62%. The present HS-GC method is automated, accurate and efficient, and can be a reliable tool for quantifying water content in edible oil related products and research. © 2017 Society of Chemical Industry.
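
    In multiple headspace extraction, successive extraction steps deplete the analyte, so the peak areas decay geometrically, A_i = A_1 q^(i-1); summing the series gives the total response A_1/(1 - q), which is what the integrated water content rests on. A minimal sketch of that calculation (the regression route below is a common choice, not necessarily the authors'):

        import numpy as np

        def mhe_total_area(areas):
            """areas: peak areas A1, A2, ... from consecutive headspace extractions."""
            i = np.arange(len(areas))
            slope, intercept = np.polyfit(i, np.log(areas), 1)  # ln A_i = ln A1 + i ln q
            q, a1 = np.exp(slope), np.exp(intercept)
            return a1 / (1.0 - q)  # geometric-series sum = total analyte response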

  12. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
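
    The object-linking step described above, connecting features in successive 2D slices that fall within a threshold radius of each other into a directed graph, can be sketched as follows; the centroid representation and radius are illustrative assumptions:

        import numpy as np

        def link_slices(slices, radius=2.0):
            # slices: list (per depth) of arrays of 2D feature centroids, shape (k, 2).
            # Build a directed graph: node (z, i) -> nodes in slice z+1 within radius.
            graph = {}
            for z in range(len(slices) - 1):
                a = np.asarray(slices[z], dtype=float).reshape(-1, 2)
                b = np.asarray(slices[z + 1], dtype=float).reshape(-1, 2)
                for i, p in enumerate(a):
                    near = np.where(np.linalg.norm(b - p, axis=1) <= radius)[0] if len(b) else []
                    graph[(z, i)] = [(z + 1, int(j)) for j in near]
            return graph  # chains through the graph trace candidate 3D objects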

  13. Automated indexing for making of a newspaper article database

    NASA Astrophysics Data System (ADS)

    Kamio, Tatsuo

    Automated indexing has been widely employed in the process of making newspaper article databases. It is essential to speed up the compilation of these databases, given the large number of articles that come out daily, and to save the manpower involved, with the aid of computers. However, indexed terms extracted by current automated indexing systems have no links with subject analysis, so they are not considered keywords in a strict sense. The system of Nihon Keizai Shimbun KK is able to justify keywords to a certain extent based on two clues, by using characteristics peculiar to newspaper articles: (1) the location at which the extracted term occurred, and (2) whether or not the subject area of the article corresponds to the thesaurus class of the extracted term. An experiment on assigning keywords that do not occur in the articles was also conducted, with fairly good results.

  14. Topography-Assisted Electromagnetic Platform for Blood-to-PCR in a Droplet

    PubMed Central

    Chiou, Chi-Han; Shin, Dong Jin; Zhang, Yi; Wang, Tza-Huei

    2013-01-01

    This paper presents an electromagnetically actuated platform for automated sample preparation and detection of nucleic acids. The proposed platform integrates nucleic acid extraction using silica-coated magnetic particles with real-time polymerase chain reaction (PCR) on a single cartridge. Extraction of genomic material was automated by manipulating magnetic particles in droplets using a series of planar coil electromagnets assisted by topographical features, enabling efficient fluidic processing over a variety of buffers and reagents. The functionality of the platform was demonstrated by performing nucleic acid extraction from whole blood, followed by real-time PCR detection of KRAS oncogene. Automated sample processing from whole blood to PCR-ready droplet was performed in 15 minutes. We took a modular approach of decoupling the modules of magnetic manipulation and optical detection from the device itself, enabling a low-complexity cartridge that operates in tandem with simple external instruments. PMID:23835223

  15. Automated feature extraction for retinal vascular biometry in zebrafish using OCT angiography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; Rao, Gopikrishna M.; Desai, Vineet; Tao, Yuankai K.

    2017-02-01

    Zebrafish have been identified as an ideal model for angiogenesis because of anatomical and functional similarities with other vertebrates. The scale and complexity of zebrafish assays are limited by the need to manually treat and serially screen animals, and recent technological advances have focused on automation and improving throughput. Here, we use optical coherence tomography (OCT) and OCT angiography (OCT-A) to perform noninvasive, in vivo imaging of retinal vasculature in zebrafish. OCT-A summed voxel projections were low pass filtered and skeletonized to create an en face vascular map prior to connectivity analysis. Vascular segmentation was referenced to the optic nerve head (ONH), which was identified by automatically segmenting the retinal pigment epithelium boundary on the OCT structural volume. The first vessel branch generation was identified as skeleton segments with branch points closest to the ONH, and subsequent generations were found iteratively by expanding the search space outwards from the ONH. Biometric parameters, including length, curvature, and branch angle of each vessel segment were calculated and grouped by branch generation. Despite manual handling and alignment of each animal over multiple time points, we observe distinct qualitative patterns that enable unique identification of each eye from individual animals. We believe this OCT-based retinal biometry method can be applied for automated animal identification and handling in high-throughput organism-level pharmacological assays and genetic screens. In addition, these extracted features may enable high-resolution quantification of longitudinal vascular changes as a method for studying zebrafish models of retinal neovascularization and vascular remodeling.
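
    A minimal sketch of the en face vascular-map step described above (low-pass filter the OCT-A projection, binarize, skeletonize) using scikit-image; the filter size and threshold rule are illustrative assumptions:

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.morphology import skeletonize

        def vascular_skeleton(en_face, smooth=3):
            smoothed = uniform_filter(np.asarray(en_face, float), size=smooth)  # low-pass
            binary = smoothed > smoothed.mean() + smoothed.std()  # simple global threshold
            return skeletonize(binary)  # 1-pixel-wide centerlines for connectivity analysis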

  16. Automated acoustic analysis in detection of spontaneous swallows in Parkinson's disease.

    PubMed

    Golabbakhsh, Marzieh; Rajaei, Ali; Derakhshan, Mahmoud; Sadri, Saeed; Taheri, Masoud; Adibi, Peyman

    2014-10-01

    Acoustic monitoring of swallow frequency has become important as the frequency of spontaneous swallowing can be an index for dysphagia and related complications. In addition, it can be employed as an objective quantification of ingestive behavior. Commonly, swallowing complications are manually detected using videofluoroscopy recordings, which require expensive equipment and exposure to radiation. In this study, a noninvasive automated technique is proposed that uses breath and swallowing recordings obtained via a microphone located over the laryngopharynx. Nonlinear diffusion filters were used in which a scale-space decomposition of recorded sound at different levels extract swallows from breath sounds and artifacts. This technique was compared to manual detection of swallows using acoustic signals on a sample of 34 subjects with Parkinson's disease. A speech language pathologist identified five subjects who showed aspiration during the videofluoroscopic swallowing study. The proposed automated method identified swallows with a sensitivity of 86.67 %, a specificity of 77.50 %, and an accuracy of 82.35 %. These results indicate the validity of automated acoustic recognition of swallowing as a fast and efficient approach to objectively estimate spontaneous swallow frequency.
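
    The nonlinear diffusion filtering used above smooths a signal while preserving sharp transients; the classical Perona-Malik form in one dimension is shown below as an illustration (iteration count, conductance parameter, and time step are assumptions, not the study's settings):

        import numpy as np

        def perona_malik_1d(x, n_iter=100, kappa=0.1, dt=0.2):
            u = np.asarray(x, dtype=float).copy()
            for _ in range(n_iter):
                g = np.diff(u)                    # forward differences
                c = np.exp(-(g / kappa) ** 2)     # edge-stopping diffusivity:
                flux = c * g                      # diffuse weakly across sharp edges
                u[1:-1] += dt * np.diff(flux)     # divergence of the flux
            return u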

  17. AN EVALUATION OF SAMPLE DISPERSION MEDIAS USED WITH ACCELERATED SOLVENT EXTRACTION FOR THE EXTRACTION AND RECOVERY OF ARSENICALS FROM LFB AND DORM-2

    EPA Science Inventory

    An accelerated solvent extraction (ASE) device was evaluated as a semi-automated means for extracting arsenicals from quality control (QC) samples and DORM-2 [standard reference material (SRM)]. Unlike conventional extraction procedures, the ASE requires that the sample be dispe...

  18. Primary secondary amine as a sorbent material in dispersive solid-phase extraction clean-up for the determination of indicator polychlorinated biphenyls in environmental water samples by gas chromatography with electron capture detection.

    PubMed

    Guo, Yuanming; Hu, Hongmei; Li, Tiejun; Xue, Lijian; Zhang, Xiaoning; Zhong, Zhi; Zhang, Yurong; Jin, Yanjian

    2017-08-01

    A simple, rapid, and novel method has been developed and validated for the determination of seven indicator polychlorinated biphenyls in water samples by gas chromatography with electron capture detection. A 1-L water sample containing 30 g of anhydrous sodium sulfate was first liquid-liquid extracted with an automated Jipad-6XB vertical oscillator using n-hexane/dichloromethane (1:1, v/v). The concentrated extract was cleaned up by dispersive solid-phase extraction with 100 mg of primary secondary amine as the sorbent material. The linearity of this method ranged from 1.25 to 100 μg/L, with regression coefficients between 0.9994 and 0.9999. The limits of detection were at the ng/L level, ranging between 0.2 and 0.3 ng/L. The recoveries of the seven spiked polychlorinated biphenyls, determined with an external calibration method at different concentration levels in tap water, lake water, and sea water, were in the ranges of 85-112, 76-116, and 72-108%, respectively, with relative standard deviations of 3.3-4.5, 3.4-5.6, and 3.1-4.8% (n = 5), respectively. The performance of the proposed method was compared with traditional liquid-liquid extraction and solid-phase extraction clean-up methods, and comparable efficiencies were obtained. It is concluded that this method can be successfully applied to the determination of polychlorinated biphenyls in different water samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Purifying Nucleic Acids from Samples of Extremely Low Biomass

    NASA Technical Reports Server (NTRS)

    La Duc, Myron; Osman, Shariff; Venkateswaran, Kasthuri

    2008-01-01

    A new method is able to circumvent the bias to which one commercial DNA extraction method falls prey with regard to the lysing of certain types of microbial cells, a bias that results in a truncated spectrum of microbial diversity. By prefacing the protocol with glass-bead-beating agitation (mechanically lysing a much more encompassing array of cell types and spores), the resulting microbial diversity detection is greatly enhanced. In preliminary studies, a commercially available automated DNA extraction method was effective at delivering total DNA yield, but only the non-hardy members of the bacterial community were represented in clone libraries, suggesting that this method was ineffective at lysing the hardier cell types. To circumvent such a bias, yet another extraction method was devised. In this technique, samples are first subjected to a stringent bead-beating step, and then are processed via standard protocols. Prior to being loaded into extraction vials, samples are placed in micro-centrifuge bead tubes containing 50 μL of commercially produced lysis solution. After inverting several times, the tubes are agitated at maximum speed for two minutes. Following agitation, the tubes are centrifuged at 10,000 x g for one minute. At this time, the aqueous volumes are removed from the bead tubes and loaded into extraction vials to be further processed via the extraction regime. The new method couples two independent methodologies in such a way as to yield the highest concentration of PCR-amplifiable DNA, with consistent and reproducible results and with the most accurate and encompassing report of species richness.

  20. Cest Analysis: Automated Change Detection from Very-High Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.

    2012-08-01

    A fast detection, visualization and assessment of change in areas of crisis or catastrophes are important requirements for the coordination and planning of help. Through the availability of new satellites and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye), new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms has been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential to be a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan). CEST results are compared with a number of standard algorithms for automated change detection such as image difference, image ratio, principal component analysis, the delta cue technique and post-classification change detection. The new combined method shows superior results, with improvements in accuracy averaging between 15% and 45%.
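
    A minimal sketch of the frequency-domain differencing step, assuming two co-registered grayscale images as numpy arrays; the radial band-pass cut-offs are illustrative values, not CEST defaults:

        import numpy as np

        def bandpass_change(img1, img2, r_low=0.05, r_high=0.35):
            """Difference two co-registered images after an FFT band-pass filter.

            r_low / r_high are normalized radial frequency cut-offs (assumed values).
            """
            rows, cols = img1.shape
            u = np.fft.fftfreq(rows)[:, None]
            v = np.fft.fftfreq(cols)[None, :]
            radius = np.sqrt(u**2 + v**2)
            mask = (radius >= r_low) & (radius <= r_high)    # annular band-pass

            filtered = []
            for img in (img1, img2):
                spectrum = np.fft.fft2(img)
                filtered.append(np.real(np.fft.ifft2(spectrum * mask)))
            # differencing of the filtered images; CEST also supports correlation
            # and edge-based comparisons at this point
            return np.abs(filtered[0] - filtered[1])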

  1. Rapid Change Detection Algorithm for Disaster Management

    NASA Astrophysics Data System (ADS)

    Michel, U.; Thunig, H.; Ehlers, M.; Reinartz, P.

    2012-07-01

    This paper focuses on change detection applications in areas where catastrophic events took place which resulted in rapid destruction, especially of manmade objects. Standard methods for automated change detection prove not to be sufficient; therefore a new method was developed and tested. The presented method allows a fast detection and visualization of change in areas of crisis or catastrophes. While new remote sensing methods are often developed without user-oriented aspects in mind, organizations and authorities are frequently unable to use these methods because of an absence of remote sensing know-how. Therefore a semi-automated procedure was developed. Within a transferable framework, the developed algorithm can be implemented for a set of remote sensing data across different investigation areas. Several case studies form the basis for the retrieved results. Through a coarse division into statistical parts and the segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborated Temporal Change Index (TCI), only panchromatic datasets are used to extract areas which are destroyed, areas which were not affected, and areas where rebuilding has already started.

  2. Real-time ultrasound image classification for spine anesthesia using local directional Hadamard features.

    PubMed

    Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N

    2015-06-01

    Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
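
    A sketch of obtaining sequency-ordered Hadamard coefficients for a square power-of-two image patch; scipy supplies the natural-ordered Hadamard matrix, and sorting its rows by the number of sign changes yields the sequency order. The patch size and normalization are illustrative choices, not the authors' settings:

        import numpy as np
        from scipy.linalg import hadamard

        def sequency_hadamard(patch):
            """2D sequency-ordered Hadamard transform of a 2^k x 2^k patch."""
            n = patch.shape[0]
            H = hadamard(n).astype(float)
            # sequency order = rows sorted by their number of sign changes
            sign_changes = np.count_nonzero(np.diff(np.sign(H), axis=1), axis=1)
            H = H[np.argsort(sign_changes)]
            return H @ patch @ H.T / n       # separable 2D transform, 1/n scaling

        coeffs = sequency_hadamard(np.random.rand(32, 32))   # hypothetical patch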

  3. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems. PMID:24964954

  4. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.

    PubMed

    Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk

    2014-06-25

    Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.

  5. Automated image analysis for quantitative fluorescence in situ hybridization with environmental samples.

    PubMed

    Zhou, Zhi; Pons, Marie Noëlle; Raskin, Lutgarde; Zilles, Julie L

    2007-05-01

    When fluorescence in situ hybridization (FISH) analyses are performed with complex environmental samples, difficulties related to the presence of microbial cell aggregates and nonuniform background fluorescence are often encountered. The objective of this study was to develop a robust and automated quantitative FISH method for complex environmental samples, such as manure and soil. The method and duration of sample dispersion were optimized to reduce the interference of cell aggregates. An automated image analysis program that detects cells from 4',6'-diamidino-2-phenylindole (DAPI) micrographs and extracts the maximum and mean fluorescence intensities for each cell from corresponding FISH images was developed with the software Visilog. Intensity thresholds were not consistent even for duplicate analyses, so alternative ways of classifying signals were investigated. In the resulting method, the intensity data were divided into clusters using fuzzy c-means clustering, and the resulting clusters were classified as target (positive) or nontarget (negative). A manual quality control confirmed this classification. With this method, 50.4, 72.1, and 64.9% of the cells in two swine manure samples and one soil sample, respectively, were positive as determined with a 16S rRNA-targeted bacterial probe (S-D-Bact-0338-a-A-18). Manual counting resulted in corresponding values of 52.3, 70.6, and 61.5%, respectively. In two swine manure samples and one soil sample 21.6, 12.3, and 2.5% of the cells were positive with an archaeal probe (S-D-Arch-0915-a-A-20), respectively. Manual counting resulted in corresponding values of 22.4, 14.0, and 2.9%, respectively. This automated method should facilitate quantitative analysis of FISH images for a variety of complex environmental samples.
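
    A compact fuzzy c-means sketch in plain numpy (the study used Visilog; the cluster count, fuzzifier m, and per-cell feature matrix here are illustrative), clustering per-cell intensity features before clusters are classified as target or nontarget:

        import numpy as np

        def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
            """Fuzzy c-means on a feature matrix X (n_samples x n_features)."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per cell
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # hypothetical per-cell features: max and mean FISH intensity per DAPI cell
        X = np.random.rand(500, 2)
        centers, U = fuzzy_cmeans(X)
        labels = U.argmax(axis=1)   # clusters then classified as target / nontarget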

  6. Mining Available Data from the United States Environmental Protection Agency to Support Rapid Life Cycle Inventory Modeling of Chemical Manufacturing.

    PubMed

    Cashman, Sarah A; Meyer, David E; Edelen, Ashley N; Ingwersen, Wesley W; Abraham, John P; Barrett, William M; Gonzalez, Michael A; Randall, Paul M; Ruiz-Mercado, Gerardo; Smith, Raymond L

    2016-09-06

    Demands for quick and accurate life cycle assessments create a need for methods to rapidly generate reliable life cycle inventories (LCI). Data mining is a suitable tool for this purpose, especially given the large amount of available governmental data. These data are typically applied to LCIs on a case-by-case basis. As linked open data becomes more prevalent, it may be possible to automate LCI using data mining by establishing a reproducible approach for identifying, extracting, and processing the data. This work proposes a method for standardizing and eventually automating the discovery and use of publicly available data at the United States Environmental Protection Agency for chemical-manufacturing LCI. The method is developed using a case study of acetic acid. The data quality and gap analyses for the generated inventory found that the selected data sources can provide information with equal or better reliability and representativeness on air, water, hazardous waste, on-site energy usage, and production volumes but with key data gaps including material inputs, water usage, purchased electricity, and transportation requirements. A comparison of the generated LCI with existing data revealed that the data mining inventory is in reasonable agreement with existing data and may provide a more-comprehensive inventory of air emissions and water discharges. The case study highlighted challenges for current data management practices that must be overcome to successfully automate the method using semantic technology. Benefits of the method are that the openly available data can be compiled in a standardized and transparent approach that supports potential automation with flexibility to incorporate new data sources as needed.

  7. Semi Automated Land Cover Layer Updating Process Utilizing Spectral Analysis and GIS Data Fusion

    NASA Astrophysics Data System (ADS)

    Cohen, L.; Keinan, E.; Yaniv, M.; Tal, Y.; Felus, A.; Regev, R.

    2018-04-01

    Technological improvements in mass data gathering and analysis made in recent years have influenced the traditional methods of updating and forming the national topographic database, and have brought a significant increase in the number of use cases and in the demand for detailed geo-information. Processes intended to replace traditional data collection methods have been developed in many National Mapping and Cadastre Agencies. There has been significant progress in semi-automated methodologies aiming to facilitate the updating of a topographic national geodatabase, and their implementation is expected to allow a considerable reduction of updating costs and operation times. Our previous activity focused on automatic building extraction (Keinan, Zilberstein et al., 2015). Before semi-automatic updating methods, it was common for interpreter identification to be as detailed as possible in order to maintain the most reliable database. When using semi-automatic updating methodologies, the ability to insert knowledge based on human insight is limited. Our motivation was therefore to reduce the resulting gap by allowing end-users to add their own data inputs to the basic geometric database. In this article, we present a simple land cover database updating method which combines insights extracted from the analyzed image with given spatial data from vector layers. The main stages of the proposed practice are multispectral image segmentation and supervised classification, together with geometric fusion of the given vector data, while maintaining the principle of requiring little manual shape editing. All coding was done utilizing open source software components.

  8. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist across different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automated workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  9. Watershed identification of polygonal patterns in noisy SAR images.

    PubMed

    Moreels, Pierre; Smrekar, Suzanne E

    2003-01-01

    This paper describes a new approach to pattern recognition in synthetic aperture radar (SAR) images. A visual analysis of the images provided by NASA's Magellan mission to Venus has revealed a number of zones showing polygonal-shaped faults on the surface of the planet. The goal of the paper is to provide a method to automate the identification of such zones. The high level of noise in SAR images and its multiplicative nature make automated image analysis difficult and conventional edge detectors, like those based on gradient images, inefficient. We present a scheme based on an improved watershed algorithm and a two-scale analysis. The method extracts potential edges in the SAR image, analyzes the patterns obtained, and decides whether or not the image contains a "polygon area". This scheme can also be applied to other SAR or visual images, for instance in observation of Mars and Jupiter's satellite Europa.
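
    A sketch of one watershed-based edge extraction pass with scipy/scikit-image, standing in for the paper's improved watershed algorithm; the median pre-filter and gradient-minimum seeding are assumptions chosen to cope with multiplicative speckle:

        import numpy as np
        from scipy.ndimage import median_filter
        from skimage.filters import sobel
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        def watershed_edges(sar_img):
            """Candidate fault edges = boundaries between watershed regions."""
            smoothed = median_filter(sar_img, size=5)   # tame multiplicative speckle
            gradient = sobel(smoothed)
            # seed markers at local minima of the gradient (assumed strategy)
            coords = peak_local_max(-gradient, min_distance=10)
            markers = np.zeros(sar_img.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            labels = watershed(gradient, markers)
            edges = np.zeros_like(labels, dtype=bool)
            edges[:-1, :] |= labels[:-1, :] != labels[1:, :]   # horizontal boundaries
            edges[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # vertical boundaries
            return edges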

  10. Natural language processing of spoken diet records (SDRs).

    PubMed

    Lacson, Ronilda; Long, William

    2006-01-01

    Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps - identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.

  11. Validated Automatic Brain Extraction of Head CT Images

    PubMed Central

    Muschelli, John; Ullman, Natalie L.; Mould, W. Andrew; Vespa, Paul; Hanley, Daniel F.; Crainiceanu, Ciprian M.

    2015-01-01

    Background X-ray Computed Tomography (CT) imaging of the brain is commonly used in diagnostic settings. Although CT scans are primarily used in clinical practice, they are increasingly used in research. A fundamental processing step in brain imaging research is brain extraction – the process of separating the brain tissue from all other tissues. Methods for brain extraction have either been 1) validated but not fully automated, or 2) fully automated and informally proposed, but never formally validated. Aim To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and of data smoothing are studied and reported. Methods All images were thresholded using a 0 – 100 Hounsfield units (HU) range. In one variant of the pipeline, data were smoothed using a 3-dimensional Gaussian kernel (σ = 1 mm³) and re-thresholded to 0 – 100 HU; in the other, data were not smoothed. BET was applied using 1 of 3 fractional intensity (FI) thresholds: 0.01, 0.1, or 0.35, and any holes in the brain mask were filled. For validation against a manual segmentation, 36 images from patients with intracranial hemorrhage were selected from 19 different centers from the MISTIE (Minimally Invasive Surgery plus recombinant-tissue plasminogen activator for Intracerebral Evacuation) stroke trial. Intracranial masks of the brain were manually created by one expert CT reader. The resulting brain tissue masks were quantitatively compared to the manual segmentations using sensitivity, specificity, accuracy, and the Dice Similarity Index (DSI). Brain extraction performance across smoothing and FI thresholds was compared using the Wilcoxon signed-rank test. The intracranial volume (ICV) of each scan was estimated by multiplying the number of voxels in the brain mask by the dimensions of each voxel for that scan. From this, we calculated the ICV ratio comparing manual and automated segmentation: ICV_automated / ICV_manual. To estimate the performance in a large number of scans, brain masks were generated from the 6 BET pipelines for 1095 longitudinal scans from 129 patients. Failure rates were estimated from visual inspection. The ICV of each scan was estimated, and an intraclass correlation (ICC) was estimated using a one-way ANOVA. Results Smoothing images improves brain extraction results using BET for all measures except specificity (all p < 0.01, uncorrected), irrespective of the FI threshold. Using an FI of 0.01 or 0.1 performed better than 0.35. Thus, all reported results refer only to smoothed data using an FI of 0.01 or 0.1. Using an FI of 0.01 had a higher median sensitivity (0.9901) than an FI of 0.1 (0.9884; median difference: 0.0014, p < 0.001), higher accuracy (0.9971 vs. 0.9971; median difference: 0.0001, p < 0.001) and DSI (0.9895 vs. 0.9894; median difference: 0.0004, p < 0.001), and lower specificity (0.9981 vs. 0.9982; median difference: −0.0001, p < 0.001). These measures are all very high, indicating that a range of FI values may produce visually indistinguishable brain extractions. Using smoothed data and an FI of 0.01, the mean (SD) ICV ratio was 1.002 (0.008); the mean being close to 1 indicates that the ICV estimates are similar for automated and manual segmentation. In the 1095 longitudinal scans, this pipeline had a low failure rate (5.2%), and the ICC estimate was high (0.929, 95% CI: 0.91, 0.945) for successfully extracted brains. Conclusion BET performs well at brain extraction on thresholded, 1 mm³-smoothed CT images with an FI of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Analysis code is provided. PMID:25862260
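
    The two headline metrics reduce to simple voxel arithmetic on binary masks; a sketch assuming numpy boolean arrays for the automated and manual masks (hypothetical inputs):

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice Similarity Index between two boolean masks."""
            inter = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * inter / (mask_a.sum() + mask_b.sum())

        def icv_ratio(auto_mask, manual_mask, voxel_mm3):
            """ICV_automated / ICV_manual, with ICV = voxel count * voxel volume."""
            icv_auto = auto_mask.sum() * voxel_mm3
            icv_manual = manual_mask.sum() * voxel_mm3
            return icv_auto / icv_manual   # voxel volume cancels within one scan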

  12. Determination of volatile organic hydrocarbons in water samples by solid-phase dynamic extraction.

    PubMed

    Jochmann, Maik A; Yuan, Xue; Schmidt, Torsten C

    2007-03-01

    In the present study a headspace solid-phase dynamic extraction method coupled to gas chromatography-mass spectrometry (HS-SPDE-GC/MS) for the trace determination of volatile halogenated hydrocarbons and benzene from groundwater samples was developed and evaluated. As target compounds, benzene as well as 11 chlorinated and brominated hydrocarbons (vinyl chloride, dichloromethane, cis-1,2-dichloroethylene, trans-1,2-dichloroethylene, carbon tetrachloride, chloroform, trichloroethylene, tetrachloroethylene, bromoform) of environmental and toxicological concern were included in this study. The analytes were extracted using a SPDE needle device, coated with a poly(dimethylsiloxane) with 10% embedded activated carbon phase (50-microm film thickness and 56-mm film length) and were analyzed by GC/MS in full-scan mode. Parameters that affect the extraction yield such as extraction and desorption temperature, salting-out, extraction and desorption flow rate, extraction volume and desorption volume, the number of extraction cycles, and the pre-desorption time have been evaluated and optimized. The linearity of the HS-SPDE-GC/MS method was established over several orders of magnitude. Method detection limits (MDLs) for the compounds investigated ranged between 12 ng/L for cis-dichloroethylene and trans-dichloroethylene and 870 ng/L for vinyl chloride. The method was thoroughly validated, and the precision at two concentration levels (0.1 mg/L and a concentration 5 times above the MDL) was between 3.1 and 16% for the analytes investigated. SPDE provides high sensitivity, short sample preparation and extraction times and a high sample throughput because of full automation. Finally, the applicability to real environmental samples is shown exemplarily for various groundwater samples from a former waste-oil recycling facility. Groundwater from the site showed a complex contamination with chlorinated volatile organic compounds and aromatic hydrocarbons.

  13. EXTRACTION AND DETECTION OF ARSENICALS IN SEAWEED VIA ACCELERATED SOLVENT EXTRACTION WITH ION CHROMATOGRAPHIC SEPARATION AND ICP-MS DETECTION

    EPA Science Inventory

    An accelerated solvent extraction (ASE) device was evaluated as a semi-automated means of extracting arsenicals from ribbon kelp. Objective was to investigate effect of experimentally controllable ASE parameters (pressure, temperature, static time and solvent composition) on extr...

  14. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M; Woo, B; Kim, J

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively; with the grow cut method they ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficients of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with the deformable model and 7.4% with the grow cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow cut method for edge sharpness of tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
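
    The coefficient of variation used here is simply the standard deviation over the mean; a one-line helper for repeated feature measurements (values hypothetical):

        import numpy as np

        def cov_percent(values):
            """Coefficient of variation (%) of repeated feature measurements."""
            values = np.asarray(values, dtype=float)
            return 100.0 * values.std(ddof=1) / values.mean()

        print(cov_percent([0.61, 0.63, 0.60, 0.64]))  # hypothetical repeated extractions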

  15. A simple automated instrument for DNA extraction in forensic casework.

    PubMed

    Montpetit, Shawn A; Fitch, Ian T; O'Donnell, Patrick T

    2005-05-01

    The Qiagen BioRobot EZ1 is a small, rapid, and reliable automated DNA extraction instrument capable of extracting DNA from up to six samples in as few as 20 min using magnetic bead technology. The San Diego Police Department Crime Laboratory has validated the BioRobot EZ1 for the DNA extraction of evidence and reference samples in forensic casework. The BioRobot EZ1 was evaluated for use on a variety of different evidence sample types including blood, saliva, and semen evidence. The performance of the BioRobot EZ1 with regard to DNA recovery and potential cross-contamination was also assessed. DNA yields obtained with the BioRobot EZ1 were comparable to those from organic extraction. The BioRobot EZ1 was effective at removing PCR inhibitors, which often co-purify with DNA in organic extractions. The incorporation of the BioRobot EZ1 into forensic casework has streamlined the DNA analysis process by reducing the need for labor-intensive phenol-chloroform extractions.

  16. [Studying on purification technology of Resina Draconis phenol extracts based on design space method].

    PubMed

    Zhang, Jian; Zhang, Xin; Bi, Yu-An; Xu, Gui-Hong; Huang, Wen-Zhe; Wang, Zhen-Zhong; Xiao, Wei

    2017-09-01

    The "design space" method was used to optimize the purification process of Resina Draconis phenol extracts by using the concept of "quality derived from design" (QbD). The content and transfer rate of laurin B and 7,4'-dihydroxyflavone and yield of extract were selected as the critical quality attributes (CQA). Plackett-Burman design showed that the critical process parameters (CPP) were concentration of alkali, the amount of alkali and the temperature of alkali dissolution. Then the Box-Behnken design was used to establish the mathematical model between CQA and CPP. The variance analysis results showed that the P values of the five models were less than 0.05 and the mismatch values were all greater than 0.05, indicating that the model could well describe the relationship between CQA and CPP. Finally, the control limits of the above 5 indicators (content and transfer rate of laurine B and 7,4'-dihydroxyflavone, as well as the extract yield) were set, and then the probability-based design space was calculated by Monte Carlo simulation and verified. The results of the design space validation showed that the optimized purification method can ensure the stability of the Resina Draconis phenol extracts refining process, which would help to improve the quality uniformity between batches of phenol extracts and provide data support for production automation control. Copyright© by the Chinese Pharmaceutical Association.

  17. Direct mass spectrometric screening of antibiotics from bacterial surfaces using liquid extraction surface analysis.

    PubMed

    Kai, Marco; González, Ignacio; Genilloud, Olga; Singh, Sheo B; Svatoš, Aleš

    2012-10-30

    There is a need to find new antibiotic agents to fight resistant pathogenic bacteria. To search successfully for novel antibiotics from bacteria cultivated under diverse conditions, we need a fast and cost-effective screening method. A combination of Liquid Extraction Surface Analysis (LESA), automated chip-based nanoelectrospray ionization, and high-resolution mass or tandem mass spectrometry using an Orbitrap XL was tested as the screening platform. Actinobacteria, known to produce well-recognized thiazolyl peptide antibiotics, were cultivated on a plate of solid medium and the antibiotics were extracted by organic solvent mixtures from the surface of colonies grown on the plate and analyzed using mass spectrometry (MS). LESA combined with high-resolution MS is a powerful tool with which to extract and detect thiazolyl peptide antibiotics from different Actinobacteria. Known antibiotics were correctly detected with high mass accuracy (<4 ppm) and structurally characterized using tandem mass spectra. Our method is the first step toward the development of a novel high-throughput extraction and identification tool for antibiotics in particular and natural products in general. The method described in this paper is suitable for (1) screening the natural products produced by bacterial colonies on cultivation plates within the first 2 min following extraction and (2) detecting antibiotics at high mass accuracy; the cost is around 2 Euro per sample. Copyright © 2012 John Wiley & Sons, Ltd.

  18. An automated flow injection system for metal determination by flame atomic absorption spectrometry involving on-line fabric disk sorptive extraction technique.

    PubMed

    Anthemidis, A; Kazantzi, V; Samanidou, V; Kabir, A; Furton, K G

    2016-08-15

    A novel flow injection-fabric disk sorptive extraction (FI-FDSE) system was developed for the automated determination of trace metals. The platform was based on a minicolumn packed with sol-gel coated fabric media in the form of disks, incorporated into an on-line solid-phase extraction system coupled with flame atomic absorption spectrometry (FAAS). This configuration produces minor backpressure, resulting in high loading flow rates and shorter analytical cycles. The potential of this technique was demonstrated for trace lead and cadmium determination in environmental water samples. The applicability of different sol-gel coated FPSE media was investigated. The complex of the metal with ammonium pyrrolidine dithiocarbamate (APDC), formed on-line, was retained on the fabric surface, and methyl isobutyl ketone (MIBK) was used to elute the analytes prior to atomization. For a 90 s preconcentration time, enrichment factors of 140 and 38 and detection limits (3σ) of 1.8 and 0.4 μg L(-1) were achieved for lead and cadmium determination, respectively, with a sampling frequency of 30 h(-1). The accuracy of the proposed method was estimated by analyzing standard reference materials and spiked water samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data

    PubMed Central

    Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.

    2016-01-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
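
    Crypt area and circularity as listed above can be sketched with scikit-image region properties, assuming a binary mask of detected crypts as a hypothetical input and the common 4πA/P² circularity definition:

        import numpy as np
        from skimage.measure import label, regionprops

        def crypt_features(crypt_mask):
            """Per-crypt area and circularity from a binary crypt mask."""
            feats = []
            for p in regionprops(label(crypt_mask)):
                circ = 4 * np.pi * p.area / (p.perimeter ** 2 + 1e-12)
                feats.append({"area": p.area, "circularity": circ})
            return feats        # crypt count per image = len(feats)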

  20. Automated dynamic liquid-liquid-liquid microextraction followed by high-performance liquid chromatography-ultraviolet detection for the determination of phenoxy acid herbicides in environmental waters.

    PubMed

    Wu, Jingming; Ee, Kim Huey; Lee, Hian Kee

    2005-08-05

    Automated dynamic liquid-liquid-liquid microextraction (D-LLLME) controlled by a programmable syringe pump and combined with HPLC-UV was investigated for the extraction and determination of 5 phenoxy acid herbicides in aqueous samples. In the extraction procedure, the acceptor phase was repeatedly withdrawn into and discharged from the hollow fiber by the syringe pump. This repetitive movement of the acceptor phase into and out of the hollow fiber channel facilitated the transfer of analytes from the donor phase, through the organic phase held in the pores of the fiber, into the acceptor phase. Parameters such as the organic solvent, the concentrations of the donor and acceptor phases, the plunger movement pattern, the speed of agitation and the ionic strength of the donor phase were evaluated. Good linearity was achieved in the range of 0.5-500 ng/mL, with coefficients of determination r2 > 0.9994. Good repeatability of extraction performance was obtained, with relative standard deviations lower than 7.5%. The method provided up to 490-fold enrichment within 13 min. In addition, the limits of detection (LODs) ranged from 0.1 to 0.4 ng/mL (S/N = 3). D-LLLME was successfully applied to the analysis of phenoxy acid herbicides in real environmental water samples.

  1. Yet another method for triangulation and contouring for automated cartography

    NASA Technical Reports Server (NTRS)

    De Floriani, L.; Falcidieno, B.; Nagy, G.; Pienovi, C.

    1982-01-01

    An algorithm is presented for hierarchical subdivision of a set of three-dimensional surface observations. The data structure used for obtaining the desired triangulation is also singularly appropriate for extracting contours. Some examples are presented, and the results obtained are compared with those given by Delaunay triangulation. The data points selected by the algorithm provide a better approximation to the desired surface than do randomly selected points.
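
    For the Delaunay comparison mentioned above, scipy provides a direct planar triangulation of the (x, y) sample locations; a minimal sketch with hypothetical points:

        import numpy as np
        from scipy.spatial import Delaunay

        pts = np.random.rand(50, 3)      # hypothetical (x, y, z) surface samples
        tri = Delaunay(pts[:, :2])       # triangulate on the planar coordinates
        print(tri.simplices[:5])         # vertex indices of the first triangles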

  2. Automating Nearshore Bathymetry Extraction from Wave Motion in Satellite Optical Imagery

    DTIC Science & Technology

    2012-03-01

    [Only front-matter fragments of this record survive: figure captions ("Positions and overlap in the electromagnetic spectrum (From DigitalGlobe, 2011b)"; "STK snap shot of WorldView-2 collection pass"), an acronym list (SNR: Signal-to-Noise Ratio; STK: Satellite Tool Kit; UTM: Universal Transverse Mercator; WKB: Wave Kinematics Bathymetry), and a method note that the imagery was collected at about 2200Z.]

  3. Spectral-clustering approach to Lagrangian vortex detection.

    PubMed

    Hadjighasem, Alireza; Karrasch, Daniel; Teramoto, Hiroshi; Haller, George

    2016-06-01

    One of the ubiquitous features of real-life turbulent flows is the existence and persistence of coherent vortices. Here we show that such coherent vortices can be extracted as clusters of Lagrangian trajectories. We carry out the clustering on a weighted graph, with the weights measuring pairwise distances of fluid trajectories in the extended phase space of positions and time. We then extract coherent vortices from the graph using tools from spectral graph theory. Our method locates all coherent vortices in the flow simultaneously, thereby showing high potential for automated vortex tracking. We illustrate the performance of this technique by identifying coherent Lagrangian vortices in several two- and three-dimensional flows.
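
    A minimal sketch of the clustering step using scikit-learn, assuming trajectories stored as an (n_particles, n_times, 2) array; the time-averaged Euclidean metric below is a simple stand-in for the paper's extended phase-space trajectory distance:

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def cluster_trajectories(traj, n_vortices=3, sigma=1.0):
            """Cluster Lagrangian trajectories; traj has shape (n, T, 2)."""
            # time-averaged pairwise distance between all trajectory pairs
            diff = traj[:, None, :, :] - traj[None, :, :, :]
            D = np.linalg.norm(diff, axis=3).mean(axis=2)
            W = np.exp(-D**2 / (2 * sigma**2))       # Gaussian similarity graph
            sc = SpectralClustering(n_clusters=n_vortices, affinity="precomputed")
            return sc.fit_predict(W)                 # vortex label per trajectory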

  4. A knowledge-based approach to automated planning for hepatocellular carcinoma.

    PubMed

    Zhang, Yujie; Li, Tingting; Xiao, Han; Ji, Weixing; Guo, Ming; Zeng, Zhaochong; Zhang, Jianying

    2018-01-01

    The aim of this study was to build a knowledge-based model of liver cancer for Auto-Planning, a function in Pinnacle that serves as an automated inverse intensity-modulated radiation therapy (IMRT) planning system. Fifty Tomotherapy patients were enrolled to extract the dose-volume histogram (DVH) information and construct the protocol for the Auto-Planning model. Twenty additional patients were chosen to test the model. Manual planning and automatic planning were performed blindly for all twenty test patients with the same machine and treatment planning system. The dose distributions of the target and organs at risk (OARs), along with the working time for planning, were evaluated. Statistically significant results showed that automated plans performed better in target conformity index (CI), while the mean target dose was 0.5 Gy higher than in manual plans. The differences between the target homogeneity indexes (HI) of the two methods were not statistically significant. Additionally, the doses to the normal liver, left kidney, and small bowel were significantly reduced with automated planning. In particular, the mean dose and V15 of the normal liver were 1.4 Gy and 40.5 cc lower with automated plans, respectively, and the mean doses to the left kidney and small bowel were reduced by 1.2 Gy and 2.1 Gy, respectively. Working time was also significantly reduced with automated planning. Auto-Planning shows availability and effectiveness in our knowledge-based model for liver cancer. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  5. Improving treatment plan evaluation with automation

    PubMed Central

    Covington, Elizabeth L.; Chen, Xiaoping; Younge, Kelly C.; Lee, Choonik; Matuszak, Martha M.; Kessler, Marc L.; Keranen, Wayne; Acosta, Eduardo; Dougherty, Ashley M.; Filpansick, Stephanie E.

    2016-01-01

    The goal of this work is to evaluate the effectiveness of Plan‐Checker Tool (PCT) which was created to improve first‐time plan quality, reduce patient delays, increase the efficiency of our electronic workflow, and standardize and automate the physics plan review in the treatment planning system (TPS). PCT uses an application programming interface to check and compare data from the TPS and treatment management system (TMS). PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user as part of a plan readiness check for treatment. Prior to and during PCT development, errors identified during the physics review and causes of patient treatment start delays were tracked to prioritize which checks should be automated. Nineteen of 33 checklist items were automated, with data extracted with PCT. There was a 60% reduction in the number of patient delays in the six months after PCT release. PCT was successfully implemented for use on all external beam treatment plans in our clinic. While the number of errors found during the physics check did not decrease, automation of checks increased visibility of errors during the physics check, which led to decreased patient delays. The methods used here can be applied to any TMS and TPS that allows queries of the database. PACS number(s): 87.55.‐x, 87.55.N‐, 87.55.Qr, 87.55.tm, 89.20.Bb PMID:27929478

  6. Automated Inspection of Power Line Corridors to Measure Vegetation Undercut Using Uav-Based Images

    NASA Astrophysics Data System (ADS)

    Maurer, M.; Hofer, M.; Fraundorfer, F.; Bischof, H.

    2017-08-01

    Power line corridor inspection is a time-consuming task that is performed mostly manually. As the development of UAVs has made huge progress in recent years and photogrammetric computer vision systems have become well established, it is time to further automate inspection tasks. In this paper we present an automated processing pipeline to inspect vegetation undercuts of power line corridors. For this, the area of inspection is reconstructed, geo-referenced, semantically segmented, and inter-class distance measurements are calculated. The presented pipeline performs an automated selection of the proper 3D reconstruction method for wiry objects (power lines) on the one hand and solid objects (the surroundings) on the other. The automated selection is realized by performing pixel-wise semantic segmentation of the input images using a Fully Convolutional Neural Network. Owing to the geo-referenced semantic 3D reconstructions, documentation of areas where maintenance work has to be performed is inherently included in the distance measurements and can be extracted easily. We evaluate the influence of the semantic segmentation on the 3D reconstruction and show that the automated semantic separation of the 3D reconstruction routine into wiry and dense objects improves the quality of the vegetation undercut inspection. We show the generalization of the semantic segmentation to datasets acquired using different acquisition routines and in different seasons.

  7. Streamlined structure elucidation of an unknown compound in a pigment formulation.

    PubMed

    Yüce, Imanuel; Morlock, Gertrud E

    2016-10-21

    A fast and reliable quality control is important for ink manufacturers to ensure a constant production grade of mixtures and chemical formulations, so unknown components attract their attention. Structure-elucidating techniques seem time-consuming in combination with column-based methods, and the low solubility of pigment formulations in particular challenges the analysis. In contrast, layer chromatography is more tolerant with regard to pigment particles. One PLC plate for NMR and FTIR analyses and one HPTLC plate for recording high resolution mass spectra and MS/MS spectra and for gathering information on polarity and spectral properties were needed to characterize a structure, shown exemplarily for an unknown component in Pigment Red 57:1, identified as 3-hydroxy-2-naphthoic acid. A preparative layer chromatography (PLC) workflow was developed that used an Automated Multiple Development 2 (AMD 2) system. The 0.5-mm PLC plate could still be operated in the AMD 2 system and allowed a smooth switch from the analytical to the preparative gradient separation. Through automated gradient development and the resulting focusing of bands, the sharpness of the PLC bands was improved. For NMR, the necessary high load of the target compound on the PLC plate was achieved via a selective solvent extraction that discriminated against the polar sample matrix and thus increased the application volume of extract that could be applied without overloading. By doing so, the yield for NMR analysis was improved by a factor of 9. The effectiveness gained through a simple but carefully chosen extraction solvent is often overlooked; for educational purposes, it was clearly illustrated and demonstrated by an extended solvent screening. Thus, PLC using automated gradient development after a selective extraction was proven to be a new powerful combination for structural elucidation by NMR. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Automated Knowledge Discovery from Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.

    2006-01-01

    In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/ parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.

  9. Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.

    PubMed

    Srinivasan, Jayaraman; Adithya, V

    2015-01-01

    Electroencephalogram (EEG) signal artifacts are caused by various factors, such as electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement artifacts and line interference. The relatively high electrical energy of cardiac activity causes EEG artifacts, and the general approach in EEG signal processing is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using a wavelet and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compare the HR against that calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can be used for ECG extraction from single-channel EEG and applied in clinical contexts such as stress analysis, fatigue estimation, and sleep-stage classification studies as part of a multi-modal system. In addition, this method eliminates the dependence on an additional synchronous ECG recording when extracting the ECG from the EEG signal.
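
    The discrete Teager-Kaiser energy operator is psi[x(n)] = x(n)^2 - x(n-1)*x(n+1); a sketch of using it to emphasize R-peaks and derive heart rate, with scipy's generic peak finder standing in for the paper's wavelet-assisted detection (the threshold and minimum spacing are assumptions):

        import numpy as np
        from scipy.signal import find_peaks

        def teager_kaiser(x):
            """Discrete TKEO: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
            x = np.asarray(x, dtype=float)
            return x[1:-1] ** 2 - x[:-2] * x[2:]

        def heart_rate_bpm(ecg_like, fs):
            """Mean heart rate from an ECG-like trace extracted from EEG."""
            energy = teager_kaiser(ecg_like)
            # assumed: R-peaks exceed 40% of max energy and are >= 0.4 s apart
            peaks, _ = find_peaks(energy, height=0.4 * energy.max(),
                                  distance=int(0.4 * fs))
            rr_s = np.diff(peaks) / fs        # R-R intervals in seconds
            return 60.0 / rr_s.mean()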

  10. Development of a Magnetic Microbead Affinity Selection Screen (MagMASS) Using Mass Spectrometry for Ligands to the Retinoid X Receptor-α

    NASA Astrophysics Data System (ADS)

    Rush, Michael D.; Walker, Elisabeth M.; Prehna, Gerd; Burton, Tristesse; van Breemen, Richard B.

    2017-03-01

    To overcome limiting factors in mass spectrometry-based screening methods such as automation while still facilitating the screening of complex mixtures such as botanical extracts, magnetic microbead affinity selection screening (MagMASS) was developed. The screening process involves immobilization of a target protein on a magnetic microbead using a variety of possible chemistries, incubation with mixtures of molecules containing possible ligands, a washing step that removes non-bound compounds while a magnetic field retains the beads in the microtiter well, and an organic solvent release step followed by LC-MS analysis. Using retinoid X receptor-α (RXRα) as an example, which is a nuclear receptor and target for anti-inflammation therapy as well as cancer treatment and prevention, a MagMASS assay was developed and compared with an existing screening assay, pulsed ultrafiltration (PUF)-MS. Optimization of MagMASS involved evaluation of multiple protein constructs and several magnetic bead immobilization chemistries. The full-length RXRα construct immobilized with amylose beads provided optimum results. Additional enhancements of MagMASS were the application of 96-well plates to enable automation, use of UHPLC instead of HPLC for faster MS analyses, and application of metabolomics software for faster, automated data analysis. Performance of MagMASS was demonstrated using mixtures of synthetic compounds and known ligands spiked into botanical extracts.

  11. Assessing the Agreement Between Eo-Based Semi-Automated Landslide Maps with Fuzzy Manual Landslide Delineation

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Hölbling, D.; Friedl, B.

    2017-09-01

    Landslide mapping benefits from the ever-increasing availability of Earth Observation (EO) data resulting from programmes like the Copernicus Sentinel missions and from improved infrastructure for data access. However, while the dominant mapping method is still manual delineation, there is a growing need for improved automated landslide information extraction from EO data. Object-based image analysis (OBIA) provides the means for the fast and efficient extraction of landslide information. To prove its quality, automated results are often compared to manually delineated landslide maps. Although there is awareness of the uncertainties inherent in manual delineations, there is a lack of understanding of how they affect the levels of agreement in a direct comparison of OBIA-derived landslide maps and manually derived landslide maps. In order to provide an improved reference, we present a fuzzy approach for the manual delineation of landslides on optical satellite images, thereby making the inherent uncertainties of the delineation explicit. The fuzzy manual delineation and the OBIA classification are compared by accuracy metrics accepted in the remote sensing community. We tested this approach on high resolution (HR) satellite images of three large landslides in Austria and Italy. We were able to show that the deviation of the OBIA result from the manual delineation can mainly be attributed to the uncertainty inherent in the manual delineation process, a relevant issue for the design of validation processes for OBIA-derived landslide maps.
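
    The abstract does not spell out its accuracy metrics, but one standard way to compare a crisp OBIA mask against a fuzzy (graded-membership) manual reference is a fuzzy Jaccard overlap; the sketch below is an assumption-level illustration, not the authors' exact metric.

        import numpy as np

        def fuzzy_jaccard(obia_mask, fuzzy_reference):
            # Fuzzy-set overlap between a crisp 0/1 OBIA mask and a manual
            # reference with per-pixel membership in [0, 1]: sum(min)/sum(max).
            a = obia_mask.astype(float)
            inter = np.minimum(a, fuzzy_reference).sum()
            union = np.maximum(a, fuzzy_reference).sum()
            return inter / union if union > 0 else 1.0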

  12. Fully automated methods for the determination of hydrochlorothiazide in human plasma and urine.

    PubMed

    Hsieh, J Y; Lin, C; Matuszewski, B K; Dobrinska, M R

    1994-12-01

    LC assays utilizing fully automated sample preparation procedures on a Zymark PyTechnology robot and a BenchMate Workstation have been developed for the quantification of hydrochlorothiazide (HCTZ) in human plasma and urine. After plasma and urine samples were aliquoted and internal standard (IS) added manually, the robot executed the buffer and organic solvent addition, liquid-liquid extraction, solvent evaporation, and on-line LC injection steps for plasma samples, whereas the BenchMate performed the buffer and organic solvent addition, liquid-liquid and solid-phase extractions, and on-line LC injection steps for urine samples. Chromatographic separations were carried out on a Beckman Octyl Ultrasphere column using a mobile phase composed of 12% (v/v) acetonitrile and 88% of either an ion-pairing reagent (plasma) or 0.1% trifluoroacetic acid (urine). The eluent from the column was monitored with a UV detector (271 nm). Peak heights for HCTZ and the IS were automatically processed using a PE-Nelson ACCESS*CHROM laboratory automation system. The assays have been validated in the concentration ranges of 2-100 ng/mL in plasma and 0.1-20 μg/mL in urine. Both plasma and urine assays have the sensitivity and specificity necessary to determine plasma and urine concentrations of HCTZ after low-dose (6.25/12.5 mg) administration of HCTZ to human subjects in the presence or absence of losartan.

  13. Automated quantification of Epstein-Barr Virus in whole blood of hematopoietic stem cell transplant patients using the Abbott m2000 system.

    PubMed

    Salmona, Maud; Fourati, Slim; Feghoul, Linda; Scieux, Catherine; Thiriez, Aline; Simon, François; Resche-Rigon, Matthieu; LeGoff, Jérôme

    2016-08-01

    Accurate quantification of the Epstein-Barr virus (EBV) load in blood is essential for the management of post-transplant lymphoproliferative disorders. The automation of DNA extraction and amplification may improve accuracy and reproducibility. We evaluated the EBV PCR Kit V1 with fully automated DNA extraction and amplification on the m2000 system (Abbott assay). The conversion factor between copies and international units (IU), the lower limit of quantification, imprecision, and linearity were determined in a whole blood (WB) matrix. Results from 339 clinical WB specimens were compared with a home-brew real-time PCR assay used in our laboratory (in-house assay). The conversion factor between copies and IU was 3.22 copies/IU. The lower limit of quantification (LLQ) was 1000 copies/mL. Intra- and inter-assay coefficients of variation were 3.1% and 7.9%, respectively, for samples with an EBV load higher than the LLQ. The comparison between the Abbott assay and the in-house assay showed good concordance (kappa = 0.77). Loads were higher with the Abbott assay (mean difference = 0.62 log10 copies/mL). The EBV PCR Kit V1 assay on the m2000 system provides a reliable and easy-to-use method for the quantification of EBV DNA in WB. Copyright © 2016 Elsevier Inc. All rights reserved.
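
    A trivial sketch of the unit conversions implied by the reported figures (the 3.22 copies/IU factor and log10 copies/mL reporting); the function name is illustrative.

        import math

        EBV_COPIES_PER_IU = 3.22  # conversion factor reported for this assay

        def copies_to_iu_per_ml(copies_per_ml):
            return copies_per_ml / EBV_COPIES_PER_IU

        # Example: the reported lower limit of quantification, 1000 copies/mL
        print(copies_to_iu_per_ml(1000.0))   # about 310.6 IU/mL
        print(math.log10(1000.0))            # 3.0 log10 copies/mL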

  14. In-tube electro-membrane extraction with a sub-microliter organic solvent consumption as an efficient technique for synthetic food dyes determination in foodstuff samples.

    PubMed

    Bazregar, Mohammad; Rajabi, Maryam; Yamini, Yadollah; Asghari, Alireza; Abdossalami asl, Yousef

    2015-09-04

    A simple and efficient extraction technique with sub-microliter organic solvent consumption, termed in-tube electro-membrane extraction (IEME), is introduced. This method is based upon the electro-kinetic migration of ionized compounds under an applied electrical potential difference. For this purpose, a thin polypropylene (PP) sheet placed inside a tube acts as a support for the membrane solvent, and 30 μL of an aqueous acceptor solution is separated by this solvent from 1.2 mL of an aqueous donor solution. The method yielded high extraction recoveries (63-81%), while the consumption of the organic solvent was only 0.5 μL. The method provides a high degree of purification, and the handling of the organic solvent, used as a mediator, is simple and repeatable. The proposed method was evaluated by extraction of four synthetic food dyes (Amaranth, Ponceau 4R, Allura Red, and Carmoisine) as model analytes. The variables affecting the method were optimized in order to achieve the best extraction efficiency: the type of membrane solvent, applied extraction voltage, extraction time, pH, and concentration of added salt. Under the optimized conditions, IEME-HPLC-UV provided good linearity in the range of 1.00-800 ng/mL, low limits of detection (0.3-1 ng/mL), and good extraction repeatability (RSDs below 5.2%, n=5). The design also appears well suited to automation. Moreover, the sub-microliter organic solvent consumption, together with the method's simplicity, high efficiency, and high degree of purification, brings it closer to the objectives of green chemistry. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Development of "ultrasound-assisted dynamic extraction" and its combination with CCC and CPC for simultaneous extraction and isolation of phytochemicals.

    PubMed

    Zhang, Yuchi; Liu, Chunming; Li, Jing; Qi, Yanjuan; Li, Yuchun; Li, Sainan

    2015-09-01

    A new method for the extraction of medicinal herbs, termed ultrasound-assisted dynamic extraction (UADE), was designed and evaluated. This technique was coupled with counter-current chromatography (CCC) and centrifugal partition chromatography (CPC) and then applied to the continuous extraction and online isolation of chemical constituents from Paeonia lactiflora Pall (white peony) roots. The mechanical parameters, including the pitch and diameter of the shaft, were optimized by means of mathematical modeling. Furthermore, the configuration and mechanism of online UADE coupled with CCC and CPC were elaborated. The stationary phases of the two-phase solvent systems from CCC and CPC were utilized as the UADE solution. The extraction solution was pumped into the sample loop and then introduced into the CCC column; the target compounds were eluted with the lower aqueous phase of the two-phase solvent system. During the CCC separation, the extraction solution was continuously fed into the sample loop by turning the ten-port valve; the extraction solution was then pumped into the CPC column and eluted by the mobile phase of the two-phase solvent system mentioned above. Once the first UADE/CCC/CPC cycle was completed, the next cycle could be carried out, and so on. Four target compounds (albiflorin, benzoylpaeoniflorin, paeoniflorin, and galloylpaeoniflorin) with purities above 94.96% were successfully extracted and isolated online using the two-phase solvent system comprising ethyl acetate-n-butanol-ethanol-water (1:3.5:2:4.5, v/v/v/v). Compared with conventional extraction methods, the instrumental setup of the present method offers the advantages of automation and the systematic extraction and isolation of natural products. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  16. Chinese character recognition based on Gabor feature extraction and CNN

    NASA Astrophysics Data System (ADS)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition remains difficult. To address this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training and classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of the convolution are the input of the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the Dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
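
    A minimal sketch of the Gabor feature extraction step using OpenCV, producing feature maps for eight orientations as described; the kernel size and filter parameters below are illustrative assumptions, not the paper's settings.

        import cv2
        import numpy as np

        def gabor_feature_maps(gray, n_orientations=8, ksize=31,
                               sigma=4.0, lambd=10.0, gamma=0.5):
            # Convolve a normalized character image with a bank of oriented
            # Gabor filters; returns an H x W x n_orientations feature tensor.
            maps = []
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations   # filter orientation
                kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                          lambd, gamma, psi=0)
                kern /= kern.sum() + 1e-8            # rough normalization
                maps.append(cv2.filter2D(gray.astype(np.float32),
                                         cv2.CV_32F, kern))
            return np.stack(maps, axis=-1)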

  17. ZK DrugResist 2.0: A TextMiner to extract semantic relations of drug resistance from PubMed.

    PubMed

    Khalid, Zoya; Sezerman, Osman Ugur

    2017-05-01

    Extracting useful knowledge from unstructured textual data is a challenging task for biologists, since the biomedical literature is growing exponentially on a daily basis. Building automated methods for such tasks is gaining much attention from researchers. ZK DrugResist is an online tool that automatically extracts mutations and expression changes associated with drug resistance from PubMed. In this study we have extended our tool to include semantic relations extracted from biomedical text covering drug resistance and established a server including both of these features. Our system was tested for three relations, Resistance (R), Intermediate (I) and Susceptible (S), by applying a hybrid feature set. Over the last few decades the focus has shifted to hybrid approaches, as they provide better results; in our case this approach combines rule-based methods with machine learning techniques. The results showed 97.67% accuracy with 96% precision, recall and F-measure. These results outperform previously existing relation extraction systems and can thus facilitate computational analysis of drug resistance against complex diseases, and the approach can further be applied to other areas of biomedicine. Copyright © 2017 Elsevier Inc. All rights reserved.
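
    As a rough illustration of a hybrid rule-plus-machine-learning classifier for R/I/S relation labels, the sketch below combines TF-IDF text features with binary lexical-rule hits in scikit-learn; the rules and model choice are assumptions, not details from ZK DrugResist.

        import re
        from scipy.sparse import csr_matrix, hstack
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        RULES = [r"resist\w*", r"susceptib\w*", r"intermediate"]  # toy lexical rules

        def rule_features(texts):
            # One binary feature per rule: does the pattern occur in the sentence?
            return csr_matrix([[1 if re.search(p, t, re.I) else 0 for p in RULES]
                               for t in texts])

        def train_hybrid(texts, labels):  # labels in {"R", "I", "S"}
            vec = TfidfVectorizer(ngram_range=(1, 2))
            X = hstack([vec.fit_transform(texts), rule_features(texts)])
            return vec, LogisticRegression(max_iter=1000).fit(X, labels)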

  18. Nonlinear features for product inspection

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1999-03-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.

  19. Analysis of Supercritical-Extracted Chelated Metal Ions From Mixed Organic-Inorganic Samples

    NASA Technical Reports Server (NTRS)

    Sinha, Mahadeva P. (Inventor)

    1996-01-01

    Organic and inorganic contaminants of an environmental sample are analyzed by the same GC-MS instrument by adding an oxidizing agent to the sample to oxidize metals or metal compounds to form metal ions. The metal ions are converted to chelate complexes, and the chelate complexes are extracted into a supercritical fluid such as CO2. After flowing through a restrictor tube, the metal chelate extract is directly injected into the ionization chamber of a mass spectrometer, preferably one containing a refractory metal filament such as rhenium, which fragments the complex to release the metal ions that are detected. This provides a fast, economical method for the analysis of metal contaminants in a sample and can be automated. An organic extract of the sample in conventional or supercritical fluid solvents can be detected in the same mass spectrometer, preferably after separation in a supercritical fluid chromatograph.

  20. Semi-automated in vivo solid-phase microextraction sampling and the diffusion-based interface calibration model to determine the pharmacokinetics of methoxyfenoterol and fenoterol in rats.

    PubMed

    Yeung, Joanne Chung Yan; de Lannoy, Inés; Gien, Brad; Vuckovic, Dajana; Yang, Yingbo; Bojko, Barbara; Pawliszyn, Janusz

    2012-09-12

    In vivo solid-phase microextraction (SPME) can be used to sample the circulating blood of animals without the need to withdraw a representative blood sample. In this study, in vivo SPME in combination with liquid chromatography-tandem mass spectrometry (LC-MS/MS) was used to determine the pharmacokinetics of two drug analytes, R,R-fenoterol and R,R-methoxyfenoterol, administered as 5 mg/kg i.v. bolus doses to groups of 5 rats. This research illustrates, for the first time, the feasibility of the diffusion-based calibration interface model for in vivo SPME studies. To provide the constant sampling rate required by the diffusion-based interface model, partial automation of the SPME sampling of the analytes from the circulating blood was accomplished using an automated blood sampling system. The use of the blood sampling system allowed automation of all SPME sampling steps in vivo, except for the insertion and removal of the SPME probe from the sampling interface. The results from in vivo SPME were compared to the conventional method based on blood withdrawal and sample clean-up by plasma protein precipitation. Both whole blood and plasma concentrations were determined by the conventional method. The concentrations of methoxyfenoterol and fenoterol obtained by SPME generally concur with the whole blood concentrations determined by the conventional method, indicating the utility of the proposed method. The proposed diffusion-based interface model has several advantages over other kinetic calibration models for in vivo SPME sampling: (i) it does not require the addition of a standard into the sample matrix during in vivo studies, (ii) it is simple and rapid and eliminates the need to pre-load an appropriate standard onto the SPME extraction phase, and (iii) the calibration constant for SPME can be calculated from the diffusion coefficient, extraction time, fiber length and radius, and size of the boundary layer. In the current study, the experimental calibration constants of 338.9±30 mm⁻³ and 298.5±25 mm⁻³ are in excellent agreement with the theoretical calibration constants of 307.9 mm⁻³ and 316.0 mm⁻³ for fenoterol and methoxyfenoterol, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
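
    The abstract only lists the quantities that enter the calibration constant; a commonly cited form of the diffusion-based SPME calibration is C = n·ln((b + δ)/b)/(2πDLt), which matches the mm⁻³ units of the constants reported above. Under that assumption, a minimal sketch of the computation is:

        import math

        def spme_calibration_constant(D, L, b, delta, t):
            # k in mm^-3, assuming C = n * ln((b + delta)/b) / (2*pi*D*L*t):
            #   D     diffusion coefficient (mm^2/s)
            #   L, b  fiber coating length and radius (mm)
            #   delta boundary layer thickness (mm)
            #   t     extraction time (s)
            return math.log((b + delta) / b) / (2.0 * math.pi * D * L * t)

        # blood_concentration = n_extracted * spme_calibration_constant(D, L, b, delta, t)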

  1. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE PAGES

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    2015-09-11

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
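
    A minimal sketch of the video peak store and background masking stages described above, using NumPy and OpenCV; the blur-based background estimate and Otsu threshold are illustrative stand-ins, and the perceptual grouping stage is omitted.

        import cv2
        import numpy as np

        def flight_track_mask(frames):
            # Video peak store: per-pixel maximum over the frame sequence.
            peak = None
            for f in frames:
                f = np.asarray(f, dtype=np.float32)
                peak = f if peak is None else np.maximum(peak, f)
            peak8 = np.clip(peak, 0, 255).astype(np.uint8)
            # Background masking: subtract a coarse (blurred) background estimate.
            background = cv2.medianBlur(peak8, 21)
            residual = cv2.subtract(peak8, background)
            _, mask = cv2.threshold(residual, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            n_labels, labels = cv2.connectedComponents(mask)
            return mask, n_labels - 1   # candidate track components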

  3. Automated smoother for the numerical decoupling of dynamics models.

    PubMed

    Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S

    2007-08-21

    Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of Whittaker's smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. Whittaker's smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. The method is applicable to signal extraction from time series with a nonstationary noise structure and to the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
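
    For reference, a minimal second-order Whittaker smoother in Python (SciPy sparse solve), without the information-theoretic reformulation and adaptive segmentation developed in the paper; the penalty weight lam is an illustrative assumption.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def whittaker_smooth(y, lam=1e4):
            # Minimize ||y - z||^2 + lam * ||D2 z||^2, where D2 is the
            # second-difference operator; solved as one sparse linear system.
            m = len(y)
            D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2],
                             shape=(m - 2, m), format="csc")
            A = sparse.eye(m, format="csc") + lam * (D.T @ D)
            return spsolve(A, np.asarray(y, dtype=float))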

  4. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest are detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  5. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest are detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  6. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.
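
    These abstracts do not specify a detection algorithm, but a common automated indicator for one of the listed features, vortex cores, is the Q-criterion; a minimal 2-D NumPy sketch follows.

        import numpy as np

        def q_criterion(u, v, dx=1.0, dy=1.0):
            # Q = 0.5 * (||Omega||^2 - ||S||^2) for a 2-D velocity field on a
            # grid indexed (y, x); positive Q marks rotation-dominated regions.
            dudy, dudx = np.gradient(u, dy, dx)
            dvdy, dvdx = np.gradient(v, dy, dx)
            s12 = 0.5 * (dudy + dvdx)        # strain-rate (symmetric) part
            w12 = 0.5 * (dudy - dvdx)        # rotation (antisymmetric) part
            return 0.5 * (2.0 * w12**2 - (dudx**2 + dvdy**2 + 2.0 * s12**2))

        # cores = q_criterion(u, v) > 0.0    # boolean mask of candidate vortex cores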

  7. Text Mining in Biomedical Domain with Emphasis on Document Clustering

    PubMed Central

    2017-01-01

    Objectives With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents. Methods This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Results Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Conclusions Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise. PMID:28875048

  8. An approach for automated fault diagnosis based on a fuzzy decision tree and boundary analysis of a reconstructed phase space.

    PubMed

    Aydin, Ilhan; Karakose, Mehmet; Akin, Erhan

    2014-03-01

    Although reconstructed phase space is one of the most powerful methods for analyzing a time series, it can fail in fault diagnosis of an induction motor when the appropriate pre-processing is not performed. Therefore, a new feature extraction method based on boundary analysis in phase space is proposed for the diagnosis of induction motor faults. The proposed approach requires the measurement of only one phase current signal to construct the phase space representation. Each phase space is converted into an image, and the boundary of each image is extracted by a boundary detection algorithm. A fuzzy decision tree has been designed to detect broken rotor bars and broken connector faults. The results indicate that the proposed approach has a higher recognition rate than other methods on the same dataset. © 2013 ISA. Published by ISA. All rights reserved.
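
    A minimal sketch of the phase space reconstruction step via time-delay embedding of a single phase current signal; the embedding dimension and delay are illustrative assumptions, and the boundary detection and fuzzy decision tree stages are omitted.

        import numpy as np

        def delay_embed(x, dim=2, tau=10):
            # Rows are [x[t], x[t+tau], ..., x[t+(dim-1)*tau]].
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        # phase = delay_embed(phase_current, dim=2, tau=10)   # 2-D phase portrait
        # img, _, _ = np.histogram2d(phase[:, 0], phase[:, 1], bins=64)  # rasterize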

  9. Tools for automating spacecraft ground systems: The Intelligent Command and Control (ICC) approach

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; Mclean, David

    1996-01-01

    The practical application of scripting languages and World Wide Web tools to the support of spacecraft ground system automation is reported on. The mission activities and the automation tools used at the Goddard Space Flight Center (MD) are reviewed. The use of the Tool Command Language (TCL) and the Practical Extraction and Report Language (PERL) scripting tools for automating mission operations is discussed, together with the application of different tools for the Compton Gamma Ray Observatory ground system.

  10. Automated extraction of temporal motor activity signals from video recordings of neonatal seizures based on adaptive block matching.

    PubMed

    Karayiannis, Nicolaos B; Sami, Abdul; Frost, James D; Wise, Merrill S; Mizrahi, Eli M

    2005-04-01

    This paper presents an automated procedure developed to extract quantitative information from video recordings of neonatal seizures in the form of motor activity signals. This procedure relies on optical flow computation to select anatomical sites located on the infants' body parts. Motor activity signals are extracted by tracking selected anatomical sites during the seizure using adaptive block matching. A block of pixels is tracked throughout a sequence of frames by searching for the most similar block of pixels in subsequent frames; this search is facilitated by employing various update strategies to account for the changing appearance of the block. The proposed procedure is used to extract temporal motor activity signals from video recordings of neonatal seizures and other events not associated with seizures.
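
    A minimal sketch of basic exhaustive block matching with a sum-of-absolute-differences (SAD) criterion; the adaptive update strategies that handle the block's changing appearance in the paper are omitted, and the block and search sizes are illustrative.

        import numpy as np

        def match_block(prev, curr, top, left, size=16, search=8):
            # Exhaustively search a (2*search+1)^2 neighbourhood in `curr` for
            # the block of `prev` at (top, left), minimizing the SAD score.
            ref = prev[top:top + size, left:left + size].astype(np.int32)
            best_sad, best_pos = None, (top, left)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                        continue
                    cand = curr[y:y + size, x:x + size].astype(np.int32)
                    sad = np.abs(ref - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_pos = sad, (y, x)
            return best_pos   # new location of the tracked anatomical site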

  11. Unsupervised automated high throughput phenotyping of RNAi time-lapse movies.

    PubMed

    Failmezger, Henrik; Fröhlich, Holger; Tresch, Achim

    2013-10-04

    Gene perturbation experiments in combination with fluorescence time-lapse cell imaging are a powerful tool in reverse genetics. High-content applications require tools for the automated processing of the large amounts of data. In general, these tools comprise several image processing steps, the extraction of morphological descriptors, and the grouping of cells into phenotype classes according to their descriptors. This phenotyping can be applied in a supervised or an unsupervised manner. Unsupervised methods are suitable for the discovery of formerly unknown phenotypes, which are expected to occur in high-throughput RNAi time-lapse screens. We developed an unsupervised phenotyping approach based on Hidden Markov Models (HMMs) with multivariate Gaussian emissions for the detection of knockdown-specific phenotypes in RNAi time-lapse movies. The automated detection of abnormal cell morphologies allows us to assign a phenotypic fingerprint to each gene knockdown. By applying our method to the Mitocheck database, we show that a phenotypic fingerprint is indicative of a gene's function. Our fully unsupervised HMM-based phenotyping is able to automatically identify cell morphologies that are specific for a certain knockdown. Beyond the identification of genes whose knockdown affects cell morphology, phenotypic fingerprints can be used to find modules of functionally related genes.
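
    A minimal sketch of HMM-based phenotyping with multivariate Gaussian emissions, here via the off-the-shelf hmmlearn package rather than the authors' implementation; the number of states is an illustrative assumption.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # third-party package assumption

        def fit_phenotype_hmm(tracks, n_states=4):
            # `tracks` is a list of (T_i, D) arrays of per-cell morphology
            # descriptors over time; HMM states act as morphology classes.
            X = np.vstack(tracks)
            lengths = [len(t) for t in tracks]
            model = GaussianHMM(n_components=n_states,
                                covariance_type="full", n_iter=100)
            model.fit(X, lengths)
            return model, [model.predict(t) for t in tracks]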

  12. Determination of volatile marker compounds in raw ham using headspace-trap gas chromatography.

    PubMed

    Bosse Née Danz, Ramona; Wirth, Melanie; Konstanz, Annette; Becker, Thomas; Weiss, Jochen; Gibis, Monika

    2017-03-15

    A simple, reliable and automated method was developed and optimized for the qualification and quantification of aroma-relevant volatile marker compounds of North European raw ham using headspace (HS)-trap gas chromatography-mass spectrometry (GC-MS) and GC-flame ionization detection (GC-FID). A total of 38 volatile compounds were detected with this HS-trap GC-MS method, amongst which the largest groups were ketones (12), alcohols (8), hydrocarbons (7), aldehydes (6) and esters (3). The HS-trap GC-FID method was optimized for the following parameters: thermostatting time and temperature, vial and desorption pressure, number of extraction cycles, and salt addition. A validation for 13 volatile marker compounds with limits of detection in the ng/g range was carried out. The optimized method can serve as an alternative to conventional headspace and solid-phase microextraction methods and allows users to determine volatile compounds in raw hams, making it of interest to industrial and academic meat scientists. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Evaluation and comparison of rapid methods for the detection of Salmonella in naturally contaminated pine nuts using different pre enrichment media.

    PubMed

    Wang, Hua; Gill, Vikas S; Cheng, Chorng-Ming; Gonzalez-Escalona, Narjol; Irvin, Kari A; Zheng, Jie; Bell, Rebecca L; Jacobson, Andrew P; Hammack, Thomas S

    2015-04-01

    Foodborne outbreaks involving pine nuts and peanut butter illustrate the need to rapidly detect Salmonella in low-moisture foods. However, the current Bacteriological Analytical Manual (BAM) culture method for Salmonella, using lactose broth (LB) as a pre-enrichment medium, has not reliably supported real-time quantitative PCR (qPCR) assays for certain foods. We evaluated two qPCR assays in LB and four other pre-enrichment media: buffered peptone water (BPW), modified BPW (mBPW), Universal Pre-enrichment broth (UPB), and BAX(®) MP media to detect Salmonella in naturally contaminated pine nuts (2011 outbreak). A four-way comparison among the culture method, Pathatrix(®) Auto, VIDAS(®) Easy SLM, and qPCR was conducted. Automated DNA extraction techniques were compared with manual extraction methods (boiling or InstaGene™). There were no significant differences (P > 0.05) among the five pre-enrichment media for pine nuts using the culture method. While both qPCR assays produced significantly (P ≤ 0.05) more false negatives in 24 h pre-enriched LB than in the other four media, they were as sensitive as the culture method in BPW, mBPW, UPB, and BAX media. The VIDAS Easy and qPCR were equivalent; Pathatrix was the least effective method. The automated PrepSEQ™ DNA extraction, using 1000 μL of pre-enrichment broth, was as effective as the manual extraction methods. Published by Elsevier Ltd.

  14. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, computationally efficient, fast, and fully automated estimation of 2-D object orientation and scaling factor. The object orientation is calculated via estimation of the object's principal axes. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both orientation and scaling estimation.
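
    For comparison, the classical moment-based estimate of a 2-D object's principal axis (not the paper's deformable-model method) is a few lines of NumPy:

        import numpy as np

        def principal_axis(binary):
            # theta = 0.5 * atan2(2*mu11, mu20 - mu02), from central second moments.
            ys, xs = np.nonzero(binary)
            x = xs - xs.mean()
            y = ys - ys.mean()
            mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
            return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)   # radians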

  15. Semi-automated surface mapping via unsupervised classification

    NASA Astrophysics Data System (ADS)

    D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.

    2017-09-01

    Due to the increasing volume of data returned from space missions, the human search for correlations and the identification of interesting features is becoming increasingly unfeasible. Statistical extraction of features via machine learning methods will increase the scientific output of remote sensing missions and aid the discovery of yet unknown features hidden in datasets. These methods exploit algorithms trained on features from multiple instruments, returning classification maps that explore intra-dataset correlations and allow for the discovery of unknown features. We present two applications, one for Mercury and one for Vesta.
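
    The abstract does not name a specific algorithm; as a stand-in, a minimal unsupervised classification of stacked multi-instrument features with k-means (scikit-learn) might look as follows, with the number of classes an illustrative assumption.

        import numpy as np
        from sklearn.cluster import KMeans

        def unsupervised_surface_map(feature_cubes, n_classes=6):
            # `feature_cubes` is a list of co-registered (H, W, B_i) arrays,
            # one per instrument; cluster the stacked per-pixel features.
            stacked = np.dstack(feature_cubes)
            h, w, b = stacked.shape
            labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
                stacked.reshape(-1, b))
            return labels.reshape(h, w)   # classification map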

  16. Extraction of features from ultrasound acoustic emissions: a tool to assess the hydraulic vulnerability of Norway spruce trunkwood?

    PubMed Central

    Rosner, Sabine; Klein, Andrea; Wimmer, Rupert; Karlsson, Bo

    2011-01-01

    Summary • The aim of this study was to assess the hydraulic vulnerability of Norway spruce (Picea abies) trunkwood by extraction of selected features of acoustic emissions (AEs) detected during dehydration of standard size samples. • The hydraulic method was used as the reference method to assess the hydraulic vulnerability of trunkwood of different cambial ages. Vulnerability curves were constructed by plotting the percentage loss of conductivity vs an overpressure of compressed air. • Differences in hydraulic vulnerability were very pronounced between juvenile and mature wood samples; therefore, useful AE features, such as peak amplitude, duration and relative energy, could be filtered out. The AE rates of signals clustered by amplitude and duration ranges and the AE energies differed greatly between juvenile and mature wood at identical relative water losses. • Vulnerability curves could be constructed by relating the cumulated amount of relative AE energy to the relative loss of water and to xylem tension. AE testing in combination with feature extraction offers a readily automated and easy to use alternative to the hydraulic method. PMID:16771986

  17. Smart Extraction and Analysis System for Clinical Research.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for a descriptive and predictive analysis to compile the survival statistics and predict the future chance of survivability. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy of human resources spent unnecessarily on manual tasks.
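
    A minimal stand-in for the CART-based survival prediction using scikit-learn's decision tree; the feature set and depth below are illustrative assumptions, not the paper's configuration.

        from sklearn.tree import DecisionTreeClassifier

        def fit_survival_tree(X, y, max_depth=4):
            # X: per-patient features (e.g., clinical stage, risk factors);
            # y: binary alive/dead outcome. CART with Gini impurity.
            return DecisionTreeClassifier(criterion="gini",
                                          max_depth=max_depth).fit(X, y)

        # prob_alive = fit_survival_tree(X, y).predict_proba(X_new)[:, 1]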

  18. Automated processing of forensic casework samples using robotic workstations equipped with nondisposable tips: contamination prevention.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Elliott, Jim; Yensen, Craig; Fourney, Ron M

    2008-05-01

    An automated process has been developed for the analysis of forensic casework samples using TECAN Genesis RSP 150/8 or Freedom EVO liquid handling workstations equipped exclusively with nondisposable tips. Robot tip cleaning routines have been incorporated strategically within the DNA extraction process as well as at the end of each session. Alternative options were examined for cleaning the tips and different strategies were employed to verify cross-contamination. A 2% sodium hypochlorite wash (1/5th dilution of the 10.8% commercial bleach stock) proved to be the best overall approach for preventing cross-contamination of samples processed using our automated protocol. The bleach wash steps do not adversely impact the short tandem repeat (STR) profiles developed from DNA extracted robotically and allow for major cost savings through the implementation of fixed tips. We have demonstrated that robotic workstations equipped with fixed pipette tips can be used with confidence with properly designed tip washing routines to process casework samples using an adapted magnetic bead extraction protocol.

  19. Rapid simultaneous determination of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine, 3,4-methylenedioxymethamphetamine, and 3,4-methylenedioxyethylamphetamine in urine by solid-phase extraction and GC-MS: a method optimized for high-volume laboratories.

    PubMed

    Stout, Peter R; Horn, Carl K; Klette, Kevin L

    2002-01-01

    To facilitate analysis of high sample volumes, an extraction, derivatization and gas chromatographic-mass spectrometric analysis method was developed to simultaneously determine amphetamine (AMP), methamphetamine (MAMP), 3,4-methylenedioxyamphetamine (MDA) 3,4-methylenedioxymethamphetamine (MDMA), and 3,4-methylenedioxyethylamphetamine (MDEA) in urine. This method utilized a positive-pressure manifold cation-exchange polymer-based solid-phase extraction followed by elution directly into automated liquid sampler (ALS) vials. Rapid derivatization was accomplished using heptafluorobutyric anhydride (HFBA). Recoveries averaged 90% or greater for each of the compounds. Limits of detection were 62.5 ng/mL (AMP and MDEA), 15.6 ng/mL (MAMP), and 31.3 ng/mL (MDA and MDMA) using a 2-mL sample volume. The method was linear to 5000 ng/mL for all compounds using MDMA-d5 and MAMP-d14 as internal standards. Over 200 human urine samples previously determined to contain the target analytes were analyzed using the method. Excellent agreement was seen with previous quantitations. The method was challenged with 75 potentially interfering compounds and no interferences were seen. These interfering compounds included ephedrine, pseudoephedrine, phenylpropanolamine, and phenethylamine. The method resulted in dramatic reductions in processing time and waste production.

  20. Hematocrit-Independent Quantitation of Stimulants in Dried Blood Spots: Pipet versus Microfluidic-Based Volumetric Sampling Coupled with Automated Flow-Through Desorption and Online Solid Phase Extraction-LC-MS/MS Bioanalysis.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-07-05

    A workflow overcoming microsample collection issues and hematocrit (HCT)-related bias would facilitate more widespread use of dried blood spots (DBS). This report describes comparative results between the use of a pipet and a microfluidic-based sampling device for the creation of volumetric DBS. Both approaches were successfully coupled to HCT-independent, fully automated sample preparation and online liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis allowing detection of five stimulants in finger prick blood. Reproducible, selective, accurate, and precise responses meeting generally accepted regulated bioanalysis guidelines were observed over the range of 5-1000 ng/mL whole blood. The applied heated flow-through solvent desorption of the entire spot and online solid phase extraction (SPE) procedure were unaffected by the blood's HCT value within the tested range of 28.0-61.5% HCT. Enhanced stability for mephedrone on DBS compared to liquid whole blood was observed. Finger prick blood samples were collected using both volumetric sampling approaches over a time course of 25 h after intake of a single oral dose of phentermine. A pharmacokinetic curve for the incurred phentermine was successfully produced using the described validated method. These results suggest that either volumetric sample collection method may be amenable to field-use followed by fully automated, HCT-independent DBS-SPE-LC-MS/MS bioanalysis for the quantitation of these representative controlled substances. Analytical data from DBS prepared with a pipet and microfluidic-based sampling devices were comparable, but the latter is easier to operate, making this approach more suitable for sample collection by unskilled persons.

  1. Monolithic methacrylate packed 96-tips for high throughput bioanalysis.

    PubMed

    Altun, Zeki; Skoglund, Christina; Abdel-Rehim, Mohamed

    2010-04-16

    In the pharmaceutical industry, the growing number of samples to be analyzed requires high-throughput and fully automated analytical techniques. Commonly used sample-preparation methods are solid-phase extraction (SPE), liquid-liquid extraction (LLE) and protein precipitation. In this paper we discuss a new SPE-based sample-preparation technique for high-throughput drug extraction developed and used by our group. The method uses a monolithic methacrylate polymer as the packing sorbent for a 96-tip robotic device. Using this device, a 96-well plate can be handled in 2-4 min. The key aspect of the monolithic phase is that monolithic material offers both good binding capacity and low back-pressure compared with, e.g., silica phases. We present the successful application of monolithic 96-tips and LC-MS/MS to the sample preparation of busulphan, roscovitine, metoprolol, pindolol and local anaesthetics from human plasma samples and cyclophosphamide from mice blood samples. Copyright 2009 Elsevier B.V. All rights reserved.

  2. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    PubMed

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significantly associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.

  3. Comparative Evaluation of Pavement Crack Detection Using Kernel-Based Techniques in Asphalt Road Surfaces

    NASA Astrophysics Data System (ADS)

    Miraliakbari, A.; Sok, S.; Ouma, Y. O.; Hahn, M.

    2016-06-01

    With the increasing demand for the digital survey and acquisition of road pavement conditions, there is a parallel growing need for the development of automated techniques for the analysis and evaluation of the actual road conditions. This is due in part to the resulting large volumes of road pavement data captured through digital surveys, and also to the requirements for rapid data processing and evaluation. In this study, the Canon 5D Mark II RGB camera with a resolution of 21 megapixels is used for road pavement condition mapping. Even though many imaging and mapping sensors are available, the development of automated systems for pavement distress detection, recognition and extraction is still a challenge. In order to detect and extract pavement cracks, a comparative evaluation of kernel-based segmentation methods comprising line filtering (LF), local binary pattern (LBP) and high-pass filtering (HPF) is carried out. While the LF and LBP methods are based on the principle of rotation invariance for pattern matching, the HPF applies the same principle for filtering, but with a rotation-invariant matrix. With respect to processing speed, HPF is fastest because it is based on a single kernel, whereas LF and LBP are based on several kernels. Experiments with 20 sample images containing linear, block and alligator cracks were carried out. On average, completeness of distress extraction of 81.2%, 76.2% and 81.1% was found for LF, HPF and LBP, respectively.
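
    As a rough illustration of high-pass-filter-based crack extraction (not the paper's exact kernels), one can subtract the image from a low-pass (blurred) version of itself and threshold the residual, since cracks are darker than the surrounding asphalt:

        import cv2
        import numpy as np

        def highpass_crack_mask(gray, blur_ksize=21, thresh=15):
            # `gray` is an 8-bit grayscale pavement image; cracks are thin,
            # dark structures, so keep strong positive (low - original) residuals.
            low = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
            residual = cv2.subtract(low, gray)
            _, mask = cv2.threshold(residual, thresh, 255, cv2.THRESH_BINARY)
            kernel = np.ones((3, 3), np.uint8)
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle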

  4. An automated method of on-line extraction coupled with flow injection and capillary electrophoresis for phytochemical analysis.

    PubMed

    Chen, Hongli; Ding, Xiuping; Wang, Min; Chen, Xingguo

    2010-11-01

    In this study, an automated system for phytochemical analysis was successfully fabricated for the first time in our laboratory. The system included on-line decocting, filtering, cooling, sample introduction, separation, and detection, which greatly simplified sample preparation and shortened the analysis time. Samples from the decoction extract were drawn every 5 min through an on-line filter and a condenser pipe to the sample loop, from which 20-μL samples were injected into the running buffer and transported into a split-flow interface coupling the flow injection and capillary electrophoresis systems. The separation of glycyrrhetinic acid (GTA) and glycyrrhizic acid (GA) took less than 5 min using a 10 mM borate buffer (pH adjusted to 8.8) and a +10 kV voltage. Calibration curves showed good linearity with correlation coefficients (R) greater than 0.9991. The intra-day repeatabilities (n = 5, expressed as relative standard deviation) of the proposed system, obtained using GTA and GA standards, were 1.1% and 0.8% for migration time and 0.7% and 0.9% for peak area, respectively. The mean recoveries of GTA and GA in the off-line extract of Glycyrrhiza uralensis Fisch root were better than 99.0%. The limits of detection (signal-to-noise ratio = 3) of the proposed method were 6.2 μg/mL and 6.9 μg/mL for GTA and GA, respectively. The dynamic changes of GTA and GA with decoction time were obtained during the on-line decoction of Glycyrrhiza uralensis Fisch root.

  5. Redefining the Practice of Peer Review Through Intelligent Automation-Part 3: Automated Report Analysis and Data Reconciliation.

    PubMed

    Reiner, Bruce I

    2018-02-01

    One method for addressing existing peer review limitations is the assignment of peer review cases on a completely blinded basis, in which the peer reviewer would create an independent report which can then be cross-referenced with the primary reader report of record. By leveraging existing computerized data mining techniques, one could in theory automate and objectify the process of report data extraction, classification, and analysis, while reducing time and resource requirements intrinsic to manual peer review report analysis. Once inter-report analysis has been performed, resulting inter-report discrepancies can be presented to the radiologist of record for review, along with the option to directly communicate with the peer reviewer through an electronic data reconciliation tool aimed at collaboratively resolving inter-report discrepancies and improving report accuracy. All associated report and reconciled data could in turn be recorded in a referenceable peer review database, which provides opportunity for context and user-specific education and decision support.

  6. Development of a semi-automated combined PET and CT lung lesion segmentation framework

    NASA Astrophysics Data System (ADS)

    Rossi, Farli; Mokri, Siti Salasiah; Rahni, Ashrani Aizzuddin Abd.

    2017-03-01

    Segmentation is one of the most important steps in automated medical diagnosis applications and affects the accuracy of the overall system. In this paper, we propose a semi-automated segmentation method for extracting lung lesions from thoracic PET/CT images by combining low-level processing and active contour techniques. The lesions are first segmented in the PET images, which are converted to standardised uptake values (SUVs) beforehand. The segmented PET images then serve as initial contours for the subsequent active contour segmentation of the corresponding CT images. To evaluate accuracy, the Jaccard Index (JI) was used to measure the agreement between the segmented lesions and alternative segmentations from the QIN lung CT segmentation challenge, which is made possible by registering the whole-body PET/CT images to the corresponding thoracic CT images. The results show that our proposed technique has acceptable accuracy in lung lesion segmentation, with JI values of around 0.8, especially when considering the variability of the alternative segmentations.
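
    For reference, the Jaccard Index between two binary lesion masks is straightforward to compute:

        import numpy as np

        def jaccard_index(mask_a, mask_b):
            # JI = |A intersect B| / |A union B| for two binary masks.
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 1.0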

  7. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Road inventory mapping therefore has to be accurate and free of the variation introduced by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently productive, and the results can vary between individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings using MMS LIDAR data. The key idea of the method is extracting lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note, however, that the method must process every type of traffic marking. In this paper, we discuss a highly accurate method, independent of individual human operators, that applies the following steps: (1) binarizing LIDAR points by intensity and extracting the higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points within the buffers from the original LIDAR points; (6) extracting local intensity-changing points along scan lines from the extracted points; (7) extracting lines from the intensity-changing points through a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
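
    A minimal sketch of the line-extraction step (7) using OpenCV's probabilistic Hough transform on a rasterized intensity image; the thresholds are illustrative, and the paper's scan-line-based intensity-change detection is simplified to an edge map here.

        import cv2
        import numpy as np

        def marking_lines(intensity_img, intensity_thresh=200):
            # `intensity_img` is an 8-bit raster of LIDAR reflection intensity.
            _, binary = cv2.threshold(intensity_img, intensity_thresh, 255,
                                      cv2.THRESH_BINARY)
            edges = cv2.Canny(binary, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                                    minLineLength=30, maxLineGap=5)
            return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2)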

  8. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    PubMed

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images; in particular, glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions of the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial-fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. A 40-image ground truth dataset was manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnoses. Experiments involving experienced pathologists demonstrated an average error of 9 percentage points between the automated system's quantification and the pathologists' visual evaluation, along with a good correlation between the two. Experiments investigating variability among pathologists, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. It has been shown that the correlation between different pathologists' estimations of interstitial fibrosis area improved significantly when aided by the system, demonstrating its effectiveness as a diagnostic aide. Copyright © 2017 Elsevier B.V. All rights reserved.
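
    The elimination-based quantification described above reduces to simple mask arithmetic; a minimal sketch, assuming the non-fibrosis structures have already been segmented as binary masks:

```python
import numpy as np

def fibrosis_percentage(biopsy_mask, structure_masks):
    """Deduce interstitial fibrosis by eliminating segmented
    non-fibrosis structures (glomeruli, tubules, vessels, ...) from
    the biopsy area, then quantify it as a percentage of that area."""
    non_fibrosis = np.zeros_like(biopsy_mask, dtype=bool)
    for mask in structure_masks:
        non_fibrosis |= mask.astype(bool)
    fibrosis = biopsy_mask.astype(bool) & ~non_fibrosis
    return 100.0 * fibrosis.sum() / biopsy_mask.astype(bool).sum()
```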

  9. Automated Cell Detection and Morphometry on Growth Plate Images of Mouse Bone

    PubMed Central

    Ascenzi, Maria-Grazia; Du, Xia; Harding, James I; Beylerian, Emily N; de Silva, Brian M; Gross, Ben J; Kastein, Hannah K; Wang, Weiguang; Lyons, Karen M; Schaeffer, Hayden

    2014-01-01

    Microscopy imaging of mouse growth plates is extensively used in biology to understand the effect of specific molecules on various stages of normal bone development and on bone disease. Until now, such image analysis has been conducted by manual detection. In fact, when existing automated detection techniques were applied, morphological variations across the growth plate and heterogeneity of image background color, including the faint presence of cells (chondrocytes) located deeper in tissue away from the image’s plane of focus, and lack of cell-specific features, interfered with identification of cell. We propose the first method of automated detection and morphometry applicable to images of cells in the growth plate of long bone. Through ad hoc sequential application of the Retinex method, anisotropic diffusion and thresholding, our new cell detection algorithm (CDA) addresses these challenges on bright-field microscopy images of mouse growth plates. Five parameters, chosen by the user in respect of image characteristics, regulate our CDA. Our results demonstrate effectiveness of the proposed numerical method relative to manual methods. Our CDA confirms previously established results regarding chondrocytes’ number, area, orientation, height and shape of normal growth plates. Our CDA also confirms differences previously found between the genetic mutated mouse Smad1/5CKO and its control mouse on fluorescence images. The CDA aims to aid biomedical research by increasing efficiency and consistency of data collection regarding arrangement and characteristics of chondrocytes. Our results suggest that automated extraction of data from microscopy imaging of growth plates can assist in unlocking information on normal and pathological development, key to the underlying biological mechanisms of bone growth. PMID:25525552
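
    A minimal sketch of the anisotropic diffusion step in the CDA pipeline, in the classic Perona-Malik formulation (parameter values are illustrative, not the paper's):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions
    while preserving edges (gamma <= 0.25 for numerical stability)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction coefficients (exponential variant)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```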

  10. SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Wu, D

    Purpose: RadShield, a semi-automated diagnostic shielding software package, enables computed tomography (CT) exam rooms to be shielded more quickly and accurately than manual calculations. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also vendor-provided isodose curves overlaid onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets and fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as RadShield’s for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
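
    The isodose-curve fit named above, K(r) = ar^b, can be obtained by linear least squares in log-log space; a minimal sketch (not RadShield's implementation):

```python
import numpy as np

def fit_power_law(r, k):
    """Fit K(r) = a * r**b by least squares on log K = log a + b*log r."""
    b, log_a = np.polyfit(np.log(r), np.log(k), 1)
    return np.exp(log_a), b

# Synthetic check: r = np.array([0.5, 1, 2, 4]); k = 12.0 * r**-1.9
# fit_power_law(r, k) -> (approximately 12.0, -1.9)
```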

  11. Automated dispersive liquid-liquid microextraction coupled to high performance liquid chromatography - cold vapour atomic fluorescence spectroscopy for the determination of mercury species in natural water samples.

    PubMed

    Liu, Yao-Min; Zhang, Feng-Ping; Jiao, Bao-Yu; Rao, Jin-Yu; Leng, Geng

    2017-04-14

    An automated, home-constructed, and low cost dispersive liquid-liquid microextraction (DLLME) device that directly coupled to a high performance liquid chromatography (HPLC) - cold vapour atomic fluorescence spectroscopy (CVAFS) system was designed and developed for the determination of trace concentrations of methylmercury (MeHg+), ethylmercury (EtHg+) and inorganic mercury (Hg2+) in natural waters. With a simple, miniaturized and efficient automated DLLME system, nanogram amounts of these mercury species were extracted from natural water samples and injected into a hyphenated HPLC-CVAFS for quantification. The complete analytical procedure, including chelation, extraction, phase separation, collection and injection of the extracts, as well as HPLC-CVAFS quantification, was automated. Key parameters, such as the type and volume of the chelation, extraction and dispersive solvent, aspiration speed, sample pH, salt effect and matrix effect, were thoroughly investigated. Under the optimum conditions, the linear range was 10-1200 ng/L for EtHg+ and 5-450 ng/L for MeHg+ and Hg2+. Limits of detection were 3.0 ng/L for EtHg+ and 1.5 ng/L for MeHg+ and Hg2+. Reproducibility and recoveries were assessed by spiking three natural water samples with different Hg concentrations, giving recoveries from 88.4-96.1%, and relative standard deviations <5.1%. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Manta Matcher: automated photographic identification of manta rays using keypoint features.

    PubMed

    Town, Christopher; Marshall, Andrea; Sethasathien, Nutthaporn

    2013-07-01

    For species which bear unique markings, such as natural spot patterning, field work has become increasingly reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been incorporated into a website (mantamatcher.org) which will serve as a global resource for ecological and conservation research. It will allow researchers to manage and track sightings data to establish important life-history parameters as well as determine other ecological data such as abundance, range, movement patterns, and structure of manta ray populations across the world.
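
    A minimal sketch of the SIFT keypoint matching baseline this work builds on, using OpenCV and Lowe's ratio test (the paper's enhanced matcher and image-enhancement steps are not reproduced here):

```python
import cv2

def count_pattern_matches(img_a, img_b, ratio=0.75):
    """Count SIFT keypoint matches surviving Lowe's ratio test between
    two ventral-pattern photographs (grayscale uint8 arrays); a higher
    count suggests the same individual."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    return sum(1 for m, n in pairs if m.distance < ratio * n.distance)

# imgs loaded e.g. with cv2.imread(path, cv2.IMREAD_GRAYSCALE)
```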

  14. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    NASA Astrophysics Data System (ADS)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  15. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies, and define optimized manual and automated procedures for the extraction of high-quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated into traditional phenol/chloroform extraction, a modified manual DNA IQ extraction or an automated DNA IQ/Te-Shake-based extraction to recover DNA for downstream applications. Quantitative recovery of high-quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  16. ACME, a GIS tool for Automated Cirque Metric Extraction

    NASA Astrophysics Data System (ADS)

    Spagnolo, Matteo; Pellitero, Ramon; Barr, Iestyn D.; Ely, Jeremy C.; Pellicer, Xavier M.; Rea, Brice R.

    2017-02-01

    Regional-scale studies of glacial cirque metrics provide key insights into the (palaeo)environment related to the formation of these erosional landforms. The growing availability of high-resolution terrain models means that more glacial cirques can be identified and mapped in the future. However, the extraction of their metrics still largely relies on time-consuming manual techniques or on combinations of more or less obsolete GIS tools. In this paper, a newly coded toolbox is provided for the automated, and comparatively quick, extraction of 16 key glacial cirque metrics, including length, width, circularity, planar and 3D area, elevation, slope, aspect, plan closure and hypsometry. The set of tools, named ACME (Automated Cirque Metric Extraction), is coded in Python, runs in one of the most commonly used GIS packages (ArcGIS) and has a user-friendly interface. A polygon layer of mapped cirques is required for all metrics, while a Digital Terrain Model and a point layer of cirque threshold midpoints are needed to run some of the tools. Results from ACME are comparable to those from other techniques and can be obtained rapidly, allowing large cirque datasets to be analysed and potentially important regional trends to be highlighted.
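
    Several of the listed metrics are simple functions of the mapped cirque polygon; for example, circularity under one common definition (ACME's exact formula may differ):

```python
import math

def circularity(area, perimeter):
    """Planar circularity 4*pi*A / P**2: equals 1.0 for a perfect
    circle and decreases as the cirque outline becomes elongated."""
    return 4.0 * math.pi * area / perimeter ** 2

# e.g. circularity(1.2e6, 4.5e3) for a cirque of 1.2 km^2 area and
# 4.5 km perimeter gives roughly 0.74
```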

  17. Automated Techniques for Quantification of Coastline Change Rates using Landsat Imagery along Caofeidian, China

    NASA Astrophysics Data System (ADS)

    Dong, Di; Li, Ziwei; Liu, Zhaoqin; Yu, Yang

    2014-03-01

    This paper focuses on the automated extraction and monitoring of coastlines by remote sensing techniques using multi-temporal Landsat imagery along Caofeidian, China. Caofeidian, one of the most economically active regions in China, has experienced dramatic change due to intensified human activities such as land reclamation, and these processes have caused morphological changes of the Caofeidian shoreline. In this study, shoreline extraction and change analysis are investigated. An algorithm based on image texture and mathematical morphology is proposed to automate coastline extraction. We tested this approach and found it capable of extracting coastlines from TM and ETM+ images with few manual modifications. The detected coastline vectors are then imported into ArcGIS, and the Digital Shoreline Analysis System (DSAS) is used to calculate change rates (the end point rate and the linear regression rate). The results show that remarkable coastline changes, especially accretion, are observed in some parts of the study area. The abnormal accretion is mostly attributed to the large-scale land reclamation during 2003 and 2004 in Caofeidian. We conclude that various construction projects, especially land reclamation, have changed the Caofeidian shoreline greatly, at rates far above normal.
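
    The two DSAS change rates named above reduce to simple computations on dated shoreline positions along a transect; a minimal sketch with illustrative variable names:

```python
import numpy as np

def end_point_rate(d_first, d_last, t_first, t_last):
    """End Point Rate (EPR): net shoreline movement divided by the
    time elapsed between the oldest and youngest shorelines."""
    return (d_last - d_first) / (t_last - t_first)

def linear_regression_rate(years, distances):
    """Linear Regression Rate (LRR): slope of a least-squares line
    fitted to all shoreline positions along a transect."""
    slope, _ = np.polyfit(years, distances, 1)
    return slope

# e.g. years = [1989, 2000, 2004, 2013]; distances in metres measured
# from a fixed baseline along one DSAS transect
```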

  18. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping

    PubMed Central

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-01-01

    High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantifying plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present the Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Because of the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment with 33 maize (Zea mays ‘Fernandez’) plants, grown for 9 weeks in an automated greenhouse with nondestructive imaging. The image data were then subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate highly with our manually measured data (correlation coefficients up to 0.98 and 0.95, respectively). In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818

  19. Quantitative determination of opioids in whole blood using fully automated dried blood spot desorption coupled to on-line SPE-LC-MS/MS.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-01-01

    Opioids are well-known, widely used painkillers. Increased stability of opioids in the dried blood spot (DBS) matrix compared with blood/plasma has been described. Other benefits of DBS techniques include point-of-care collection, less invasive micro-sampling, more economical shipment, and convenient storage. Current methodology for the analysis of micro whole blood samples for opioids is limited to the classical DBS workflow, including tedious manual punching of the DBS cards followed by extraction and liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. The goal of this study was to develop and validate a fully automated on-line sample preparation procedure for the analysis of DBS micro samples relevant to the detection of opioids in finger-prick blood. To this end, automated flow-through elution of DBS cards was followed by on-line solid-phase extraction (SPE) and analysis by LC-MS/MS. Selective, sensitive, accurate, and reproducible quantitation of five representative opioids in human blood at sub-therapeutic, therapeutic, and toxic levels was achieved. The range of reliable response (R² ≥ 0.997) was 1 to 500 ng/mL whole blood for morphine, codeine, oxycodone and hydrocodone, and 0.1 to 50 ng/mL for fentanyl. Inter-day, intra-day, and matrix inter-lot accuracy and precision were within 15%, even at the lower limit of quantitation (LLOQ). The method was successfully used to measure hydrocodone and its major metabolite norhydrocodone in incurred human samples. Our data support the enormous potential of DBS sampling and automated analysis for monitoring opioids as well as other pharmaceuticals in both anti-doping and pain management regimens. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Automated detection of videotaped neonatal seizures based on motion segmentation methods.

    PubMed

    Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-07-01

    This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
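
    A minimal sketch of extracting a temporal motion-strength signal from video via dense optical flow; we use OpenCV's Farneback estimator as a stand-in for the paper's optical-flow computation:

```python
import cv2
import numpy as np

def motion_strength_signal(video_path):
    """Mean optical-flow magnitude per frame pair, giving a temporal
    motion-strength signal for the recording."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev = gray
    cap.release()
    return np.asarray(signal)
```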

  1. Clinical Performance of a Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry Method for Detection of Certain blaKPC-Containing Plasmids

    PubMed Central

    Youn, Jung-Ho; Drake, Steven K.; Weingarten, Rebecca A.; Frank, Karen M.; Dekker, John P.

    2015-01-01

    Rapid detection of blaKPC-containing organisms can significantly impact infection control and clinical practices, as well as therapeutic choices. Current molecular and phenotypic methods to detect these organisms, however, require additional testing beyond routine organism identification. In this study, we evaluated the clinical performance of matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) to detect pKpQIL_p019 (p019)—an ∼11,109-Da protein associated with certain blaKPC-containing plasmids that was previously shown to successfully track a clonal outbreak of blaKPC-pKpQIL-Klebsiella pneumoniae in a proof-of-principle study (A. F. Lau, H. Wang, R. A. Weingarten, S. K. Drake, A. F. Suffredini, M. K. Garfield, Y. Chen, M. Gucek, J. H. Youn, F. Stock, H. Tso, J. DeLeo, J. J. Cimino, K. M. Frank, and J. P. Dekker, J Clin Microbiol 52:2804–2812, 2014, http://dx.doi.org/10.1128/JCM.00694-14). PCR for the p019 gene was used as the reference method. Here, blind analysis of 140 characterized Enterobacteriaceae isolates using two protein extraction methods (plate extraction and tube extraction) and two peak detection methods (manual and automated) showed sensitivities and specificities ranging from 96% to 100% and from 95% to 100%, respectively (2,520 spectra analyzed). Feasible laboratory implementation methods (plate extraction and automated analysis) demonstrated 96% sensitivity and 99% specificity. All p019-positive isolates (n = 26) contained blaKPC and were carbapenem resistant. Retrospective analysis of an additional 720 clinical Enterobacteriaceae spectra found an ∼11,109-Da signal in nine spectra (1.3%), including seven from p019-containing, carbapenem-resistant isolates (positive predictive value [PPV], 78%). Instrument tuning had a significant effect on assay sensitivity, highlighting important factors that must be considered as MALDI-TOF MS moves into applications beyond microbial identification. Using a large blind clinical data set, we have shown that spectra acquired for routine organism identification can also be analyzed automatically in real time at high throughput, at no additional expense to the laboratory, to enable rapid detection of potentially blaKPC-containing carbapenem-resistant isolates, providing early and clinically actionable results. PMID:26338858

  2. Development of an automated method for determining oil in water by direct aqueous supercritical fluid extraction coupled on-line with infrared spectroscopy.

    PubMed

    Minty, B; Ramsey, E D; Davies, I

    2000-12-01

    A direct aqueous supercritical fluid extraction (SFE) system was developed which can be directly interfaced to an infrared spectrometer for the determination of oil in water. The technique is designed to provide an environmentally clean, automated alternative to established IR methods for oil in water analysis which require the use of restricted organic solvents. The SFE-FTIR method involves minimum sample handling stages, with on-line analysis of a 500 ml water sample being complete within 15 min. Method accuracy for determining water samples spiked with gasoline, white spirit, kerosene, diesel or engine oil was 81-100% with precision (RSD) ranging from 3 to 17%. An independent evaluation determined a 2 ppm limit of quantification for diesel in industrial effluents. The results of a comparative study involving an established IR method and the SFE-FTIR method indicate that oil levels calculated using an accepted equation which includes coefficients derived from reference hydrocarbon standards may result in significant errors. A new approach permitted the derivation of quantification coefficients for the SFE-FTIR analyses which provided improved results. In situations where the identity of the oil to be analysed is known, a rapid off-line SFE-FTIR system calibration procedure was developed and successfully applied to various oils. An optional in-line silica gel clean-up procedure incorporated within the SFE-FTIR system enables the same water sample to be analysed for total oil content including vegetable oils and selectively for petroleum oil content within a total of 20 min. At the end of an analysis the SFE system is cleaned using an in situ 3 min clean cycle.

  3. [A method for rapid extracting three-dimensional root model of vivo tooth from cone beam computed tomography data based on the anatomical characteristics of periodontal ligament].

    PubMed

    Zhao, Y J; Wang, S W; Liu, Y; Wang, Y

    2017-02-18

    To explore a new method for rapid extracting and rebuilding three-dimensional (3D) digital root model of vivo tooth from cone beam computed tomography (CBCT) data based on the anatomical characteristics of periodontal ligament, and to evaluate the extraction accuracy of the method. In the study, 15 extracted teeth (11 with single root, 4 with double roots) were collected from oral clinic and 3D digital root models of each tooth were obtained by 3D dental scanner with a high accuracy 0.02 mm in STL format. CBCT data for each patient were acquired before tooth extraction, DICOM data with a voxel size 0.3 mm were input to Mimics 18.0 software. Segmentation, Morphology operations, Boolean operations and Smart expanded function in Mimics software were used to edit teeth, bone and periodontal ligament threshold mask, and root threshold mask were automatically acquired after a series of mask operations. 3D digital root models were extracted in STL format finally. 3D morphology deviation between the extracted root models and corresponding vivo root models were compared in Geomagic Studio 2012 software. The 3D size errors in long axis, bucco-lingual direction and mesio-distal direction were also calculated. The average value of the 3D morphology deviation for 15 roots by calculating Root Mean Square (RMS) value was 0.22 mm, the average size errors in the mesio-distal direction, the bucco-lingual direction and the long axis were 0.46 mm, 0.36 mm and -0.68 mm separately. The average time of this new method for extracting single root was about 2-3 min. It could meet the accuracy requirement of the root 3D reconstruction fororal clinical use. This study established a new method for rapid extracting 3D root model of vivo tooth from CBCT data. It could simplify the traditional manual operation and improve the efficiency and automation of single root extraction. The strategy of this method for complete dentition extraction needs further research.

  4. [Comparison of manual and automated (MagNA Pure) nucleic acid isolation methods in molecular diagnosis of HIV infections].

    PubMed

    Alp, Alpaslan; Us, Dürdal; Hasçelik, Gülşen

    2004-01-01

    Rapid quantitative molecular methods are very important for the diagnosis of human immunodeficiency virus (HIV) infections, assessment of prognosis and follow up. The purpose of this study was to compare and evaluate the performances of conventional manual extraction method and automated MagNA Pure system, for the nucleic acid isolation step which is the first and most important step in molecular diagnosis of HIV infections. Plasma samples of 35 patients in which anti-HIV antibodies were found as positive by microparticule enzyme immunoassay and confirmed by immunoblotting method, were included in the study. The nucleic acids obtained simultaneously by manual isolation kit (Cobas Amplicor, HIV-1 Monitor Test, version 1.5, Roche Diagnostics) and automated system (MagNA Pure LC Total Nucleic Acid Isolation Kit, Roche Diagnostics), were amplified and detected in Cobas Amplicor (Roche Diagnostics) instrument. Twenty three of 35 samples (65.7%) were found to be positive, and 9 (25.7%) were negative by both of the methods. The agreement between the methods were detected as 91.4%, for qualitative results. Viral RNA copies detected by manual and MagNA Pure isolation methods were found between 76.0-7.590.000 (mean: 487.143) and 113.0-20.300.0000 (mean: 2.174.097) copies/ml, respectively. When both of the overall and individual results were evaluated, the number of RNA copies obtained with automatized system, were found higher than the manual method (p<0.05). Three samples which had low numbers of nucleic acids (113, 773, 857, respectively) with MagNA Pure, yielded negative results with manual method. In conclusion, the automatized MagNA Pure system was found to be a reliable, rapid and practical method for the isolation of HIV-RNA.

  5. Analysis of trace contamination of phthalate esters in ultrapure water using a modified solid-phase extraction procedure and automated thermal desorption-gas chromatography/mass spectrometry.

    PubMed

    Liu, Hsu-Chuan; Den, Walter; Chan, Shu-Fei; Kin, Kuan Tzu

    2008-04-25

    The present study was aimed to develop a procedure modified from the conventional solid-phase extraction (SPE) method for the analysis of trace concentration of phthalate esters in industrial ultrapure water (UPW). The proposed procedure allows UPW sample to be drawn through a sampling tube containing hydrophobic sorbent (Tenax TA) to concentrate the aqueous phthalate esters. The solid trap was then demoisturized by two-stage gas drying before subjecting to thermal desorption and analysis by gas chromatography-mass spectrometry. This process removes the solvent extraction procedure necessary for the conventional SPE method, and permits automation of the analytical procedure for high-volume analyses. Several important parameters, including desorption temperature and duration, packing quantity and demoisturizing procedure, were optimized in this study based on the analytical sensitivity for a standard mixture containing five different phthalate esters. The method detection limits for the five phthalate esters were between 36 ng l(-1) and 95 ng l(-1) and recovery rates between 15% and 101%. Dioctyl phthalate (DOP) was not recovered adequately because the compound was both poorly adsorbed and desorbed on and off Tenax TA sorbents. Furthermore, analyses of material leaching from poly(vinyl chloride) (PVC) tubes as well as the actual water samples showed that di-n-butyl phthalate (DBP) and di(2-ethylhexyl) phthalate (DEHP) were the common contaminants detected from PVC contaminated UPW and the actual UPW, as well as in tap water. The reduction of DEHP in the production processes of actual UPW was clearly observed, however a DEHP concentration of 0.20 microg l(-1) at the point of use was still being quantified, suggesting that the contamination of phthalate esters could present a barrier to the future cleanliness requirement of UPW. The work demonstrated that the proposed modified SPE procedure provided an effective method for rapid analysis and contamination identification in UPW production lines.

  6. Observation of sea-ice dynamics using synthetic aperture radar images: Automated analysis

    NASA Technical Reports Server (NTRS)

    Vesecky, John F.; Samadani, Ramin; Smith, Martha P.; Daida, Jason M.; Bracewell, Ronald N.

    1988-01-01

    The European Space Agency's ERS-1 satellite, as well as others planned to follow, is expected to carry synthetic-aperture radars (SARs) over the polar regions beginning in 1989. A key component in utilization of these SAR data is an automated scheme for extracting the sea-ice velocity field from a time sequence of SAR images of the same geographical region. Two techniques for automated sea-ice tracking, image pyramid area correlation (hierarchical correlation) and feature tracking, are described. Each technique is applied to a pair of Seasat SAR sea-ice images. The results compare well with each other and with manually tracked estimates of the ice velocity. The advantages and disadvantages of these automated methods are pointed out. Using these ice velocity field estimates it is possible to construct one sea-ice image from the other member of the pair. Comparing the reconstructed image with the observed image, errors in the estimated velocity field can be recognized and a useful probable error display created automatically to accompany ice velocity estimates. It is suggested that this error display may be useful in segmenting the sea ice observed into regions that move as rigid plates of significant ice velocity shear and distortion.
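
    A single-level sketch of the area-correlation idea, using normalised cross-correlation to locate an ice patch in the later image (the paper's version correlates hierarchically across an image pyramid; names and window sizes are ours):

```python
import cv2

def track_ice_patch(img_t0, img_t1, x, y, half=32, search=96):
    """Locate the patch centred at (x, y) in img_t0 within a larger
    search window of img_t1 via normalised cross-correlation, and
    return its pixel displacement between the two acquisitions."""
    patch = img_t0[y - half:y + half, x - half:x + half]
    window = img_t1[y - search:y + search, x - search:x + search]
    score = cv2.matchTemplate(window, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)   # top-left of best match
    dx = max_loc[0] + half - search           # displacement of the centre
    dy = max_loc[1] + half - search
    return dx, dy   # divide by the time between images for velocity
```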

  7. Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.

    PubMed

    Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W

    2017-08-01

    Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques use simple statistics and/or reject anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results; furthermore, rejecting individual data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and amount to simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. The features thus found are then used for anomaly detection by applying quartile thresholding, which rejects a complete breath if any of its features is out of range. The thresholds are determined by both saliency and performance metrics rather than the qualitative assumptions of previous work. Feature ranking indicates that our new landmark features are among the highest-scoring candidates across saliency criteria, regardless of age. F1-scores, receiver operating characteristic curves, and the variability of the mean resistance metric show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal is objective and comparable to the manual method. This work is a critical step toward automating forced oscillation technique quality control.
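
    A minimal sketch of the quartile-thresholding rule described above, rejecting a complete breath when any feature leaves the interquartile band (the breakpoint k = 1 is our reading of the 1IQR naming, not a confirmed detail):

```python
import numpy as np

def keep_breaths(features, k=1.0):
    """Quartile thresholding: reject a complete breath when any of its
    features falls outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(features, [25, 75], axis=0)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    outside = (features < lo) | (features > hi)
    return ~outside.any(axis=1)   # boolean mask of breaths to keep

# features: (n_breaths, n_features) array, one row of features per breath
```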

  8. Confirmatory and quantitative analysis of fatty acid esters of hydroxy fatty acids in serum by solid phase extraction coupled to liquid chromatography tandem mass spectrometry.

    PubMed

    López-Bascón, María Asunción; Calderón-Santiago, Mónica; Priego-Capote, Feliciano

    2016-11-02

    A novel class of endogenous mammalian lipids endowed with antidiabetic and anti-inflammatory properties has recently been discovered: fatty acid esters of hydroxy fatty acids (FAHFAs), formed by condensation between a hydroxy fatty acid and a fatty acid. FAHFAs are present in human serum and tissues at low nanomolar concentrations; therefore, highly sensitive and selective profiling analysis of these compounds in clinical samples is in demand. An automated qualitative and quantitative method based on on-line coupling of solid phase extraction and liquid chromatography-tandem mass spectrometry has been developed here for the determination of FAHFAs in serum with the required sensitivity and selectivity. Matrix effects were evaluated by preparing calibration models in serum and methanol. Recovery factors ranged between 73.8 and 100% in serum. The within-day variability ranged from 7.1 to 13.8%, and the between-days variability from 9.3 to 21.6%, which are quite acceptable values given the low concentration levels at which the target analytes are found. The method was applied to a cohort of human serum samples to estimate concentration profiles as a function of glycaemic state and obesity. Statistical analysis revealed three FAHFAs with levels significantly different depending on the glycaemic state or the body mass index. This automated method could be implemented in high-throughput analysis with minimum user assistance. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning.

    PubMed

    Airola, Antti; Pyysalo, Sampo; Björne, Jari; Pahikkala, Tapio; Ginter, Filip; Salakoski, Tapio

    2008-11-19

    Automated extraction of protein-protein interactions (PPI) is an important and widely studied task in biomedical text mining. We propose a graph kernel based approach for this task. In contrast to earlier approaches to PPI extraction, the introduced all-paths graph kernel has the capability to make use of full, general dependency graphs representing the sentence structure. We evaluate the proposed method on five publicly available PPI corpora, providing the most comprehensive evaluation done for a machine learning based PPI-extraction system. We additionally perform a detailed evaluation of the effects of training and testing on different resources, providing insight into the challenges involved in applying a system beyond the data it was trained on. Our method is shown to achieve state-of-the-art performance with respect to comparable evaluations, with 56.4 F-score and 84.8 AUC on the AImed corpus. We show that the graph kernel approach performs on state-of-the-art level in PPI extraction, and note the possible extension to the task of extracting complex interactions. Cross-corpus results provide further insight into how the learning generalizes beyond individual corpora. Further, we identify several pitfalls that can make evaluations of PPI-extraction systems incomparable, or even invalid. These include incorrect cross-validation strategies and problems related to comparing F-score results achieved on different evaluation resources. Recommendations for avoiding these pitfalls are provided.
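
    A minimal random-walk-style sketch in the spirit of summing over all paths via a Neumann series; the authors' kernel operates on richer labeled dependency-graph representations, so this is illustrative only:

```python
import numpy as np

def all_paths_weights(adj, lam=0.25):
    """Sum of weights over paths of every length between node pairs:
    sum_k (lam*adj)^k = inv(I - lam*adj), convergent when
    lam * spectral_radius(adj) < 1."""
    n = adj.shape[0]
    return np.linalg.inv(np.eye(n) - lam * adj)

def path_kernel(adj_a, labels_a, adj_b, labels_b, lam=0.25):
    """Kernel value between two graphs: path weights summed over node
    pairs whose endpoint labels match across the graphs."""
    wa = all_paths_weights(np.asarray(adj_a, float), lam)
    wb = all_paths_weights(np.asarray(adj_b, float), lam)
    # m[i, p] = 1 when node i of graph A shares a label with node p of B
    m = np.array([[float(la == lb) for lb in labels_b] for la in labels_a])
    # computes sum_{i,j,p,q} wa[i,j] * wb[p,q] * m[i,p] * m[j,q]
    return float(np.sum(wa * (m @ wb @ m.T)))
```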

  10. Automated extraction of single H atoms with STM: tip state dependency

    NASA Astrophysics Data System (ADS)

    Møller, Morten; Jarvis, Samuel P.; Guérinet, Laurent; Sharp, Peter; Woolley, Richard; Rahe, Philipp; Moriarty, Philip

    2017-02-01

    The atomistic structure of the tip apex plays a crucial role in performing reliable atomic-scale surface and adsorbate manipulation using scanning probe techniques. We have developed an automated extraction routine for controlled removal of single hydrogen atoms from the H:Si(100) surface. The set of atomic extraction protocols detects a variety of desorption events during scanning tunneling microscope (STM)-induced modification of the hydrogen-passivated surface. The influence of the tip state on the probability of hydrogen removal was examined by comparing the desorption efficiency for various classifications of STM topographs (rows, dimers, atoms, etc.). We find that dimer-row-resolving tip apices extract hydrogen atoms most readily and reliably (and with the least spurious desorption), while tip states which provide atomic resolution counter-intuitively have a lower probability of single H atom removal.

  11. Text Mining in Biomedical Domain with Emphasis on Document Clustering.

    PubMed

    Renganathan, Vinaitheerthan

    2017-07-01

    With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents. This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.

  12. Nucleus and cytoplasm segmentation in microscopic images using K-means clustering and region growing

    PubMed Central

    Sarrafzadeh, Omid; Dehnavi, Alireza Mehri

    2015-01-01

    Background: Segmentation of leukocytes acts as the foundation for all automated image-based hematological disease recognition systems. Most of the time, hematologists are interested in the evaluation of white blood cells only, and digital image processing techniques can help them in their analysis and diagnosis. Materials and Methods: The main objective of this paper is to detect leukocytes in a blood smear microscopic image and segment them into their two dominant elements, nucleus and cytoplasm. The segmentation is conducted using two stages of K-means clustering. First, the nuclei are segmented using K-means clustering. Then, a proposed method based on region growing is applied to separate connected nuclei. Next, the nuclei are subtracted from the original image. Finally, the cytoplasm is segmented using the second stage of K-means clustering. Results: The results indicate that the proposed method is able to extract the nucleus and cytoplasm regions accurately and works well even when there is no significant contrast between the components in the image. Conclusions: In this paper, a method based on K-means clustering and region growing is proposed to detect leukocytes in a blood smear microscopic image and segment their components, the nucleus and the cytoplasm. As the region-growing step of the algorithm relies on edge information, it will not be able to separate connected nuclei accurately where edges are poor; it requires at least a weak edge to exist between the nuclei. The nucleus and cytoplasm segments of a leukocyte can be used for feature extraction and classification, which leads to automated leukemia detection. PMID:26605213
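
    A minimal sketch of the first K-means stage, clustering pixel colours and taking the darkest cluster as the nuclei (a common heuristic for stained smears; the paper's exact colour features may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_nuclei(image_rgb, n_clusters=3):
    """First clustering stage: group pixel colours with K-means and
    return the darkest cluster as a binary nucleus mask."""
    h, w, c = image_rgb.shape
    pixels = image_rgb.reshape(-1, c).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(h, w)
    darkest = int(np.argmin(km.cluster_centers_.sum(axis=1)))
    return labels == darkest

# The region-growing separation of connected nuclei and the second
# K-means pass for cytoplasm would then operate on this mask.
```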

  13. Fast trace determination of nine odorant and estrogenic chloro- and bromo-phenolic compounds in real water samples through automated solid-phase extraction coupled with liquid chromatography tandem mass spectrometry.

    PubMed

    Yuan, Su-Fen; Liu, Ze-Hua; Lian, Hai-Xian; Yang, Chuang-Tao; Lin, Qing; Yin, Hua; Lin, Zhang; Dang, Zhi

    2018-02-01

    A fast and reliable method was developed for the simultaneous trace determination of nine odorous and estrogenic chloro- and bromo-phenolic compounds (CPs and BPs) in water samples using solid-phase extraction (SPE) coupled with liquid chromatography tandem mass spectrometry (LC-MS/MS). For sample preparation, the extraction efficiencies of two widely applied cartridges, Oasis HLB and Sep-Pak C18, were compared, and the Oasis HLB cartridge showed much better extraction performance; the pH of the water sample also plays an important role in extraction, and pH 2-3 was found to be most appropriate. For separation of the target compounds, a small addition of ammonium hydroxide markedly improves the detection sensitivity, and the optimal concentration was determined to be 0.2%. The developed method was validated and showed excellent linearity (R² > 0.995), low limits of detection (LOD, 1.9-6.2 ng/L), and good recovery efficiencies of 57-95% in surface and tap water with low relative standard deviations (RSD, 1.3-17.4%). The method was finally applied to one tap water and one surface water sample; most of the nine targets were detected, but all were below their odor thresholds, and their estrogen equivalents (EEQ) were also very low.

  14. Creation of a virtual cutaneous tissue bank

    NASA Astrophysics Data System (ADS)

    LaFramboise, William A.; Shah, Sujal; Hoy, R. W.; Letbetter, D.; Petrosko, P.; Vennare, R.; Johnson, Peter C.

    2000-04-01

    Cellular and non-cellular constituents of skin contain fundamental morphometric features and structural patterns that correlate with tissue function. High-resolution digital image acquisition is performed using an automated system and proprietary software to assemble adjacent images and create a contiguous, lossless digital representation of individual microscope slide specimens. Serial extraction, evaluation and statistical analysis of cutaneous features are performed using an automated analysis system to derive normal cutaneous parameters for essential structural skin components. Automated digital cutaneous analysis allows fast extraction of microanatomic data with accuracy approximating manual measurement. The process provides rapid assessment of features both within individual specimens and across sample populations. The images, component data and statistical analyses comprise a bioinformatics database that serves as an architectural blueprint for skin tissue engineering and as a diagnostic standard of comparison for pathologic specimens.

  15. Automated labeling of bibliographic data extracted from biomedical online journals

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Le, Daniel X.; Thoma, George R.

    2003-01-01

    A prototype system has been designed to automate the extraction of bibliographic data (e.g., article title, authors, abstract, affiliation and others) from online biomedical journals to populate the National Library of Medicine's MEDLINE database. This paper describes a key module in this system: the labeling module that employs statistics and fuzzy rule-based algorithms to identify segmented zones in an article's HTML pages as specific bibliographic data. Results from experiments conducted with 1,149 medical articles from forty-seven journal issues are presented.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Derr; Milos Manic

    Time and location data play a very significant role in a variety of factory automation scenarios, from automated vehicles and robots, through their navigation, tracking, and monitoring, to optimization and security services. In addition, pervasive wireless capabilities combined with time and location information are enabling new applications in areas such as transportation systems, health care, elder care, military, emergency response, critical infrastructure, and law enforcement. A person or object in proximity to certain areas for specific durations of time may pose a risk hazard to themselves, others, or the environment. This paper presents DSTiPE, a novel fuzzy-based method for calculating the spatio-temporal risk that an object with wireless communications presents to the environment. The presented Matlab-based application for fuzzy spatio-temporal risk cluster extraction is verified on a diagonal vehicle movement example.
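
    A minimal sketch of a fuzzy spatio-temporal risk grade combining proximity and dwell-time memberships; the membership breakpoints and the min T-norm are illustrative choices, not DSTiPE's actual rule base:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], flat on [b, c],
    falls on [c, d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

def spatio_temporal_risk(distance_m, dwell_s):
    """Fuzzy risk grade in [0, 1] from proximity to a restricted area
    and dwell time, combined with a min T-norm."""
    near = trapezoid(distance_m, -1, 0, 5, 20)          # fully 'near' within 5 m
    long_stay = trapezoid(dwell_s, 30, 120, 1e9, 2e9)   # fully 'long' after 2 min
    return np.minimum(near, long_stay)
```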

  17. Analysis of nitrosamines in water by automated SPE and isotope dilution GC/HRMS Occurrence in the different steps of a drinking water treatment plant, and in chlorinated samples from a reservoir and a sewage treatment plant effluent.

    PubMed

    Planas, Carles; Palacios, Oscar; Ventura, Francesc; Rivera, Josep; Caixach, Josep

    2008-08-15

    A method based on automated solid-phase extraction (SPE) and isotope dilution gas chromatography/high resolution mass spectrometry (GC/HRMS) has been developed for the analysis of nine nitrosamines in water samples. The combination of automated SPE and GC/HRMS for the analysis of nitrosamines has not been reported previously. The method combines the selectivity and sensitivity of GC/HRMS analysis with the high efficiency of automated SPE on coconut charcoal EPA 521 cartridges. Low method detection limits (MDLs) were achieved, along with a simpler procedure and less dependence on the operator compared with methods based on manual SPE. Quality requirements for isotope dilution-based methods were met for most analysed nitrosamines with regard to trueness (80-120%), method precision (<15%) and MDLs (0.08-1.7 ng/L). Nineteen water samples (16 from a drinking water treatment plant (DWTP), 2 chlorinated samples from a sewage treatment plant (STP) effluent, and 1 chlorinated sample from a reservoir) were analysed. Concentrations of nitrosamines in the STP effluent were 309.4 and 730.2 ng/L, being higher when higher doses of chlorine were applied. N-Nitrosodimethylamine (NDMA) and N-nitrosodiethylamine (NDEA) were the main compounds identified in the STP effluent, and NDEA was detected above 200 ng/L, the regulatory level for NDMA in effluents in Ontario, Canada. Lower concentrations of nitrosamines were found in the reservoir (20.3 ng/L) and in the DWTP samples (n.d.-28.6 ng/L). NDMA and NDEA were found, respectively, in the reservoir and in treated and highly chlorinated DWTP samples at concentrations above 10 ng/L (the guide value established in different countries). The highest concentrations of nitrosamines were found after chlorination and ozonation processes (ozonated, treated and highly chlorinated water) in DWTP samples.

  18. Automation of Silica Bead-based Nucleic Acid Extraction on a Centrifugal Lab-on-a-Disc Platform

    NASA Astrophysics Data System (ADS)

    Kinahan, David J.; Mangwanya, Faith; Garvey, Robert; Chung, Danielle WY; Lipinski, Artur; Julius, Lourdes AN; King, Damien; Mohammadi, Mehdi; Mishra, Rohit; Al-Ofi, May; Miyazaki, Celina; Ducrée, Jens

    2016-10-01

    We describe a centrifugal microfluidic ‘Lab-on-a-Disc’ (LoaD) technology for DNA purification towards eventual integration into a Sample-to-Answer platform for detection of the pathogen Escherichia coli O157:H7 from food samples. For this application, we use a novel microfluidic architecture which combines ‘event-triggered’ dissolvable film (DF) valves with a reaction chamber gated by a centrifugo-pneumatic siphon valve (CPSV). This architecture permits comprehensive flow control through simple changes in the speed of the platform's innate spindle motor. Even before method optimisation, characterisation by DNA fluorescence reveals an extraction efficiency of 58%, which is close to that of commercial spin columns.

  19. A volumetric pulmonary CT segmentation method with applications in emphysema assessment

    NASA Astrophysics Data System (ADS)

    Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.

    2006-03-01

    A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks, such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea and primary bronchi; the pulmonary region is then identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high-resolution CT exams, due to the presence of aerial and vascular structures. Nevertheless, the average error is below the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an emphysema identification and representation method that also classifies emphysema according to severity. Two clinically proven thresholds are applied to identify regions with severe and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for visualizing classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
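
    A compressed sketch of the threshold-plus-morphology stage on a single slice, using scikit-image; the -400 HU air threshold and structuring-element sizes are illustrative choices, not the authors' tuned values.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import morphology

        def lung_mask(slice_hu: np.ndarray) -> np.ndarray:
            """Binary lung mask for one axial CT slice (values in Hounsfield units)."""
            air = slice_hu < -400                              # air-like voxels: lungs, airways, background
            air = morphology.remove_small_objects(air, min_size=500)
            labels, _ = ndi.label(air)
            # Discard components touching the image border (the surrounding air).
            border = np.unique(np.concatenate(
                [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
            mask = (labels > 0) & np.isin(labels, border, invert=True)
            # Closing fills vessel holes and smooths the pleural boundary.
            return morphology.binary_closing(mask, morphology.disk(5))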

  20. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
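
    As a rough illustration of morphology-driven landmark extraction (not the authors' exact algorithm), one can rank windows by morphological-gradient response and keep the strongest as candidate chips; the chip size and count are assumptions.

        import numpy as np
        from skimage.morphology import dilation, erosion
        from skimage.util import img_as_float

        def landmark_chips(band, n_chips=10, half=16):
            """Return small image chips centred on the strongest morphological-gradient responses."""
            band = img_as_float(band)
            se = np.ones((3, 3))                            # 3x3 structuring element
            grad = dilation(band, se) - erosion(band, se)   # morphological gradient
            chips, g = [], grad.copy()
            for _ in range(n_chips):
                r, c = np.unravel_index(np.argmax(g), g.shape)
                r0, c0 = max(0, r - half), max(0, c - half)
                chips.append(band[r0:r + half, c0:c + half])
                g[r0:r + half, c0:c + half] = 0             # suppress the picked neighborhood
            return chips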

  1. An improved active contour model for glacial lake extraction

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Chen, F.; Zhang, M.

    2017-12-01

    Active contour models are widely used in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in an active contour model evolves to a stable condition, the desired result in a given image. As a typical region-based active contour model, the C-V model detects weak boundaries well and is robust to noise, which gives it great potential for glacial lake extraction. Glacial lakes are sensitive indicators of global climate change, so accurately delineating glacial lake boundaries is essential for evaluating the hydrologic and living environment. However, the current approaches to glacial lake extraction, mainly water-index methods and recognition/classification methods, are difficult to apply directly to large-scale glacial lake extraction because of the diversity of glacial lakes and the many confounding factors in the imagery, such as image noise, shadows, snow, and ice. Given the above-mentioned advantages of the C-V model and the difficulties of glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To assess the extraction results, three typical glacial lake development sites were selected (the Altai Mountains, the central Himalayas, and southeastern Tibet), with Landsat-8 OLI imagery as the experimental data source and Google Earth imagery as reference data for verifying the results. The experiments suggest that the improved active contour model we propose effectively discriminates glacial lakes from complex backgrounds with a high Kappa coefficient (0.895), especially for small glacial lakes, which appear as weak information in the image. Our findings provide a new approach for improving accuracy when small glacial lakes make up a large proportion of the scene, and open the possibility of automated glacial lake mapping over large areas.
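
    The baseline C-V model that the authors modify is available in scikit-image; a minimal run on a single image band might look as follows, where the file name is a placeholder, the parameters are defaults rather than tuned values, and the signed-pressure-force modification itself is not implemented (the max_num_iter keyword assumes scikit-image 0.19 or later).

        from skimage.io import imread
        from skimage.util import img_as_float
        from skimage.segmentation import chan_vese

        # Hypothetical single-band (e.g., NIR) subset exported from a Landsat-8 OLI scene.
        band = img_as_float(imread("landsat8_nir_subset.tif"))
        lake_mask = chan_vese(band, mu=0.25, lambda1=1.0, lambda2=1.0,
                              max_num_iter=200, init_level_set="checkerboard")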

  2. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Jr., Edward J.; Parrish, Robert H.; Russell, James R.

    2008-06-24

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  3. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Edward J.; Parrish, Robert H.; Russell, James R.

    2000-01-01

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  4. Detection of reflecting surfaces by a statistical model

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Chee-Hung H.

    2009-02-01

    Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. Automatically extracting useful features and segmenting objects of interest from digital images, including remote sensing imagery, has become a critical task for image understanding. Unfortunately, current research on automated feature extraction largely ignores contextual information; as a result, the fidelity of the attributes populated for features and objects of interest is unsatisfactory. In this paper, we explore meaningful object extraction that integrates reflecting surfaces. Detection of specular reflecting surfaces can be useful in target identification and can be applied to environmental monitoring, disaster prediction and analysis, military applications, and counter-terrorism. Our method is based on a statistical model that captures the statistical properties of specular reflecting surfaces; the reflecting surfaces are then detected through cluster analysis.
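
    One plausible reading of the statistical-model-plus-cluster-analysis pipeline, sketched with k-means over simple per-pixel statistics; the features, cluster count, and brightest-cluster heuristic are assumptions for illustration, not the authors' model.

        import numpy as np
        from scipy import ndimage as ndi
        from sklearn.cluster import KMeans

        def specular_mask(gray: np.ndarray) -> np.ndarray:
            """Cluster pixels on (intensity, local variance); return the brightest cluster.

            `gray` is assumed to be a float image scaled to [0, 1].
            """
            mean = ndi.uniform_filter(gray, size=5)
            var = ndi.uniform_filter(gray ** 2, size=5) - mean ** 2
            feats = np.stack([gray.ravel(), var.ravel()], axis=1)
            labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
            # Treat the cluster with the highest mean intensity as specular.
            bright = np.argmax([gray.ravel()[labels == k].mean() for k in range(3)])
            return (labels == bright).reshape(gray.shape)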

  5. Automated segmentation of foveal avascular zone in fundus fluorescein angiography.

    PubMed

    Zheng, Yalin; Gandhi, Jagdeep Singh; Stangos, Alexandros N; Campa, Claudio; Broadbent, Deborah M; Harding, Simon P

    2010-07-01

    PURPOSE. To describe and evaluate the performance of a computerized automated segmentation technique for use in quantification of the foveal avascular zone (FAZ). METHODS. A computerized technique for automated segmentation of the FAZ using images from fundus fluorescein angiography (FFA) was applied to 26 transit-phase images obtained from patients with various grades of diabetic retinopathy. The area containing the FAZ zone was first extracted from the original image and smoothed by a Gaussian kernel (sigma = 1.5). An initializing contour was manually placed inside the FAZ of the smoothed image and iteratively moved by the segmentation program toward the FAZ boundary. Five tests with different initializing curves were run on each of 26 images to assess reproducibility. The accuracy of the program was also validated by comparing results obtained by the program with the FAZ boundaries manually delineated by medical retina specialists. Interobserver performance was then evaluated by comparing delineations from two of the experts. RESULTS. One-way analysis of variance indicated that the disparities between different tests were not statistically significant, signifying excellent reproducibility for the computer program. There was a statistically significant linear correlation between the results obtained by automation and manual delineations by experts. CONCLUSIONS. This automated segmentation program can produce highly reproducible results that are comparable to those made by clinical experts. It has the potential to assist in the detection and management of foveal ischemia and to be integrated into automated grading systems.

  6. The RABiT: A Rapid Automated Biodosimetry Tool For Radiological Triage. II. Technological Developments

    PubMed Central

    Garty, Guy; Chen, Youhua; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Brenner, David J.

    2011-01-01

    Purpose: Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. Materials and methods: The RABiT analyzes fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cutoff dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. Results: We have developed a new robotic system for lymphocyte processing, making use of upgraded laser power and parallel processing of four capillaries at once. This system has accelerated lymphocyte isolation, the main bottleneck of RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Conclusions: Parallel handling of multiple samples through the use of dedicated, purpose-built robotics and high-speed imaging allows analysis of up to 30,000 samples per day. PMID:21557703

  7. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads.

    PubMed

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-02-04

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection in the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and vibrated through the washing reagents in the capillary tube under the control of the mechanical control system, and thus the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted in 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction is dependent on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and those of the conventional manual procedure. We have already developed the droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available as a point-of-care test in clinics as well as in general hospitals. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Rapid non-enzymatic extraction method for isolating PCR-quality camelpox virus DNA from skin.

    PubMed

    Yousif, A Ausama; Al-Naeem, A Abdelmohsen; Al-Ali, M Ahmad

    2010-10-01

    Molecular diagnostic investigations of orthopoxvirus (OPV) infections are performed using a variety of clinical samples including skin lesions, tissues from internal organs, blood and secretions. Skin samples are particularly convenient for rapid diagnosis and molecular epidemiological investigations of camelpox virus (CMLV). Classical extraction procedures and commercial spin-column-based kits are time-consuming, relatively expensive, and require multiple extraction and purification steps in addition to proteinase K digestion. A rapid non-enzymatic procedure for extracting CMLV DNA from dried scabs or pox lesions was developed to overcome some of the limitations of the available DNA extraction techniques. The procedure requires as little as 10 mg of tissue and produces highly purified DNA [OD(260)/OD(280) ratios between 1.47 and 1.79] with concentrations ranging from 6.5 to 16 microg/ml. The extracted CMLV DNA proved suitable for virus-specific qualitative and semi-quantitative PCR applications. Compared to spin-column and conventional viral DNA extraction techniques, the two-step extraction procedure saves money and time, and retains the potential for automation without compromising CMLV PCR sensitivity. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  9. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
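
    For illustration, the trace-segment growth step (2) and its angle threshold can be sketched as follows; the flat point lists are simplified stand-ins for the mesh-based feature-point structures, and the default threshold sits inside the 70°-90° range reported above.

        import numpy as np

        def grow_segment(seed, candidates, angle_thresh_deg=80.0):
            """Extend a polyline through feature points while the turn angle stays small."""
            seg = [np.asarray(p, float) for p in seed]      # seed: two ordered feature points
            pts = [np.asarray(p, float) for p in candidates]
            while pts:
                d = seg[-1] - seg[-2]                       # current growth direction
                i = int(np.argmin([np.linalg.norm(p - seg[-1]) for p in pts]))
                step = pts[i] - seg[-1]
                cos = d @ step / (np.linalg.norm(d) * np.linalg.norm(step) + 1e-12)
                if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > angle_thresh_deg:
                    break                                   # turn too sharp: stop this segment
                seg.append(pts.pop(i))
            return seg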

  10. Application of texture analysis method for mammogram density classification

    NASA Astrophysics Data System (ADS)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammograms. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy of mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work was carried out using the mini-Mammographic Image Analysis Society (mini-MIAS) database. Five classifiers are employed, namely Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than the LDA, NB, KNN and SVM classifiers. The proposed methodology achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
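
    The selection-plus-classification stage maps naturally onto scikit-learn; in this sketch, a few GLCM features stand in for the paper's full texture set, and the selector size and network hyperparameters are assumptions.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        def glcm_features(img_u8):
            """A few GLCM texture features for one uint8 mammogram patch."""
            glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2], levels=256)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        # X: one feature row per mammogram; y: density class (fatty/glandular/dense).
        clf = make_pipeline(SelectKBest(f_classif, k=6),          # ANOVA F-test selection
                            MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))
        # clf.fit(X_train, y_train); clf.score(X_test, y_test)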

  11. Real-time ultrasonic weld evaluation system

    NASA Astrophysics Data System (ADS)

    Katragadda, Gopichand; Nair, Satish; Liu, Harry; Brown, Lawrence M.

    1996-11-01

    Ultrasonic testing techniques are currently used as an alternative to radiography for detecting, classifying, and sizing weld defects, and for evaluating weld quality. Typically, ultrasonic weld inspections are performed manually, which requires significant operator expertise and time. Thus, in recent years, the emphasis has been on developing automated methods to aid or replace operators in critical weld inspections where inspection time, reliability, and operator safety are major issues. During this period, significant advances were made in the areas of weld defect classification and sizing. Very few of these methods, however, have found their way into the market, largely due to the lack of an integrated approach enabling real-time implementation. Also, little research effort was directed at improving weld acceptance criteria. This paper presents an integrated system utilizing state-of-the-art techniques for complete automation of the weld inspection procedure. The modules discussed include transducer tracking, classification, sizing, and weld acceptance criteria. Transducer tracking was studied by experimentally evaluating sonic and optical position tracking techniques; details of this evaluation are presented. Classification is obtained using a multi-layer perceptron. Results from different feature extraction schemes, including a new method based on a combination of time- and frequency-domain signal representations, are given. Algorithms developed to automate defect registration and sizing are discussed. A fuzzy-logic weld acceptance criterion is presented, describing how this scheme provides improved robustness compared to traditional flow-diagram standards.

  12. Implementation of a Flexible Tool for Automated Literature-Mining and Knowledgebase Development (DevToxMine)

    EPA Science Inventory

    Deriving novel relationships from the scientific literature is an important adjunct to data-mining activities for complex datasets in genomics and high-throughput screening activities. Automated text-mining algorithms can be used to extract relevant content from the literature and...

  13. Arsenic species determination in human scalp hair by pressurized hot water extraction and high performance liquid chromatography-inductively coupled plasma-mass spectrometry.

    PubMed

    Morado Piñeiro, Andrés; Moreda-Piñeiro, Jorge; Alonso-Rodríguez, Elia; López-Mahía, Purificación; Muniategui-Lorenzo, Soledad; Prada-Rodríguez, Darío

    2013-02-15

    Analytical methods for the determination of total arsenic and arsenic species (mainly As(III) and As(V)) in human scalp hair have been developed. Inductively coupled plasma-mass spectrometry (ICP-MS) and high performance liquid chromatography (HPLC) coupled to ICP-MS have been used for total arsenic and arsenic species determination, respectively. The proposed methods include a "green", fast, highly efficient and automated species leaching procedure by pressurized hot water extraction (PHWE). The operating parameters for PHWE, including modifier concentration, extraction temperature, static time, extraction steps, pressure, mean particle size, diatomaceous earth (DE) mass/sample mass ratio and flush volume, were studied using design of experiments (Plackett-Burman design, PBD). Optimum conditions were a modifier (acetic acid) concentration of 150 mM and powdered hair samples fully mixed with diatomaceous earth (DE) as a dispersing agent at a DE mass/sample mass ratio of 5. The extraction was carried out at 100°C and at an extraction pressure of 1500 psi for 5 min in four extraction steps. Under optimised conditions, limits of quantification of 7.0, 6.3 and 50.3 ng g(-1) for total As, As(III) and As(V), respectively, were achieved. Repeatability of the overall procedure was 4.4, 7.2 and 2.1% for total As, As(III) and As(V), respectively. The analysis of the GBW-07601 (human hair) certified reference material was used for validation. The optimised method was finally applied to several human scalp hair samples. Copyright © 2012 Elsevier B.V. All rights reserved.
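
    For reference, a Plackett-Burman screening design of the size typically used for this many factors (N = 12 runs accommodates up to 11 two-level factors, so the eight PHWE parameters fit) can be constructed from the standard generator row; this shows the design's shape, not the authors' factor assignment or levels.

        import numpy as np

        gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]          # standard N=12 generator row
        rows = [np.roll(gen, i) for i in range(11)]            # 11 cyclic shifts
        design = np.vstack(rows + [-np.ones(11, dtype=int)])   # plus the all-minus run
        print(design.shape)                                    # (12, 11): 12 runs, 11 factor columns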

  14. Determination of 74 new psychoactive substances in serum using automated in-line solid-phase extraction-liquid chromatography-tandem mass spectrometry.

    PubMed

    Lehmann, Sabrina; Kieliba, Tobias; Beike, Justus; Thevis, Mario; Mercer-Chalmers-Bender, Katja

    2017-10-01

    A detailed description is given of the development and validation of a fully automated in-line solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE-LC-MS/MS) method capable of detecting 90 central-stimulating new psychoactive substances (NPS) and 5 conventional amphetamine-type stimulants (amphetamine, 3,4-methylenedioxy-methamphetamine (MDMA), 3,4-methylenedioxy-amphetamine (MDA), 3,4-methylenedioxy-N-ethyl-amphetamine (MDEA), methamphetamine) in serum. The aim was to apply the validated method to forensic samples. The preparation of 150 μL of serum was performed by an Instrument Top Sample Preparation (ITSP)-SPE with mixed-mode cation exchanger cartridges. The extracts were directly injected into an LC-MS/MS system, using a biphenyl column and gradient elution with 2 mM ammonium formate/0.1% formic acid and acetonitrile/0.1% formic acid as mobile phases. The chromatographic run time amounts to 9.3 min (including re-equilibration). The total cycle time is 11 min, due to the interlacing between sample preparation and analysis. The method was fully validated using 69 NPS and five conventional amphetamine-type stimulants, according to the guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh). The guidelines were fully achieved for 62 analytes (with a limit of detection (LOD) between 0.2 and 4 μg/L), whilst full validation was not feasible for the remaining 12 analytes. For the fully validated analytes, the method achieved linearity in the 5 μg/L (lower limit of quantification, LLOQ) to 250 μg/L range (coefficients of determination > 0.99). Recoveries for 69 of these compounds were greater than 50%, with relative standard deviations ≤ 15%. The validated method was then tested for its capability in detecting a further 21 NPS, thus totalling 95 tested substances. An LOD between 0.4 and 1.6 μg/L was obtained for these 21 additional qualitatively-measured substances. The method was subsequently successfully applied to 28 specimens from routine forensic casework, of which 7 samples were determined to be positive for NPS consumption. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Use of Magnetic Bead Resin and Automated Liquid Handler Extraction Methods to Robotically Isolate Nucleic Acids of Biological Agent Simulates

    DTIC Science & Technology

    2003-09-01

    concentration, and Bacillus subtilis var. niger spores were detectable at 10,000 CFU/ml. When combined with bead beating, these spores were consistently... Biological Agent Simulants. Cell suspensions of Bacillus subtilis var. niger spores (BG spores) and Erwinia herbicola vegetative cells were prepared for... use as biological simulants. BG spores were prepared by inoculating 1 g spores of Bacillus subtilis var. niger (Merck & Co., Inc., Whitehouse Station

  16. Fast targeted analysis of 132 acidic and neutral drugs and poisons in whole blood using LC-MS/MS.

    PubMed

    Di Rago, Matthew; Saar, Eva; Rodda, Luke N; Turfus, Sophie; Kotsos, Alex; Gerostamoulos, Dimitri; Drummer, Olaf H

    2014-10-01

    The aim of this study was to develop an LC-MS/MS based screening technique that covers a broad range of acidic and neutral drugs and poisons by combining a small sample volume and efficient extraction technique with simple automated data processing. After protein precipitation of 100 μL of whole blood, 132 common acidic and neutral drugs and poisons including non-steroidal anti-inflammatory drugs, barbiturates, anticonvulsants, antidiabetics, muscle relaxants, diuretics and superwarfarin rodenticides (47 quantitated, 85 reported as detected) were separated using a Shimadzu Prominence HPLC system with a C18 separation column (Kinetex XB-C18, 4.6 mm × 150 mm, 5 μm), using gradient elution with a mobile phase of 25 mM ammonium acetate buffer (pH 7.5)/acetonitrile. The drugs were detected using an AB Sciex® API 2000 LC-MS/MS system (ESI+ and ESI-, MRM mode, two transitions per analyte). The method was fully validated in accordance with international guidelines. Quantification data obtained using one-point calibration compared favorably to that using multiple calibrants. The presented LC-MS/MS assay has proven to be applicable for determination of the analytes in blood. The fast and reliable extraction method combined with automated processing gives the opportunity for high throughput and fast turnaround times for forensic and clinical toxicology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Image-based path planning for automated virtual colonoscopy navigation

    NASA Astrophysics Data System (ADS)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during fly-through navigation. Moreover, because of the efficiency of our path planning and rendering algorithms, our VC fly-through navigation system can still guarantee 30 FPS.
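
    A toy version of the depth-driven steering idea, not the paper's full safe-region/target-region logic; the smoothing window size is an assumption.

        import numpy as np
        from scipy import ndimage as ndi

        def next_view_target(depth):
            """Pixel (row, col) of the deepest smoothed region in a fisheye depth image."""
            smooth = ndi.uniform_filter(depth, size=15)   # suppress single-pixel depth spikes
            # The returned pixel would be mapped to a 3D view ray via the fisheye camera model.
            return np.unravel_index(np.argmax(smooth), smooth.shape)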

  18. Bayesian convolutional neural network based MRI brain extraction on nonhuman primates.

    PubMed

    Zhao, Gengyan; Liu, Fang; Oler, Jonathan A; Meyerand, Mary E; Kalin, Ned H; Birn, Rasmus M

    2018-07-15

    Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining deep Bayesian convolutional neural network (CNN) and fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. Then, fully connected 3D CRF is used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. A better performance against all the compared methods was verified by statistical tests (all p-values < 10⁻⁴, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all the 100 subjects. The behavior of the uncertainty was also studied, which shows the uncertainty increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training set and the testing set increases. Copyright © 2018 Elsevier Inc. All rights reserved.
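
    The Monte Carlo dropout estimate described above is framework-generic; the following PyTorch sketch (not the authors' Bayesian SegNet code) assumes a binary segmentation network that returns logits.

        import torch

        def mc_dropout_predict(model, x, passes=20):
            """Mean probability map and per-voxel uncertainty from stochastic forward passes."""
            model.eval()
            for m in model.modules():
                if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
                    m.train()                              # keep dropout stochastic at test time
            with torch.no_grad():
                probs = torch.stack([torch.sigmoid(model(x)) for _ in range(passes)])
            return probs.mean(dim=0), probs.std(dim=0)     # segmentation map, model uncertainty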

  19. Orbital transfer vehicle launch operations study: Automated technology knowledge base, volume 4

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A simplified retrieval strategy for compiling automation-related bibliographies from NASA/RECON is presented. Two subsets of NASA Thesaurus subject terms were extracted: a primary list, which is used to obtain an initial set of citations; and a secondary list, which is used to limit or further specify a large initial set of citations. These subject term lists are presented in Appendix A as the Automated Technology Knowledge Base (ATKB) Thesaurus.

  20. [Preparation and applications of a supported liquid-liquid extraction column with a composite diatomite material].

    PubMed

    Bao, Jianmin; Ma, Zhishuang; Sun, Ying; Wang, Yongzun; Li, Youxin

    2012-08-01

    A rapid, special-purpose supported liquid-liquid extraction (SLE) column was developed using a composite diatomite material. The SLE column was evaluated by high performance liquid chromatography (HPLC) with acidic, neutral and alkaline compounds dissolved in water. Furthermore, some real complex samples were also analyzed by HPLC with the SLE method. The recoveries of benzoic acid (acidic), p-nitroaniline (alkaline) and methyl 4-hydroxybenzoate (neutral) treated by the SLE column were 90.6%, 98.1% and 97.7%, respectively, whereas the recoveries of the three compounds treated by the traditional liquid-liquid extraction (LLE) method were 71.9%, 81.9% and 83.9%. The results showed that the SLE technique gave higher recoveries than the traditional LLE method. The spiked recoveries of complex samples, such as benzoic acid in Sprite, and dexamethasone acetate, chlorphenamine maleate and indomethacin in bovine serum, were between 80% and 110%, with relative standard deviations (RSDs) of less than 15%. For biological specimens, these results are acceptable. Meanwhile, many disadvantages associated with the traditional LLE method, such as emulsion formation, did not occur with the SLE column. The SLE column is a rapid, simple, robust, easily automated, high-recovery and high-throughput sample preparation method that should find wide use in the future.
