Sample records for manual extraction methods

  1. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include erroneous extractions, which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method assigns higher scores to the manually refined CATs than to the automatically extracted CATs. On a 100-point scale, the average scores for the automatically extracted and the manually refined CATs are 82.0 (±15.8) and 88.9 (±5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.

  2. Development of representative magnetic resonance imaging-based atlases of the canine brain and evaluation of three methods for atlas-based segmentation.

    PubMed

    Milne, Marjorie E; Steward, Christopher; Firestone, Simon M; Long, Sam N; O'Brien, Terrence J; Moffat, Bradford A

    2016-04-01

    To develop representative MRI atlases of the canine brain and to evaluate 3 methods of atlas-based segmentation (ABS). 62 dogs without clinical signs of epilepsy and without MRI evidence of structural brain disease. The MRI scans from 44 dogs were used to develop 4 templates on the basis of brain shape (brachycephalic, mesaticephalic, dolichocephalic, and combined mesaticephalic and dolichocephalic). Atlas labels were generated by segmenting the brain, ventricular system, hippocampal formation, and caudate nuclei. The MRI scans from the remaining 18 dogs were used to evaluate 3 methods of ABS (manual brain extraction and application of a brain shape-specific template [A], automatic brain extraction and application of a brain shape-specific template [B], and manual brain extraction and application of a combined template [C]). The performance of each ABS method was compared by calculation of the Dice and Jaccard coefficients, with manual segmentation used as the gold standard. Method A had the highest mean Jaccard coefficient and was the most accurate ABS method assessed. Measures of overlap for ABS methods that used manual brain extraction (A and C) ranged from 0.75 to 0.95 and compared favorably with repeated measures of overlap for manual extraction, which ranged from 0.88 to 0.97. Atlas-based segmentation was an accurate and repeatable method for segmentation of canine brain structures. It could be performed more rapidly than manual segmentation, which should allow the application of computer-assisted volumetry to large data sets and clinical cases and facilitate neuroimaging research and disease diagnosis.
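    The Dice and Jaccard coefficients used above to score segmentation overlap are simple set measures. As a minimal illustration (a sketch on toy NumPy masks, not the authors' code), they can be computed as follows:

    ```python
    import numpy as np

    def dice_jaccard(mask_a, mask_b):
        """Dice and Jaccard overlap coefficients for two binary segmentation masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        dice = 2.0 * intersection / (a.sum() + b.sum())
        jaccard = intersection / np.logical_or(a, b).sum()
        return dice, jaccard

    # Toy example: an atlas-based segmentation versus a manual gold standard.
    manual = np.zeros((64, 64), dtype=bool)
    manual[20:40, 20:40] = True
    atlas_based = np.zeros((64, 64), dtype=bool)
    atlas_based[22:42, 22:42] = True
    print(dice_jaccard(atlas_based, manual))
    ```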

  3. Semiautomated Device for Batch Extraction of Metabolites from Tissue Samples

    PubMed Central

    2012-01-01

    Metabolomics has become a mainstream analytical strategy for investigating metabolism. The quality of data derived from these studies is proportional to the consistency of the sample preparation. Although considerable research has been devoted to finding optimal extraction protocols, most of the established methods require extensive sample handling. Manual sample preparation can be highly effective in the hands of skilled technicians, but an automated tool for purifying metabolites from complex biological tissues would be of obvious utility to the field. Here, we introduce the semiautomated metabolite batch extraction device (SAMBED), a new tool designed to simplify metabolomics sample preparation. We discuss SAMBED’s design and show that SAMBED-based extractions are of comparable quality to extracts produced through traditional methods (13% mean coefficient of variation from SAMBED versus 16% from manual extractions). Moreover, we show that aqueous SAMBED-based methods can be completed in less than a quarter of the time required for manual extractions. PMID:22292466

  4. [DNA extraction from bones and teeth using AutoMate Express forensic DNA extraction system].

    PubMed

    Gao, Lin-Lin; Xu, Nian-Lai; Xie, Wei; Ding, Shao-Cheng; Wang, Dong-Jing; Ma, Li-Qin; Li, You-Ying

    2013-04-01

    To explore a new method for extracting DNA from bones and teeth automatically. Samples of 33 bones and 15 teeth were prepared by the freeze-mill method and the manual method, respectively. DNA was extracted and quantified from the triturated samples with the AutoMate Express forensic DNA extraction system. DNA extraction from bones and teeth was completed in 3 hours using the AutoMate Express system. There was no statistical difference between the two methods in the DNA concentration obtained from bones. Good STR typing was obtained for both bones and teeth with the freeze-mill method, and the DNA concentration from teeth was higher than that obtained with the manual method. The AutoMate Express forensic DNA extraction system is a new way to extract DNA from bones and teeth and can be applied in forensic practice.

  5. Extracting DNA from FFPE Tissue Biospecimens Using User-Friendly Automated Technology: Is There an Impact on Yield or Quality?

    PubMed

    Mathieson, William; Guljar, Nafia; Sanchez, Ignacio; Sroya, Manveer; Thomas, Gerry A

    2018-05-03

    DNA extracted from formalin-fixed, paraffin-embedded (FFPE) tissue blocks is amenable to analytical techniques, including sequencing. DNA extraction protocols are typically long and complex, often involving an overnight proteinase K digest. Automated platforms that shorten and simplify the process are therefore an attractive proposition for users wanting a faster turn-around or to process large numbers of biospecimens. It is, however, unclear whether automated extraction systems return poorer DNA yields or quality than manual extractions performed by experienced technicians. We extracted DNA from 42 FFPE clinical tissue biospecimens using the QiaCube (Qiagen) and ExScale (ExScale Biospecimen Solutions) automated platforms, comparing DNA yields and integrities with those from manual extractions. The QIAamp DNA FFPE Spin Column Kit was used for manual and QiaCube DNA extractions and the ExScale extractions were performed using two of the manufacturer's magnetic bead kits: one extracting DNA only and the other simultaneously extracting DNA and RNA. In all automated extraction methods, DNA yields and integrities (assayed using DNA Integrity Numbers from a 4200 TapeStation and the qPCR-based Illumina FFPE QC Assay) were poorer than in the manual method, with the QiaCube system performing better than the ExScale system. However, ExScale was fastest, offered the highest reproducibility when extracting DNA only, and required the least intervention or technician experience. Thus, the extraction methods have different strengths and weaknesses, would appeal to different users with different requirements, and therefore, we cannot recommend one method over another.

  6. Automatic sentence extraction for the detection of scientific paper relations

    NASA Astrophysics Data System (ADS)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    The relations between scientific papers are very useful for researchers to see the interconnections between scientific papers quickly. By observing the inter-article relationships, researchers can identify, among others, the weaknesses of existing research, performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, methods that have been developed to detect paper relations include machine learning and rule-based methods. However, a problem still arises in the process of sentence extraction from scientific paper documents, which is still done manually. This manual process makes the detection of scientific paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, while the paper relations are identified based on the citation sentences. The performance of the resulting system is then compared with that of manual extraction. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.

  7. Extracting data from figures with software was faster, with higher interrater reliability than manual extraction.

    PubMed

    Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia

    2016-06-01

    To compare the speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software tool. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. Accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and the original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software to extract data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.
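    For orientation, interrater percent agreement of extracted data points can be computed as below; this is only a sketch with invented numbers, and the tolerance parameter is an assumption (the study does not state how agreement was defined):

    ```python
    def percent_agreement(values_a, values_b, tolerance=0.0):
        """Percentage of paired data points on which two raters agree (within a tolerance)."""
        assert len(values_a) == len(values_b)
        agree = sum(abs(a - b) <= tolerance for a, b in zip(values_a, values_b))
        return 100.0 * agree / len(values_a)

    # Hypothetical values read off the same figure by two raters.
    rater1 = [12.1, 8.0, 15.3, 22.4]
    rater2 = [12.0, 8.0, 15.4, 22.4]
    print(percent_agreement(rater1, rater2, tolerance=0.2))  # 100.0
    ```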

  8. Automated extraction for the analysis of 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid (THCCOOH) in urine using a six-head probe Hamilton Microlab 2200 system and gas chromatography-mass spectrometry.

    PubMed

    Whitter, P D; Cary, P L; Leaton, J I; Johnson, J E

    1999-01-01

    An automated extraction scheme for the analysis of 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid using the Hamilton Microlab 2200, which was modified for gravity-flow solid-phase extraction, has been evaluated. The Hamilton was fitted with a six-head probe, a modular valve positioner, and a peristaltic pump. The automated method significantly increased sample throughput, improved assay consistency, and reduced the time spent performing the extraction. Extraction recovery for the automated method was > 90%. The limit of detection, limit of quantitation, and upper limit of linearity were equivalent to the manual method: 1.5, 3.0, and 300 ng/mL, respectively. Precision at the 15-ng/mL cut-off was as follows: mean = 14.4, standard deviation = 0.5, coefficient of variation = 3.5%. Comparison of 38 patient samples, extracted by the manual and automated extraction methods, demonstrated the following correlation statistics: r = 0.991, slope = 1.029, and y-intercept = -2.895. Carryover was < 0.3% at 1000 ng/mL. Aliquoting/extraction time for the automated method (48 urine samples) was 50 min, whereas the manual procedure required approximately 2.5 h. The automated aliquoting/extraction method on the Hamilton Microlab 2200 and its use in forensic applications are reviewed.
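    The correlation statistics reported above (r, slope, y-intercept) are what a simple least-squares comparison of paired results yields. A hedged sketch with invented concentrations, using SciPy:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired THCCOOH results (ng/mL): manual vs. automated extraction.
    manual = np.array([18.0, 45.2, 102.5, 250.0, 33.1, 75.9])
    automated = np.array([16.5, 44.0, 103.8, 255.1, 31.0, 76.2])

    fit = stats.linregress(manual, automated)
    print(f"r = {fit.rvalue:.3f}, slope = {fit.slope:.3f}, y-intercept = {fit.intercept:.3f}")
    ```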

  9. Comparison of manual and automated nucleic acid extraction methods from clinical specimens for microbial diagnosis purposes.

    PubMed

    Wozniak, Aniela; Geoffroy, Enrique; Miranda, Carolina; Castillo, Claudia; Sanhueza, Francia; García, Patricia

    2016-11-01

    The choice of nucleic acid (NA) extraction method for molecular diagnosis in microbiology is of major importance because of the low microbial load and the diverse nature of microorganisms and clinical specimens. The NA yield of different extraction methods has been mostly studied using spiked samples. However, information from real human clinical specimens is scarce. The purpose of this study was to compare the performance of a manual low-cost extraction method (Qiagen kit or salting-out extraction method) with the automated high-cost MagNA Pure Compact method. According to cycle threshold values for different pathogens, MagNA Pure is as efficient as Qiagen for NA extraction from noncomplex clinical specimens (nasopharyngeal swab, skin swab, plasma, respiratory specimens). In contrast, according to cycle threshold values for RNAseP, the MagNA Pure method may not be an appropriate method for NA extraction from blood. We believe that the MagNA Pure's versatility, reduced risk of cross-contamination, and reduced hands-on time compensate for its high cost. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics

    PubMed Central

    RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.

    2015-01-01

    BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP and to report colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with that using the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Also, identification of adenomas and SSAs using NLP was compared with that using the manual method in 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
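    The ADR itself is straightforward to compute once screening examinations and adenoma findings are identified (by NLP or manually). A minimal sketch with hypothetical field names:

    ```python
    from collections import defaultdict

    def adenoma_detection_rate(exams):
        """exams: iterable of dicts with keys 'endoscopist' and 'adenoma_found' (bool)."""
        total = defaultdict(int)
        positive = defaultdict(int)
        for exam in exams:
            total[exam["endoscopist"]] += 1
            positive[exam["endoscopist"]] += bool(exam["adenoma_found"])
        return {doc: positive[doc] / total[doc] for doc in total}

    exams = [
        {"endoscopist": "A", "adenoma_found": True},
        {"endoscopist": "A", "adenoma_found": False},
        {"endoscopist": "B", "adenoma_found": True},
    ]
    print(adenoma_detection_rate(exams))  # {'A': 0.5, 'B': 1.0}
    ```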

  11. An Improved Manual Method for NOx Emission Measurement.

    ERIC Educational Resources Information Center

    Dee, L. A.; And Others

    The current manual NOx sampling and analysis method was evaluated. Improved time-integrated sampling and rapid analysis methods were developed. In the new method, the sample gas is drawn through a heated bed of uniquely active, crystalline PbO2, where NOx is quantitatively absorbed. Nitrate ion is later extracted with water and the…

  12. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial

    PubMed Central

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2016-01-01

    Introduction Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Conclusions Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. PMID:23394741
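    The abstract does not state which statistical test produced the quoted p-values; a paired comparison of per-case collection times is one plausible reading. A sketch with simulated times (not the study data), using SciPy:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated per-case collection times in minutes for the same 27 cases, two methods.
    manual_minutes = rng.normal(10.4, 2.1, size=27)
    automated_minutes = rng.normal(4.3, 1.1, size=27)

    t_stat, p_value = stats.ttest_rel(manual_minutes, automated_minutes)
    print(f"manual {manual_minutes.mean():.1f} min vs automated {automated_minutes.mean():.1f} min, p = {p_value:.1e}")
    ```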

  13. Using Computer-Extracted Data from Electronic Health Records to Measure the Quality of Adolescent Well-Care

    PubMed Central

    Gardner, William; Morton, Suzanne; Byron, Sepheen C; Tinoco, Aldo; Canan, Benjamin D; Leonhart, Karen; Kong, Vivian; Scholle, Sarah Hudson

    2014-01-01

    Objective To determine whether quality measures based on computer-extracted EHR data can reproduce findings based on data manually extracted by reviewers. Data Sources We studied 12 measures of care indicated for adolescent well-care visits for 597 patients in three pediatric health systems. Study Design Observational study. Data Collection/Extraction Methods Manual reviewers collected quality data from the EHR. Site personnel programmed their EHR systems to extract the same data from structured fields in the EHR according to national health IT standards. Principal Findings Overall performance measured via computer-extracted data was 21.9 percent, compared with 53.2 percent for manual data. Agreement measures were high for immunizations. Otherwise, agreement between computer extraction and manual review was modest (Kappa = 0.36) because computer-extracted data frequently missed care events (sensitivity = 39.5 percent). Measure validity varied by health care domain and setting. A limitation of our findings is that we studied only three domains and three sites. Conclusions The accuracy of computer-extracted EHR quality reporting depends on the use of structured data fields, with the highest agreement found for measures and in the setting that had the greatest concentration of structured fields. We need to improve documentation of care, data extraction, and adaptation of EHR systems to practice workflow. PMID:24471935
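    The agreement statistics quoted above (Cohen's kappa and the sensitivity of computer extraction against manual review) can be reproduced on toy labels with scikit-learn; the data below are invented:

    ```python
    from sklearn.metrics import cohen_kappa_score, recall_score

    # 1 = care event documented, 0 = not, per patient-measure pair (toy data).
    manual_review    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    computer_extract = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]

    kappa = cohen_kappa_score(manual_review, computer_extract)
    # Sensitivity: share of manually confirmed events that computer extraction recovered.
    sensitivity = recall_score(manual_review, computer_extract)
    print(f"kappa = {kappa:.2f}, sensitivity = {sensitivity:.2f}")
    ```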

  14. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial.

    PubMed

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-07-01

    Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p<0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p<0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Multicenter Comparative Evaluation of Five Commercial Methods for Toxoplasma DNA Extraction from Amniotic Fluid

    PubMed Central

    Yera, H.; Filisetti, D.; Bastien, P.; Ancelle, T.; Thulliez, P.; Delhaes, L.

    2009-01-01

    Over the past few years, a number of new nucleic acid extraction methods and extraction platforms using chemistry combined with magnetic or silica particles have been developed, in combination with instruments to facilitate the extraction procedure. The objective of the present study was to investigate the suitability of these automated methods for the isolation of Toxoplasma gondii DNA from amniotic fluid (AF). Therefore, three automated procedures were compared to two commercialized manual extraction methods. The MagNA Pure Compact (Roche), BioRobot EZ1 (Qiagen), and easyMAG (bioMérieux) automated procedures were compared to two manual DNA extraction kits, the QIAamp DNA minikit (Qiagen) and the High Pure PCR template preparation kit (Roche). Evaluation was carried out with two specific Toxoplasma PCRs (targeting the 529-bp repeat element), inhibitor search PCRs, and human beta-globin PCRs. The samples each consisted of 4 ml of AF with or without a calibrated Toxoplasma gondii RH strain suspension (0, 1, 2.5, 5, and 25 tachyzoites/ml). All PCR assays were laboratory-developed real-time PCR assays, using either TaqMan or fluorescent resonance energy transfer probes. A total of 1,178 PCRs were performed, including 978 Toxoplasma PCRs. The automated and manual methods were similar in sensitivity for DNA extraction from T. gondii at the highest concentration (25 Toxoplasma gondii cells/ml). However, our results showed that the DNA extraction procedures led to variable efficacy in isolating low concentrations of tachyzoites in AF samples (<5 Toxoplasma gondii cells/ml), a difference that might have repercussions since low parasite concentrations in AF exist and can lead to congenital toxoplasmosis. PMID:19846633

  16. [Development of an automated processing method to detect coronary motion for coronary magnetic resonance angiography].

    PubMed

    Asou, Hiroya; Imada, N; Sato, T

    2010-06-20

    On coronary MR angiography (CMRA), cardiac motion worsens image quality. To improve the image quality, detection of cardiac motion, especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of each individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images/R-R). Chronological changes of the signal were evaluated by Fourier transformation of each pixel of the images. Noise reduction by subtraction and extraction processing was performed. To extract structures with greater motion, such as the coronary arteries, morphological filtering and labeling were added. Using these image-processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared images obtained with the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times calculated with our method were shorter than those of the ordinary manual method, and scan time became about 10% longer than with the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with straightforward processing.
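    The core idea described above, Fourier transforming the signal-time course of every pixel in a coronary cine series to locate strongly moving structures, can be sketched in a few lines of NumPy. The frequency cutoff and threshold below are assumptions, not the authors' parameters:

    ```python
    import numpy as np

    def motion_energy_map(cine):
        """cine: array of shape (frames, rows, cols); returns per-pixel high-frequency energy."""
        spectrum = np.fft.rfft(cine, axis=0)      # temporal FFT for every pixel
        high_freq = np.abs(spectrum[2:])          # discard DC and the lowest bin (assumed cutoff)
        return high_freq.sum(axis=0)

    cine = np.random.rand(80, 128, 128)           # stand-in for 80 cine frames per R-R interval
    energy = motion_energy_map(cine)
    moving = energy > np.percentile(energy, 95)   # crude mask of strongly moving pixels
    print(moving.sum(), "candidate coronary-motion pixels")
    ```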

  17. Automated extraction of DNA from biological stains on fabric from crime cases. A comparison of a manual and three automated methods.

    PubMed

    Stangegaard, Michael; Hjort, Benjamin B; Hansen, Thomas N; Hoflund, Anders; Mogensen, Helle S; Hansen, Anders J; Morling, Niels

    2013-05-01

    The presence of PCR inhibitors in extracted DNA may interfere with the subsequent quantification and short tandem repeat (STR) reactions used in forensic genetic DNA typing. DNA extraction from fabric for forensic genetic purposes may be challenging due to the occasional presence of PCR inhibitors that may be co-extracted with the DNA. Using 120 forensic trace evidence samples consisting of various types of fabric, we compared three automated DNA extraction methods based on magnetic beads (PrepFiler Express Forensic DNA Extraction Kit on an AutoMate Express, QIAsymphony DNA Investigator kit either with the sample pre-treatment recommended by Qiagen or with an in-house optimized sample pre-treatment on a QIAsymphony SP) and one manual method (Chelex) with the aim of reducing the amount of PCR inhibitors in the DNA extracts and increasing the proportion of reportable STR-profiles. A total of 480 samples were processed. The highest DNA recovery was obtained with the PrepFiler Express kit on an AutoMate Express while the lowest DNA recovery was obtained using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen. Extraction using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen resulted in the lowest percentage of PCR inhibition (0%) while extraction using manual Chelex resulted in the highest percentage of PCR inhibition (51%). The largest number of reportable STR-profiles was obtained with DNA from samples extracted with the PrepFiler Express kit (75%) while the lowest number was obtained with DNA from samples extracted using a QIAsymphony SP with the sample pre-treatment recommended by Qiagen (41%). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Evaluation and comparison of rapid methods for the detection of Salmonella in naturally contaminated pine nuts using different pre-enrichment media.

    PubMed

    Wang, Hua; Gill, Vikas S; Cheng, Chorng-Ming; Gonzalez-Escalona, Narjol; Irvin, Kari A; Zheng, Jie; Bell, Rebecca L; Jacobson, Andrew P; Hammack, Thomas S

    2015-04-01

    Foodborne outbreaks involving pine nuts and peanut butter illustrate the need to rapidly detect Salmonella in low-moisture foods. However, the current Bacteriological Analytical Manual (BAM) culture method for Salmonella, using lactose broth (LB) as a pre-enrichment medium, has not reliably supported real-time quantitative PCR (qPCR) assays for certain foods. We evaluated two qPCR assays in LB and four other pre-enrichment media: buffered peptone water (BPW), modified BPW (mBPW), Universal Pre-enrichment broth (UPB), and BAX(®) MP media to detect Salmonella in naturally contaminated pine nuts (2011 outbreak). A four-way comparison among the culture method, Pathatrix(®) Auto, VIDAS(®) Easy SLM, and qPCR was conducted. Automated DNA extraction techniques were compared with manual extraction methods (boiling or InstaGene™). There were no significant differences (P > 0.05) among the five pre-enrichment media for pine nuts using the culture method. While both qPCR assays produced significantly (P ≤ 0.05) higher false negatives in 24 h pre-enriched LB than in the other four media, they were as sensitive as the culture method in BPW, mBPW, UPB, and BAX media. The VIDAS Easy and qPCR were equivalent; Pathatrix was the least effective method. The automated PrepSEQ™ DNA extraction, using 1000 μL of pre-enrichment broth, was as effective as the manual extraction methods. Published by Elsevier Ltd.

  19. Smart Extraction and Analysis System for Clinical Research.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for descriptive and predictive analysis to compile survival statistics and predict the future chance of survival. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart, automatic extraction of information and survival prediction methods. SEAS has reduced the time and effort that staff previously spent on manual tasks.

  20. Extracting BI-RADS Features from Portuguese Clinical Texts.

    PubMed

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to that of the manual method.
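    As a loose illustration of a lexicon-driven concept finder of this kind (the Portuguese surface forms and feature names below are purely illustrative, not the authors' grammar):

    ```python
    import re

    # Tiny illustrative lexicon mapping Portuguese surface forms to BI-RADS features.
    LEXICON = {
        "mass": r"n[oó]dulo|massa",
        "calcification": r"calcifica[cç][aã]o|calcifica[cç][oõ]es",
        "spiculated_margin": r"espiculad[ao]s?",
    }

    def find_concepts(report_text):
        """Return the BI-RADS features whose lexicon patterns occur in a free-text report."""
        return [feature for feature, pattern in LEXICON.items()
                if re.search(pattern, report_text, flags=re.IGNORECASE)]

    print(find_concepts("Nódulo de margens espiculadas no quadrante superior externo."))
    ```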

  21. High-throughput DNA extraction of forensic adhesive tapes.

    PubMed

    Forsberg, Christina; Jansson, Linda; Ansell, Ricky; Hedman, Johannes

    2016-09-01

    Tape-lifting has, since its introduction in the early 2000s, become a well-established sampling method in forensic DNA analysis. Sampling is quick and straightforward, while the following DNA extraction is more challenging due to the "stickiness", rigidity and size of the tape. We have developed, validated and implemented a simple and efficient direct lysis DNA extraction protocol for adhesive tapes that requires limited manual labour. The method uses Chelex beads and is applied with SceneSafe FAST tape. This direct lysis protocol provided higher mean DNA yields than PrepFiler Express BTA on the AutoMate Express, although the differences were not significant when using clothes worn in a controlled fashion as reference material (p=0.13 and p=0.34 for T-shirts and button-down shirts, respectively). Through in-house validation we show that the method is fit for purpose for application in casework, as it provides high DNA yields and amplifiability, as well as good reproducibility and DNA extract stability. After implementation in casework, the proportion of extracts with DNA concentrations above 0.01 ng/μL increased from 71% to 76%. Apart from providing higher DNA yields compared with the previous method, the introduction of the developed direct lysis protocol also reduced the amount of manual labour by half and doubled the potential throughput for tapes at the laboratory. Generally, simplified manual protocols can serve as a cost-effective alternative to sophisticated automation solutions when the aim is to enable high-throughput DNA extraction of complex crime scene samples. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  22. Automated data mining: an innovative and efficient web-based approach to maintaining resident case logs.

    PubMed

    Bhattacharya, Pratik; Van Stavern, Renee; Madhavan, Ramesh

    2010-12-01

    Use of resident case logs has been considered by the Residency Review Committee for Neurology of the Accreditation Council for Graduate Medical Education (ACGME). This study explores the effectiveness of a data-mining program for creating resident logs and compares the results to a manual data-entry system. Other potential applications of data mining to enhance resident education are also explored. Patient notes dictated by residents were extracted from the Hospital Information System and analyzed using an unstructured mining program. History, examination, and ICD codes were gathered for a 30-day period and compared to the existing manual case logs. The automated method extracted all resident dictations with the dates of encounter and transcription. The automated data-miner processed information from all 19 residents, while only 4 residents logged manually. The manual method identified only broad categories of diseases; the major categories were stroke or vascular disorder 53 (27.6%), epilepsy 28 (14.7%), and pain syndromes 26 (13.5%). In the automated method, epilepsy 114 (21.1%), cerebral atherosclerosis 114 (21.1%), and headache 105 (19.4%) were the most frequent primary diagnoses, and headache 89 (16.5%), seizures 94 (17.4%), and low back pain 47 (9%) were the most common chief complaints. More detailed patient information, such as tobacco use 227 (42%), alcohol use 205 (38%), and drug use 38 (7%), was extracted by the data-mining method. Manual case logs are time-consuming, provide limited information, and may be unpopular with residents. Data mining is a time-effective tool that may aid in the assessment of resident experience and the ACGME core competencies, or in resident clinical research. More study of this method in larger numbers of residency programs is needed.

  23. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    PubMed

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates, over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
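    Of the two connectivity features named above, relative wavelet entropy is the easier to sketch: it compares the wavelet-energy distributions of two channels. The wavelet family, decomposition level and toy signals below are assumptions, not the paper's configuration (PyWavelets is used for the decomposition):

    ```python
    import numpy as np
    import pywt

    def wavelet_energy_distribution(signal, wavelet="db4", level=5):
        """Relative energy per wavelet decomposition level of a 1-D signal."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    def relative_wavelet_entropy(signal_a, signal_b):
        """Kullback-Leibler-style divergence between the two energy distributions."""
        p = wavelet_energy_distribution(signal_a)
        q = wavelet_energy_distribution(signal_b)
        return float(np.sum(p * np.log(p / q)))

    t = np.arange(0, 30, 1 / 256)                                      # 30 s of a 256 Hz "EEG" signal
    ch1 = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)   # alpha-like channel
    ch2 = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(t.size)    # delta-like channel
    print(relative_wavelet_entropy(ch1, ch2))
    ```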

  24. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics

    PubMed Central

    Chriskos, Panteleimon; Frantzidis, Christos A.; Gkivogkli, Polyxeni T.; Bamidis, Panagiotis D.; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the “ENVIHAB” facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates, over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging. PMID:29628883

  25. Extracting BI-RADS Features from Portuguese Clinical Texts

    PubMed Central

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2013-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to that of the manual method. PMID:23797461

  26. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    PubMed

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning chooses the most informative documents for supervised learning in order to reduce the amount of manual annotation required. Previous works on active learning, however, focused on the tasks of entity recognition and protein-protein interactions, but not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems as follows: We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method to the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than previous methods such as entropy- and Gibbs-error-based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method to named entity recognition tasks also improves the document selection for manual annotation of named entities.

  27. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate and robust registration method that achieves satisfactory registration results.

  28. Application of Machine Learning in Urban Greenery Land Cover Extraction

    NASA Astrophysics Data System (ADS)

    Qiao, X.; Li, L. L.; Li, D.; Gan, Y. L.; Hou, A. Y.

    2018-04-01

    Urban greenery is a critical part of the modern city, and greenery coverage information is essential for land resource management, environmental monitoring and urban planning. Extracting urban greenery information from remote sensing images is challenging because trees and grassland are mixed with built-up areas. In this paper, we propose a new automatic pixel-based greenery extraction method using multispectral remote sensing images. The method includes three main steps. First, a small part of the images is manually interpreted to provide prior knowledge. Secondly, a five-layer neural network is trained and optimised with the manual extraction results, which are divided into training, verification and testing samples. Lastly, the well-trained neural network is applied to the unlabelled data to perform the greenery extraction. GF-2 and GJ-1 high-resolution multispectral remote sensing images were used to extract greenery coverage information in the built-up areas of city X, and the method shows favourable performance over the 619 square kilometers of the study area. When compared with the traditional NDVI method, the proposed method also gives a more accurate delineation of the greenery region. Owing to its low computational load and high accuracy, it has great potential for automatic greenery extraction over large areas, saving considerable manpower and resources.
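    The NDVI baseline mentioned above reduces to a per-pixel band ratio followed by a threshold. A hedged sketch with a toy image; which bands are red and near-infrared, and the 0.3 threshold, depend on the sensor product and are assumptions here:

    ```python
    import numpy as np

    def ndvi(red, nir):
        """Normalized Difference Vegetation Index per pixel."""
        red = red.astype(np.float64)
        nir = nir.astype(np.float64)
        return (nir - red) / (nir + red + 1e-9)

    image = np.random.randint(0, 1024, size=(100, 100, 4))    # toy 4-band image (rows, cols, bands)
    greenery_mask = ndvi(image[..., 2], image[..., 3]) > 0.3  # assumed red/NIR bands and threshold
    print(f"{greenery_mask.mean():.2%} of pixels classified as greenery")
    ```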

  29. The Extraction of Terrace in the Loess Plateau Based on radial method

    NASA Astrophysics Data System (ADS)

    Liu, W.; Li, F.

    2016-12-01

    Terraces on the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; their positioning and automatic extraction would simplify land-use investigation. Existing methods of terrace extraction mainly include visual interpretation and automatic extraction. The manual method is used in land-use investigation, but it is time-consuming and laborious. Researchers have put forward several automatic extraction methods. For example, the Fourier transform method can recognize terraces and find their accurate position in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces; texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges; object-oriented classification is a newer image-classification approach, but when it is applied to terrace extraction, fractured polygons are the most serious problem and their geological meaning is difficult to explain. To position the terraces, we use high-resolution remote sensing imagery to extract and analyze the gray values of the pixels that each radial passes through. In the recognition process, we first use DEM data analysis, or manual selection, to roughly confirm the positions of peak points; secondly, we take each peak point as a center and cast radials in all directions; finally, we extract the gray values of the pixels that the radials pass through and analyze their changing characteristics to confirm whether a terrace exists. To obtain accurate terrace positions, the discontinuity of terraces, their extension direction, ridge width, the image-processing algorithm, remote sensing image illumination and other influencing factors were fully considered when designing the algorithms.
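    The central operation of the radial method, sampling gray values along rays cast from a peak point, can be sketched as follows; the nearest-neighbour sampling, ray length and angular step are simplifying assumptions:

    ```python
    import numpy as np

    def radial_profile(image, center, angle_deg, length):
        """Gray values sampled along one radial from a peak point (nearest-neighbour sampling)."""
        theta = np.radians(angle_deg)
        rows, cols = image.shape
        r0, c0 = center
        values = []
        for step in range(length):
            r = int(round(r0 + step * np.sin(theta)))
            c = int(round(c0 + step * np.cos(theta)))
            if 0 <= r < rows and 0 <= c < cols:
                values.append(image[r, c])
        return np.array(values)

    image = np.random.randint(0, 255, size=(500, 500))        # stand-in for a high-resolution scene
    profiles = [radial_profile(image, (250, 250), a, 200) for a in range(0, 360, 10)]
    # Periodic oscillations along a profile would hint at terrace ridges in that direction.
    print(len(profiles), "radial gray-value profiles extracted")
    ```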

  30. A Customized Attention-Based Long Short-Term Memory Network for Distant Supervised Relation Extraction.

    PubMed

    He, Dengchao; Zhang, Hongjun; Hao, Wenning; Zhang, Rui; Cheng, Kai

    2017-07-01

    Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled training corpus may contain many false-positive instances, which hurt the performance of relation extraction. Moreover, in traditional feature-based distant supervised approaches, extraction models adopt human-designed features derived from natural language processing, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to achieve better data representation for relation extraction, without manually designed features, in the distant supervision setting rather than in fully supervised relation extraction, and it utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
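    The word-level attention component amounts to a softmax-weighted sum of word vectors. A minimal NumPy sketch (dimensions, the query vector and the dot-product scoring are illustrative, not the paper's architecture):

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def word_level_attention(word_vectors, query):
        """Weight each word vector by its relevance to a query vector, then sum."""
        scores = word_vectors @ query            # one relevance score per word
        weights = softmax(scores)
        return weights @ word_vectors            # attention-weighted sentence representation

    rng = np.random.default_rng(42)
    sentence = rng.normal(size=(12, 50))         # 12 words, 50-dimensional embeddings
    relation_query = rng.normal(size=50)         # stand-in for a learned relation query vector
    print(word_level_attention(sentence, relation_query).shape)  # (50,)
    ```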

  31. Literature mining of protein-residue associations with graph rules learned through distant supervision.

    PubMed

    Ravikumar, Ke; Liu, Haibin; Cohn, Judith D; Wall, Michael E; Verspoor, Karin

    2012-10-05

    We propose a method for automatic extraction of protein-specific residue mentions from the biomedical literature. The method searches text for mentions of amino acids at specific sequence positions and attempts to correctly associate each mention with a protein also named in the text. The methods presented in this work will enable improved protein functional site extraction from articles, ultimately supporting protein function prediction. Our method made use of linguistic patterns for identifying the amino acid residue mentions in text. Further, we applied an automated graph-based method to learn syntactic patterns corresponding to protein-residue pairs mentioned in the text. We finally present an approach to automated construction of relevant training and test data using the distant supervision model. The performance of the method was assessed by extracting protein-residue relations from a new automatically generated test set of sentences containing high-confidence examples found using distant supervision. It achieved an F-measure of 0.84 on the automatically created silver corpus and 0.79 on a manually annotated gold data set for this task, outperforming previous methods. The primary contributions of this work are to (1) demonstrate the effectiveness of distant supervision for automatic creation of training data for protein-residue relation extraction, substantially reducing the effort and time involved in manual annotation of a data set and (2) show that the graph-based relation extraction approach we used generalizes well to the problem of protein-residue association extraction. This work paves the way towards effective extraction of protein functional residues from the literature.
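    The F-measure quoted above is the harmonic mean of precision and recall over extracted versus gold protein-residue pairs. A minimal sketch on toy sets (the pairs shown are invented):

    ```python
    def precision_recall_f1(predicted, gold):
        """Precision, recall and F1 for a set of extracted pairs against a gold standard."""
        tp = len(predicted & gold)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1

    gold = {("BRCA1", "Ser988"), ("TP53", "Arg175"), ("EGFR", "Thr790")}
    predicted = {("BRCA1", "Ser988"), ("EGFR", "Thr790"), ("EGFR", "Leu858")}
    print(precision_recall_f1(predicted, gold))  # approximately (0.667, 0.667, 0.667)
    ```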

  32. Object-oriented feature extraction approach for mapping supraglacial debris in Schirmacher Oasis using very high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.

    2016-05-01

    Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high-resolution optical remote sensing data consisting of 8-band calibrated, Gram Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris in the Antarctic region using 8-band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using a few existing traditional pixel-based classification techniques and color-texture-based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size and compactness, along with spectral information from the data, were used to develop the rule set. The quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach over the traditional pixel-based methods. Our results indicate that the OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensively improved method for semiautomatic feature extraction in the supraglacial environment and a new direction in cryospheric research.

  33. Information retrieval and terminology extraction in online resources for patients with diabetes.

    PubMed

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e., patients with diabetes, need access to various online resources (in a foreign and/or native language) when searching for information on self-education in basic diabetes knowledge, on self-care activities regarding the importance of dietetic food, medication and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, and between users' demands and existing online resources in the native and foreign language. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and is divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically based terminology extraction on English and Croatian texts; and iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods in the English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed using recall, precision and F-measure.

  34. Computer-Aided Diagnosis of Acute Lymphoblastic Leukaemia

    PubMed Central

    2018-01-01

    Leukaemia is a form of blood cancer which affects the white blood cells and damages the bone marrow. Usually, complete blood count (CBC) and bone marrow aspiration are used to diagnose acute lymphoblastic leukaemia. It can be a fatal disease if not diagnosed at an early stage. In practice, manual microscopic evaluation of stained sample slides is used for the diagnosis of leukaemia. However, manual diagnostic methods are time-consuming, less accurate, and prone to error due to various human factors such as stress and fatigue. Therefore, different automated systems have been proposed to overcome the shortcomings of manual diagnostic methods. In the recent past, several computer-aided leukaemia diagnosis methods have been presented. These automated systems are fast, reliable, and accurate compared with manual diagnostic methods. This paper presents a review of computer-aided diagnosis systems with regard to their methodologies, which include enhancement, segmentation, feature extraction, classification, and accuracy. PMID:29681996

  15. Evaluation of a manual DNA extraction protocol and an isothermal amplification assay for detecting HIV-1 DNA from dried blood spots for use in resource-limited settings.

    PubMed

    Jordan, Jeanne A; Ibe, Christine O; Moore, Miranda S; Host, Christel; Simon, Gary L

    2012-05-01

    In resource-limited settings (RLS), dried blood spots (DBS) are collected from infants and transported through provincial laboratories to a central facility where HIV-1 DNA PCR testing is performed using specialized equipment. Implementing a simpler approach that does not require such equipment or skilled personnel could allow the more numerous provincial laboratories to offer testing, improving turnaround time to identify and treat infected infants sooner. The objective was to assess the performance of a manual DNA extraction method and a helicase-dependent amplification (HDA) assay for detecting HIV-1 DNA from DBS. Sixty HIV-1-infected adults were enrolled, blood samples taken and DBS made. DBS extracts were assessed for DNA concentration and beta-globin amplification using PCR and melt-curve analysis. These same extracts were then tested for HIV-1 DNA using HDA and compared to results generated by PCR and pyrosequencing. Finally, HDA limit of detection (LOD) studies were performed using DBS extracts prepared with known numbers of 8E5 cells. The manual extraction protocol consistently yielded high concentrations of amplifiable DNA from DBS. LOD assessment demonstrated that HDA detected ∼470 copies/ml of HIV-1 DNA from DBS extracts in 4/4 replicates. No statistical difference was found using McNemar's test when comparing HDA to PCR for detecting HIV-1 DNA from DBS. Using just a magnet, heat block and pipettes, the manual extraction protocol and HDA assay detected HIV-1 DNA from DBS at levels that would be useful for early infant diagnosis. Next steps will include assessing HDA for recognition of non-B HIV-1 subtypes and comparison to the Roche HIV-1 DNA v1.5 PCR assay. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Evaluation of mericon E. coli O157 Screen Plus and mericon E. coli STEC O-Type Pathogen Detection Assays in Select Foods: Collaborative Study, First Action 2017.05.

    PubMed

    Bird, Patrick; Benzinger, M Joseph; Bastin, Benjamin; Crowley, Erin; Agin, James; Goins, David; Armstrong, Marcia

    2018-05-01

    QIAGEN mericon Escherichia coli O157 Screen Plus and mericon E. coli Shiga toxin-producing E. coli (STEC) O-Type Pathogen Detection Assays use real-time PCR technology for the rapid, accurate detection of E. coli O157 and the "big six" non-O157 STEC serogroups (O26, O45, O103, O111, O121, and O145) in select food types. Using a paired study design, the assays were compared with the U.S. Department of Agriculture, Food Safety Inspection Service Microbiology Laboratory Guidebook Chapter 5.09 reference method for the detection of E. coli O157:H7 in raw ground beef. Both mericon assays were evaluated using a manual and an automated DNA extraction method. Thirteen technicians from five laboratories located within the continental United States participated in the collaborative study. Three levels of contamination were evaluated. Statistical analysis was conducted according to the probability of detection (POD) statistical model. Results obtained for the low-inoculum level test portions produced a difference in cross-laboratory POD (dLPOD) values, with 95% confidence intervals, of 0.00 (-0.12, 0.12) for the mericon E. coli O157 Screen Plus with manual and automated extraction and for the mericon E. coli STEC O-Type with manual extraction, and -0.01 (-0.13, 0.10) for the mericon E. coli STEC O-Type with automated extraction. The dLPOD results indicate equivalence between the candidate methods and the reference method.

  17. EXTRACT: interactive extraction of environment metadata and term suggestion for metagenomic sample annotation.

    PubMed

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.

  18. Comparison of various RNA extraction methods, cDNA preparation and isolation of calmodulin gene from a highly melanized isolate of apple leaf blotch fungus Marssonina coronaria.

    PubMed

    Chauhan, Arjun; Sharma, J N; Modgil, Manju; Siddappa, Sundaresha

    2018-05-29

    Marssonina coronaria causes apple blotch disease resulting in severe premature defoliation, and is distributed in many leading apple-growing areas in the world. Effective, reliable and high-quality RNA extraction is an indispensable procedure in any molecular biology study. No method currently exists for RNA extraction from M. coronaria that produces a high quantity of melanin-free RNA. Therefore, we evaluated eight RNA extraction methods, including manual methods and commercial kits, for their ability to yield a sufficient quantity of high-quality, melanin-free RNA. The manual methods used here resulted in low-quality, black-colored RNA pellets showing the presence of melanin, despite all the modifications made to the original procedures. However, these methods, when coupled with a clean-up step, resulted in melanin-free RNA. On the other hand, all commercial kits used were able to yield high-quality, melanin-free RNA with variable yields. TRIzol™ Reagent + RNA Clean & Concentrator™-5 and the Ambion-PureLink® RNA Mini Kit were found to be the best methods, as the RNA extracted with them from 15-day-old fungal cultures grown on solid medium was melanin-free with good yield. RNA extracted by this improved methodology was used for RT-PCR, subsequent PCR amplification, and isolation of calmodulin gene sequences from M. coronaria and infected apple leaf pieces. These methods are more time-efficient than traditional methods and take only an hour to complete. To our knowledge, this is the first report of a method for isolating high-quality RNA for cDNA synthesis, as well as isolating the calmodulin gene sequence, from this fungus. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Assessing the Agreement Between Eo-Based Semi-Automated Landslide Maps with Fuzzy Manual Landslide Delineation

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Hölbling, D.; Friedl, B.

    2017-09-01

    Landslide mapping benefits from the ever-increasing availability of Earth Observation (EO) data resulting from programmes like the Copernicus Sentinel missions and from improved infrastructure for data access. However, while the dominant mapping method is still manual delineation, there is a need for improved automated landslide information extraction from EO data. Object-based image analysis (OBIA) provides the means for the fast and efficient extraction of landslide information. To demonstrate its quality, automated results are often compared to manually delineated landslide maps. Although there is awareness of the uncertainties inherent in manual delineations, there is a lack of understanding of how they affect the levels of agreement in a direct comparison of OBIA-derived and manually derived landslide maps. In order to provide an improved reference, we present a fuzzy approach for the manual delineation of landslides on optical satellite images, thereby making the inherent uncertainties of the delineation explicit. The fuzzy manual delineation and the OBIA classification are compared using accuracy metrics accepted in the remote sensing community. We have tested this approach on high-resolution (HR) satellite images of three large landslides in Austria and Italy. We were able to show that the deviation of the OBIA result from the manual delineation can mainly be attributed to the uncertainty inherent in the manual delineation process, a relevant issue for the design of validation processes for OBIA-derived landslide maps.

  20. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time-consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with the manual segmentation. To further evaluate the proposed method with respect to morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistical difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented a good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and, thus, leading to more reliable results.
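
    The abstract does not name the statistical test used to compare feature distributions; as one plausible choice, a two-sample Kolmogorov-Smirnov test on a morphometric feature could be run as in the hedged sketch below (the feature values are synthetic).

```python
# Illustrative sketch (the paper does not specify the test; a two-sample
# Kolmogorov-Smirnov test is one common choice): compare the distribution of a
# morphometric feature, e.g. fiber diameter, between manual and automated
# segmentations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual_diameters = rng.normal(loc=6.0, scale=1.2, size=300)      # hypothetical, in micrometers
automatic_diameters = rng.normal(loc=6.1, scale=1.25, size=300)  # hypothetical

statistic, p_value = stats.ks_2samp(manual_diameters, automatic_diameters)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
# A large p-value would be consistent with no detectable difference between
# the two feature distributions.
```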

  1. EXTRACT: Interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Here the comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.

  2. EXTRACT: Interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    DOE PAGES

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; ...

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Here the comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.

  3. Wide coverage biomedical event extraction using multiple partially overlapping corpora

    PubMed Central

    2013-01-01

    Background: Biomedical events are key to understanding physiological processes and disease, and wide coverage extraction is required for comprehensive automatic analysis of statements describing biomedical systems in the literature. In turn, the training and evaluation of extraction methods requires manually annotated corpora. However, as manual annotation is time-consuming and expensive, any single event-annotated corpus can only cover a limited number of semantic types. Although combined use of several such corpora could potentially allow an extraction system to achieve broad semantic coverage, there has been little research into learning from multiple corpora with partially overlapping semantic annotation scopes. Results: We propose a method for learning from multiple corpora with partial semantic annotation overlap, and implement this method to improve our existing event extraction system, EventMine. An evaluation using seven event-annotated corpora, including 65 event types in total, shows that learning from overlapping corpora can produce a single, corpus-independent, wide-coverage extraction system that outperforms systems trained on single corpora and exceeds previously reported results on two established event extraction tasks from the BioNLP Shared Task 2011. Conclusions: The proposed method allows the training of a wide-coverage, state-of-the-art event extraction system from multiple corpora with partial semantic annotation overlap. The resulting single model makes broad-coverage extraction straightforward in practice by removing the need to either select a subset of compatible corpora or semantic types, or to merge results from several models trained on different individual corpora. Multi-corpus learning also allows annotation efforts to focus on covering additional semantic types, rather than aiming for exhaustive coverage in any single annotation effort, or extending the coverage of semantic types annotated in existing corpora. PMID:23731785

  4. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four studies on extraction error frequency, one study comparing different reviewer extraction methods, and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insights into the influence of different extraction methods.

  5. Simultaneous extraction of centerlines, stenosis, and thrombus detection in renal CT angiography

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Durgan, Jacob; Hodgkiss, Thomas D.; Chandra, Shalabh

    2004-05-01

    Renal artery stenosis (RAS) is the major cause of renovascular hypertension, and CT angiography has shown tremendous promise as a noninvasive method for reliably detecting it. The purpose of this study was to validate semi-automated methods that assist in the extraction of renal branches and the characterization of the associated renal artery stenosis. Automatically computed diagnostic images such as straight MIP, curved MPR, cross-sections, and diameters from multi-slice CT are presented and evaluated for acceptance. We used vessel-tracking image processing methods to extract the aortic-renal vessel tree from CT axial slice images. Next, from the topology and anatomy of the aortic vessel tree, the stenosis, thrombus section, and branching of the renal arteries are extracted. The results are presented in curved MPR and continuously variable MIP images. In this study, 15 patients were scanned with contrast on an Mx8000 CT scanner (Philips Medical Systems) with 1.0 mm slice thickness, 0.5 mm slice spacing, and 120 kVp, and stacks of 512x512x150 volume sets were reconstructed. The automated image processing took less than 50 seconds to compute the centerline and borders of the aortic/renal vessel tree. The overall assessment of manually and automatically generated stenosis measurements yielded a weighted kappa statistic of 0.97 for the right renal arteries and 0.94 for the left renal branches. Agreement between the manually and semi-automatically contoured thrombus regions was 0.93. The manual time to process each case is approximately 25 to 30 minutes.
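
    Agreement between manual and automatic stenosis assessment is summarized with a weighted kappa statistic. A minimal sketch of that computation, assuming ordinal grades and linear weights (the weighting scheme is not stated in the abstract), is shown below; the grade values are hypothetical.

```python
# Illustrative sketch (not the study's code): weighted kappa agreement between
# manual and semi-automated stenosis grades, using scikit-learn.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal stenosis grades (0 = none ... 3 = severe) for 10 arteries.
manual_grades    = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
automated_grades = [0, 1, 2, 3, 2, 1, 1, 3, 2, 1]

kappa = cohen_kappa_score(manual_grades, automated_grades, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```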

  6. Isolation of Mitochondrial DNA from Single, Short Hairs without Roots Using Pressure Cycling Technology.

    PubMed

    Harper, Kathryn A; Meiklejohn, Kelly A; Merritt, Richard T; Walker, Jessica; Fisher, Constance L; Robertson, James M

    2018-02-01

    Hairs are commonly submitted as evidence to forensic laboratories, but standard nuclear DNA analysis is not always possible. Mitochondria (mt) provide another source of genetic material; however, manual isolation is laborious. In a proof-of-concept study, we assessed pressure cycling technology (PCT; an automated approach that subjects samples to varying cycles of high and low pressure) for extracting mtDNA from single, short hairs without roots. Using three microscopically similar donors, we determined the ideal PCT conditions and compared those yields to those obtained using the traditional manual micro-tissue grinder method. Higher yields were recovered from grinder extracts, but yields from PCT extracts exceeded the requirements for forensic analysis, with the DNA quality confirmed through sequencing. Automated extraction of mtDNA from hairs without roots using PCT could be useful for forensic laboratories processing numerous samples.

  7. Semi-Supervised Recurrent Neural Network for Adverse Drug Reaction mention extraction.

    PubMed

    Gupta, Shashank; Pawar, Sachin; Ramrakhiyani, Nitin; Palshikar, Girish Keshav; Varma, Vasudeva

    2018-06-13

    Social media is a useful platform to share health-related information due to its vast reach. This makes it a good candidate for public-health monitoring tasks, specifically for pharmacovigilance. We study the problem of extracting Adverse Drug Reaction (ADR) mentions from social media, particularly from Twitter. Medical information extraction from social media is challenging, mainly due to the short and highly informal nature of the text, as compared to more technical and formal medical reports. Current methods for ADR mention extraction rely on supervised learning, which suffers from the labeled-data scarcity problem. The state-of-the-art method uses deep neural networks, specifically a class of recurrent neural network (RNN), the long short-term memory (LSTM) network. Deep neural networks, due to their large number of free parameters, rely heavily on large annotated corpora for learning the end task. But in the real world it is hard to get large labeled data, mainly due to the heavy cost associated with manual annotation. To this end, we propose a novel semi-supervised learning based RNN model, which can also leverage the unlabeled data present in abundance on social media. Through experiments we demonstrate the effectiveness of our method, achieving state-of-the-art performance in ADR mention extraction. In this study, we tackle the problem of labeled data scarcity for Adverse Drug Reaction mention extraction from social media and propose a novel semi-supervised learning based method which can leverage the large unlabeled corpora available in abundance on the web. Through an empirical study, we demonstrate that our proposed method outperforms a fully supervised baseline that relies on a large manually annotated corpus for good performance.
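
    The paper's semi-supervised model is built on an LSTM sequence labeller. The sketch below shows only a minimal fully supervised BiLSTM token tagger of that general kind, not the authors' architecture; the vocabulary size, dimensions, and the O/B-ADR/I-ADR tag set are assumptions.

```python
# Minimal sketch of a supervised BiLSTM token tagger of the kind the
# semi-supervised ADR method builds on (not the authors' implementation;
# vocabulary size, dimensions and tag set are hypothetical).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hidden_dim=128, num_tags=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)  # e.g. O / B-ADR / I-ADR

    def forward(self, token_ids):                 # (batch, seq_len)
        emb = self.embedding(token_ids)           # (batch, seq_len, emb_dim)
        hidden, _ = self.lstm(emb)                # (batch, seq_len, 2*hidden_dim)
        return self.classifier(hidden)            # per-token tag scores

model = BiLSTMTagger()
tokens = torch.randint(1, 5000, (2, 12))          # two dummy tweets of 12 tokens each
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 3), torch.zeros(24, dtype=torch.long))
print(logits.shape, loss.item())
```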

  8. Comparison of methods of DNA extraction for real-time PCR in a model of pleural tuberculosis.

    PubMed

    Santos, Ana; Cremades, Rosa; Rodríguez, Juan Carlos; García-Pachón, Eduardo; Ruiz, Montserrat; Royo, Gloria

    2010-01-01

    Molecular methods have been reported to have different sensitivities in the diagnosis of pleural tuberculosis and this may in part be caused by the use of different methods of DNA extraction. Our study compares nine DNA extraction systems in an experimental model of pleural tuberculosis. An inoculum of Mycobacterium tuberculosis was added to 23 pleural liquid samples with different characteristics. DNA was subsequently extracted using nine different methods (seven manual and two automatic) for analysis with real-time PCR. Only two methods were able to detect the presence of M. tuberculosis DNA in all the samples: extraction using columns (Qiagen) and automated extraction with the TNAI system (Roche). The automatic method is more expensive, but requires less time. Almost all the false negatives were because of the difficulty involved in extracting M. tuberculosis DNA, as in general, all the methods studied are capable of eliminating inhibitory substances that block the amplification reaction. The method of M. tuberculosis DNA extraction used affects the results of the diagnosis of pleural tuberculosis by molecular methods. DNA extraction systems that have been shown to be effective in pleural liquid should be used.

  9. An Approach to Evaluate Blurriness in Retinal Images with Vitreous Opacity for Cataract Diagnosis

    PubMed Central

    Xu, Liang

    2017-01-01

    Cataract is one of the leading causes of blindness in the world's population. A method to evaluate blurriness for cataract diagnosis in retinal images with vitreous opacity is proposed in this paper. Three types of features are extracted: the number of pixels of visible structures, the mean contrast between vessels and background, and the local standard deviation. To avoid wrongly detecting vitreous opacity as retinal structures, a morphological method is proposed to detect and remove such lesions from the retinal visible-structure segmentation. Based on the extracted features, a decision tree is trained to classify retinal images into five grades of blurriness. The proposed approach was tested using 1355 clinical retinal images, and the accuracies of two-class classification and five-grade grading compared with manual grading are 92.8% and 81.1%, respectively. The kappa value between automatic grading and manual grading is 0.74 for five-grade grading, in which both variance and P value are less than 0.001. Experimental results show that the difference between automatic grading and manual grading is always within one grade, a considerable improvement over other available methods. The proposed grading method provides a universal measure of cataract severity and can facilitate the decision for cataract surgery. PMID:29065620
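
    As an illustration of the final grading step, a decision tree can be trained on the three described feature types; the sketch below uses synthetic feature values and grade labels and is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): train a decision tree on the
# three feature types described (visible-structure pixel count, vessel/background
# contrast, local standard deviation) to predict a blurriness grade 1-5.
# The feature values and labels below are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.random((200, 3))            # columns: pixel count, mean contrast, local std (normalized)
y = rng.integers(1, 6, size=200)    # blurriness grades 1..5

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(clf.predict(X[:5]))           # predicted grades for the first five images
```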

  10. Colorimetric-Solid Phase Extraction Technology for Water Quality Monitoring: Evaluation of C-SPE and Debubbling Methods in Microgravity

    NASA Technical Reports Server (NTRS)

    Hazen-Bosveld, April; Lipert, Robert J.; Nordling, John; Shih, Chien-Ju; Siperko, Lorraine; Porter, Marc D.; Gazda, Daniel B.; Rutz, Jeff A.; Straub, John E.; Schultz, John R.

    2007-01-01

    Colorimetric-solid phase extraction (C-SPE) is being developed as a method for in-flight monitoring of spacecraft water quality. C-SPE is based on measuring the change in the diffuse reflectance spectrum of indicator disks following exposure to a water sample. Previous microgravity testing has shown that air bubbles suspended in water samples can cause uncertainty in the volume of liquid passed through the disks, leading to errors in the determination of water quality parameter concentrations. We report here the results of a recent series of C-9 microgravity experiments designed to evaluate manual manipulation as a means to collect bubble-free water samples of specified volumes from water sample bags containing up to 47% air. The effectiveness of manual manipulation was verified by comparing the results from C-SPE analyses of silver(I) and iodine performed in-flight using samples collected and debubbled in microgravity to those performed on-ground using bubble-free samples. The ground and flight results showed excellent agreement, demonstrating that manual manipulation is an effective means for collecting bubble-free water samples in microgravity.

  11. Comparison of commercial systems for extraction of nucleic acids from DNA/RNA respiratory pathogens.

    PubMed

    Yang, Genyan; Erdman, Dean E; Kodani, Maja; Kools, John; Bowen, Michael D; Fields, Barry S

    2011-01-01

    This study compared six automated nucleic acid extraction systems and one manual kit for their ability to recover nucleic acids from human nasal wash specimens spiked with five respiratory pathogens, representing Gram-positive bacteria (Streptococcus pyogenes), Gram-negative bacteria (Legionella pneumophila), DNA viruses (adenovirus), segmented RNA viruses (human influenza virus A), and non-segmented RNA viruses (respiratory syncytial virus). The robots and kit evaluated represent major commercially available methods capable of simultaneous extraction of DNA and RNA from respiratory specimens, and included platforms based on magnetic-bead technology (KingFisher mL, Biorobot EZ1, easyMAG, KingFisher Flex, and MagNA Pure Compact) or glass-fiber filter technology (Biorobot MDX and the manual kit Allprep). All methods yielded extracts free of cross-contamination and RT-PCR inhibition. All automated systems recovered L. pneumophila and adenovirus DNA equivalently. However, the MagNA Pure protocol demonstrated more than 4-fold higher DNA recovery from S. pyogenes than the other methods. The KingFisher mL and easyMAG protocols provided 1- to 3-log wider linearity and extracted 3- to 4-fold more RNA from human influenza virus and respiratory syncytial virus. These findings suggest that the systems differed in nucleic acid recovery, reproducibility, and linearity in a pathogen-specific manner. Published by Elsevier B.V.

  12. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    NASA Astrophysics Data System (ADS)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of staining patterns in Human Epithelial-2 (HEp-2) cell images is widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test in the indirect immunofluorescence (IIF) protocol. Because manual testing is time-consuming, subjective and labor-intensive, image-based computer-aided diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Moreover, the available benchmark datasets are small, which is not well suited to deep learning methods; this directly limits the accuracy of cell classification, even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior performance in terms of accuracy compared with existing methods.
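
    A minimal sketch of a small VGG-style network for cell patches is given below as an illustration of the general idea; the input size, channel widths, and six-class output are assumptions, not the paper's improved VGGNet.

```python
# Minimal sketch of a small VGG-style network for HEp-2 cell patches
# (an illustration of the idea, not the paper's improved VGGNet; the input
# size, channel counts and six staining-pattern classes are assumptions).
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    vgg_block(1, 32),              # grayscale IIF patch -> 32 feature maps
    vgg_block(32, 64),
    vgg_block(64, 128),
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 256), nn.ReLU(inplace=True), nn.Dropout(0.5),
    nn.Linear(256, 6),             # six staining-pattern classes (assumed)
)

x = torch.randn(4, 1, 64, 64)      # batch of four 64x64 cell patches
print(model(x).shape)              # torch.Size([4, 6])
```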

  13. The current role of on-line extraction approaches in clinical and forensic toxicology.

    PubMed

    Mueller, Daniel M

    2014-08-01

    In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors, and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.

  14. Self-Supervised Chinese Ontology Learning from Online Encyclopedias

    PubMed Central

    Shao, Zhiqing; Ruan, Tong

    2014-01-01

    Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we show statistically and experimentally that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy) and train self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other well-known ontologies and knowledge bases; the experimental results also indicate that the self-supervised models clearly enrich SSCO. PMID:24715819

  15. Self-supervised Chinese ontology learning from online encyclopedias.

    PubMed

    Hu, Fanghuai; Shao, Zhiqing; Ruan, Tong

    2014-01-01

    Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we show statistically and experimentally that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy) and train self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other well-known ontologies and knowledge bases; the experimental results also indicate that the self-supervised models clearly enrich SSCO.

  16. Quantitative assessment of tumour extraction from dermoscopy images and evaluation of computer-based extraction methods for an automatic melanoma diagnostic system.

    PubMed

    Iyatomi, Hitoshi; Oka, Hiroshi; Saito, Masataka; Miyake, Ayako; Kimoto, Masayuki; Yamagami, Jun; Kobayashi, Seiichiro; Tanikawa, Akiko; Hagiwara, Masafumi; Ogawa, Koichi; Argenziano, Giuseppe; Soyer, H Peter; Tanaka, Masaru

    2006-04-01

    The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based methods from dermoscopy images for refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced the region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of conventional and new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating curve (ROC) of each classifier, and the area under each ROC was evaluated. The standard deviations of the tumour area extracted by five dermatologists and 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm showed superior performance (precision, 94.1%; recall, 95.3%) to the conventional thresholding method (precision, 99.5%; recall, 87.6%). These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and, in particular, the border part of the tumour was adequately extracted. With this refinement, the area under the ROC increased from 0.795 to 0.875 and the diagnostic accuracy showed an increase of approximately 20% in specificity when the sensitivity was 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.

  17. Object extraction method for image synthesis

    NASA Astrophysics Data System (ADS)

    Inoue, Seiki

    1991-11-01

    The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves some intricate and tedious manual processes. A new method proposed in this paper can reduce the required level of operator skill and simplify object extraction. The object is automatically extracted by just a simple drawing of a thick boundary line. The basic principle involves thinning the thick-boundary-line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned boundary line, its ease of application to moving images, and the lack of any need for adjustment.

  18. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    PubMed

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT dataset of 9 severely resorbed extraction sockets was analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. To test accuracy, they were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). The automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and by the automated method using ImageJ, respectively. The currently proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity and increased user-friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.

  19. Comprehensive automation of the solid phase extraction gas chromatographic mass spectrometric analysis (SPE-GC/MS) of opioids, cocaine, and metabolites from serum and other matrices.

    PubMed

    Lerch, Oliver; Temme, Oliver; Daldrup, Thomas

    2014-07-01

    The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps like protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating method parameters on the automated system; only marginal optimization of parameters was necessary. The automation, relying on an x-y-z robot after manual protein precipitation, includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices like urine, different tissues, and heart blood for cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods, proving that the analytical results are equivalent even near the limits of quantification (low ng/ml range). To the best of our knowledge, this application is the first one reported in the literature employing this sample preparation system.

  20. Forest Road Identification and Extraction Through Advanced Log Matching Techniques

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Hu, B.; Quist, L.

    2017-10-01

    A novel algorithm for forest road identification and extraction was developed. The algorithm utilized a Laplacian of Gaussian (LoG) filter and slope calculation on high-resolution multispectral imagery and LiDAR data, respectively, to extract both primary and secondary road segments in the forest area. The proposed method used road shape features to extract the road segments, which were further processed as objects with orientation preserved. The road network was generated after post-processing with tensor voting. The proposed method was tested on the Hearst Forest, located in central Ontario, Canada. Based on visual examination against manually digitized roads, the majority of roads in the test area were identified and extracted by the process.
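
    As an illustration of the LoG component (not the authors' pipeline), a Laplacian-of-Gaussian response can be computed on a single image band and thresholded to obtain candidate linear-feature pixels; the band and parameters below are placeholders.

```python
# Illustrative sketch (not the authors' pipeline): a Laplacian-of-Gaussian
# response on a single image band, as one ingredient of linear-feature
# (road) enhancement. The input array and sigma are placeholders.
import numpy as np
from scipy import ndimage

band = np.random.default_rng(1).random((256, 256))      # stand-in for one multispectral band
log_response = ndimage.gaussian_laplace(band, sigma=2.0)

# Strong responses indicate narrow bright or dark structures; thresholding
# them is a simple first step toward candidate road segments.
candidates = np.abs(log_response) > np.percentile(np.abs(log_response), 99)
print(candidates.sum(), "candidate pixels")
```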

  1. Semi-automation of Doppler Spectrum Image Analysis for Grading Aortic Valve Stenosis Severity.

    PubMed

    Niakšu, O; Balčiunaitė, G; Kizlaitis, R J; Treigys, P

    2016-01-01

    Doppler echocardiography analysis has become a gold standard in the modern diagnosis of heart diseases. In this paper, we propose a set of techniques for semi-automated parameter extraction for aortic valve stenosis severity grading. The main objective of the study is to create echocardiography image processing techniques that minimize the manual image processing work of clinicians and reduce human error rates. Aortic valve and left ventricular outflow tract spectrogram images have been processed and analyzed. A novel method was developed to trace systoles and to extract diagnostically relevant features. The results of the introduced method have been compared to the findings of the participating cardiologists. The experimental results showed that the accuracy of the proposed method is comparable to manual measurement performed by medical professionals. Linear regression analysis of the calculated parameters against the measurements manually obtained by the cardiologists showed strongly correlated values: R2 was 0.99 for both peak systolic velocity and mean pressure gradient, with mean differences of 0.02 m/s and 4.09 mmHg, respectively, and R2 was 0.89 for aortic valve area, with a difference between the two methods' means of 0.19 mm. The introduced Doppler echocardiography image processing method can be used as computer-aided assistance in aortic valve stenosis diagnostics. In our future work, we intend to improve the precision of left ventricular outflow tract spectrogram measurements and apply data mining methods to propose a clinical decision support system for diagnosing aortic valve stenosis.

  2. Document Form and Character Recognition using SVM

    NASA Astrophysics Data System (ADS)

    Park, Sang-Sung; Shin, Young-Geun; Jung, Won-Kyo; Ahn, Dong-Kyu; Jang, Dong-Sik

    2009-08-01

    With the development of computers and information communication, EDI (Electronic Data Interchange) has been advancing. OCR (Optical Character Recognition) is a pattern recognition technology that supports EDI, and it has helped turn much formerly manual work into automated processing. However, building a more complete document database still requires considerable manual work to exclude unnecessary recognition results. To resolve this problem, we propose a document-form-based character recognition method in this study. The proposed method is divided into a document form recognition part and a character recognition part. In particular, for character recognition, characters are converted into binarized form and more accurate feature values are extracted using an SVM algorithm.
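
    As a generic illustration of SVM-based character classification (not the paper's system), the sketch below trains an RBF-kernel SVM on the scikit-learn digits dataset as a stand-in for characters segmented from binarized documents.

```python
# Illustrative sketch (not the paper's system): an SVM character classifier
# trained on the small scikit-learn digits dataset as a stand-in for
# characters extracted from binarized document images.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

clf = svm.SVC(kernel="rbf", gamma=0.001, C=10.0).fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```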

  3. A multiple distributed representation method based on neural network for biomedical event extraction.

    PubMed

    Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan

    2017-12-20

    Biomedical event extraction is one of the frontier domains in biomedical research. The two main subtasks of biomedical event extraction are trigger identification and argument detection, both of which can be considered classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVMs) with massive manually designed one-hot features, which require enormous work but lack semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines dependency-based word embeddings of the context with task-based features represented in a distributed way as the input for training deep learning models. Finally, we use a softmax classifier to label the example candidates. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall, compared to the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the semantic-gap and dimensionality problems of traditional one-hot representation methods. The promising results demonstrate that our proposed method is effective for biomedical event extraction.

  4. [Comparison of manual and automated (MagNA Pure) nucleic acid isolation methods in molecular diagnosis of HIV infections].

    PubMed

    Alp, Alpaslan; Us, Dürdal; Hasçelik, Gülşen

    2004-01-01

    Rapid quantitative molecular methods are very important for the diagnosis of human immunodeficiency virus (HIV) infections, assessment of prognosis and follow-up. The purpose of this study was to compare and evaluate the performance of a conventional manual extraction method and the automated MagNA Pure system for the nucleic acid isolation step, which is the first and most important step in the molecular diagnosis of HIV infections. Plasma samples from 35 patients, in which anti-HIV antibodies were found positive by microparticle enzyme immunoassay and confirmed by immunoblotting, were included in the study. The nucleic acids obtained simultaneously by the manual isolation kit (Cobas Amplicor HIV-1 Monitor Test, version 1.5, Roche Diagnostics) and the automated system (MagNA Pure LC Total Nucleic Acid Isolation Kit, Roche Diagnostics) were amplified and detected on the Cobas Amplicor (Roche Diagnostics) instrument. Twenty-three of 35 samples (65.7%) were found positive and 9 (25.7%) negative by both methods. The agreement between the methods was 91.4% for qualitative results. Viral RNA copy numbers detected by the manual and MagNA Pure isolation methods ranged between 76 and 7,590,000 (mean: 487,143) and between 113 and 20,300,000 (mean: 2,174,097) copies/ml, respectively. When both the overall and the individual results were evaluated, the numbers of RNA copies obtained with the automated system were higher than with the manual method (p<0.05). Three samples with low nucleic acid numbers (113, 773 and 857 copies/ml, respectively, with MagNA Pure) yielded negative results with the manual method. In conclusion, the automated MagNA Pure system was found to be a reliable, rapid and practical method for the isolation of HIV-RNA.

  5. Microtiter miniature shaken bioreactor system as a scale-down model for process development of production of therapeutic alpha-interferon2b by recombinant Escherichia coli.

    PubMed

    Tan, Joo Shun; Abbasiliasi, Sahar; Kadkhodaei, Saeid; Tam, Yew Joon; Tang, Teck-Kim; Lee, Yee-Ying; Ariff, Arbakariya B

    2018-01-04

    Demand for high-throughput bioprocessing has dramatically increased, especially in the biopharmaceutical industry, because the technologies are of vital importance to process optimization and media development. This can be efficiently supported by using a microtiter plate (MTP) cultivation setup embedded into an automated liquid-handling system. The objective of this study was to establish an automated microscale method for upstream and downstream bioprocessing of α-IFN2b production by recombinant Escherichia coli. The extraction performance of α-IFN2b by osmotic shock using two different systems, an automated microscale platform and manual extraction in MTP, was compared. The amount of α-IFN2b extracted using the automated microscale platform (49.2 μg/L) was comparable to the manual osmotic shock method (48.8 μg/L), but the standard deviation was two times lower than with the manual method. Fermentation parameter studies in MTP involving inoculum size, agitation speed, working volume and induction profiling revealed that the conditions for the highest production of α-IFN2b (85.5 μg/L) were an inoculum size of 8%, a working volume of 40% and an agitation speed of 1000 rpm, with induction at 4 h after inoculation. Although the findings at MTP scale did not show perfectly scalable results compared to shake flask culture, microscale technique development would serve as a convenient and low-cost solution for process optimization of recombinant protein production.

  6. Autonomous characterization of plastic-bonded explosives

    NASA Astrophysics Data System (ADS)

    Linder, Kim Dalton; DeRego, Paul; Gomez, Antonio; Baumgart, Chris

    2006-08-01

    Plastic-Bonded Explosives (PBXs) are a newer generation of explosive compositions developed at Los Alamos National Laboratory (LANL). Understanding the micromechanical behavior of these materials is critical. The size of the crystal particles and the porosity within the PBX influence its shock sensitivity. Current methods to characterize the prominent structural characteristics include manual examination by scientists and attempts to use commercially available image processing packages. Both methods are time-consuming and tedious. LANL personnel, recognizing this as a manually intensive process, have worked with the Kansas City Plant / Kirtland Operations to develop a system which utilizes image processing and pattern recognition techniques to characterize PBX material. System hardware consists of a CCD camera, zoom lens, two-dimensional motorized stage, and coaxial, cross-polarized light. System integration of this hardware with the custom software is at the core of the machine vision system. Fundamental processing steps involve capturing images of the PBX specimen and extracting void, crystal, and binder regions. For crystal extraction, a quadtree decomposition segmentation technique is employed. Benefits of this system include: (1) reduction of the overall characterization time; (2) a process which is quantifiable and repeatable; (3) utilization of personnel for intelligent review rather than manual processing; and (4) significantly enhanced characterization accuracy.
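
    Quadtree decomposition, named as the crystal-segmentation technique, recursively splits an image block until it is homogeneous. The sketch below is a generic illustration under an assumed intensity-range homogeneity criterion, not the code of the system described above.

```python
# Minimal sketch of quadtree decomposition (the general technique named in the
# abstract, not the described system's code): recursively split an image block
# until its intensity range falls below a homogeneity threshold.
import numpy as np

def quadtree(image, x=0, y=0, size=None, threshold=0.1, min_size=4, leaves=None):
    """Return a list of homogeneous leaf blocks as (x, y, size) tuples."""
    if leaves is None:
        leaves = []
    if size is None:
        size = image.shape[0]                  # assumes a square, power-of-two image
    block = image[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= threshold:
        leaves.append((x, y, size))
    else:
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(image, x + dx, y + dy, half, threshold, min_size, leaves)
    return leaves

img = np.zeros((64, 64))
img[16:40, 16:40] = 1.0                        # a bright "crystal" on a dark binder
print(len(quadtree(img)), "leaf blocks")
```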

  7. A multi-atlas based method for automated anatomical Macaca fascicularis brain MRI segmentation and PET kinetic extraction.

    PubMed

    Ballanger, Bénédicte; Tremblay, Léon; Sgambato-Faure, Véronique; Beaudoin-Gobert, Maude; Lavenne, Franck; Le Bars, Didier; Costes, Nicolas

    2013-08-15

    MRI templates and digital atlases are needed for automated and reproducible quantitative analysis of non-human primate PET studies. Segmenting brain images via multiple atlases outperforms single-atlas labelling in humans. We present a set of atlases manually delineated on brain MRI scans of the monkey Macaca fascicularis. We use this multi-atlas dataset to evaluate two automated methods in terms of accuracy, robustness and reliability in segmenting brain structures on MRI and extracting regional PET measures. Twelve individual Macaca fascicularis high-resolution 3DT1 MR images were acquired. Four individual atlases were created by manually drawing 42 anatomical structures, including cortical and sub-cortical structures, white matter regions, and ventricles. To create the MRI template, we first chose one MRI to define a reference space, and then performed a two-step iterative procedure: affine registration of individual MRIs to the reference MRI, followed by averaging of the twelve resampled MRIs. Automated segmentation in native space was obtained in two ways: 1) Maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space, and transformation back into the individual native space (MAXPROB). 2) One to four individual atlases were registered directly to the individual native space, and combined by decision fusion (PROPAG). Accuracy was evaluated by computing the Dice similarity index and the volume difference. The robustness and reproducibility of PET regional measurements obtained via automated segmentation was evaluated on four co-registered MRI/PET datasets, which included test-retest data. Dice indices were always over 0.7 and reached maximal values of 0.9 for PROPAG with all four individual atlases. There was no significant mean volume bias. The standard deviation of the bias decreased significantly when increasing the number of individual atlases. MAXPROB performed better when increasing the number of atlases used. When all four atlases were used for the MAXPROB creation, the accuracy of morphometric segmentation approached that of the PROPAG method. PET measures extracted either via automatic methods or via the manually defined regions were strongly correlated, with no significant regional differences between methods. Intra-class correlation coefficients for test-retest data were over 0.87. Compared to single atlas extractions, multi-atlas methods improve the accuracy of region definition. They also perform comparably to manually defined regions for PET quantification. Multiple atlases of Macaca fascicularis brains are now available and allow reproducible and simplified analyses. Copyright © 2013 Elsevier Inc. All rights reserved.
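
    Two ingredients of the evaluation and of the MAXPROB approach, the Dice similarity index and majority-vote decision fusion of propagated atlas labels, can be sketched as follows; the masks are synthetic and this is not the study's code.

```python
# Illustrative sketch (not the study's code): Dice similarity between an
# automatically propagated label and a manual label, plus simple majority-vote
# fusion of several atlas labels for one structure.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
manual = rng.random((32, 32, 32)) > 0.7                # hypothetical binary masks
auto = manual.copy(); auto[0] = ~auto[0]               # perturb one slice
print(f"Dice = {dice(manual, auto):.3f}")

# Decision fusion of N propagated atlas labels: keep voxels labelled by a majority.
atlas_labels = np.stack([manual, auto, manual])        # stand-ins for 3 registered atlases
fused = atlas_labels.sum(axis=0) > atlas_labels.shape[0] / 2
print(f"Dice(fused, manual) = {dice(fused, manual):.3f}")
```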

  8. Comparison of manual and homogenizer methods for preparation of tick-derived stabilates of Theileria parva: equivalence testing using an in vitro titration model.

    PubMed

    Mbao, V; Speybroeck, N; Berkvens, D; Dolan, T; Dorny, P; Madder, M; Mulumba, M; Duchateau, L; Brandt, J; Marcotty, T

    2005-07-01

    Theileria parva sporozoite stabilates are used in the infection and treatment method of immunization, a widely accepted control option for East Coast fever in cattle. T. parva sporozoites are extracted from infected adult Rhipicephalus appendiculatus ticks either manually, using a pestle and a mortar, or by use of an electric homogenizer. A comparison of the two methods as a function of stabilate infectivity has never been documented. This study was designed to provide a quantitative comparison of stabilates produced by the two methods. The approach was to prepare batches of stabilate by both methods and then subject them to in vitro titration. Equivalence testing was then performed on the average effective doses (ED). The ratio of infective sporozoites yielded by the two methods was found to be 1.14 in favour of the manually ground stabilate with an upper limit of the 95% confidence interval equal to 1.3. We conclude that the choice of method rests more on costs, available infrastructure and standardization than on which method produces a richer sporozoite stabilate.

  9. Extraction of Blebs in Human Embryonic Stem Cell Videos.

    PubMed

    Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao

    2016-01-01

    Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides a fast and accurate automated approach for bleb sequence extraction.
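
    The exact form of the score function is not given in the abstract; purely as a hypothetical illustration, a frame-to-frame consistency score could penalize changes in bleb area and orientation as in the sketch below.

```python
# Hypothetical sketch (the abstract does not give the exact formula): a
# frame-to-frame consistency score that penalizes large changes in bleb area
# and orientation, which could then guide adaptive segmentation parameters.
import math

def bleb_score(area_prev, area_curr, angle_prev_deg, angle_curr_deg,
               w_area=0.5, w_angle=0.5):
    """Score in (0, 1]; 1 means identical area and orientation between frames."""
    area_change = abs(area_curr - area_prev) / max(area_prev, area_curr, 1e-9)
    angle_change = abs(angle_curr_deg - angle_prev_deg) % 180 / 180.0
    return math.exp(-(w_area * area_change + w_angle * angle_change))

print(bleb_score(area_prev=420, area_curr=450, angle_prev_deg=30, angle_curr_deg=35))
```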

  10. Extracting important information from Chinese Operation Notes with natural language processing methods.

    PubMed

    Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei

    2014-04-01

    Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been studied extensively for electronic medical records (EMR), few studies have explored NLP for extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of methods for extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded 69.6% precision, 58.3% recall and a 63.5% F-score. Copyright © 2014 Elsevier Inc. All rights reserved.
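
    The reported F-score is consistent with the usual harmonic-mean definition of F1; a quick check in Python (assuming the standard formula, which the abstract does not state explicitly):

      precision, recall = 0.696, 0.583
      f1 = 2 * precision * recall / (precision + recall)
      print(round(f1, 3))   # 0.635, matching the reported 63.5% F-score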

  11. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved Ransac Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.

  12. [Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].

    PubMed

    Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai

    To improve the efficiency and accuracy of extracting and analyzing dosimetry data in radiotherapy plans for a batch of patients. Using the interface functions provided by the Matlab platform, a program was written to extract the dosimetry data exported from the treatment planning system in DICOM RT format and to export the dose-volume data to an Excel file in an SPSS-compatible format. This method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were compatible with SPSS in format; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106% and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. Automatic extraction of dosimetry data in DICOM RT format for batches of patients, SPSS-compatible data export and quick analysis were achieved in this work. The efficiency of clinical research based on dosimetry data analysis of large numbers of patients will be improved with this method.
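
    The dose-volume export step can be sketched as below, assuming the per-voxel doses for one structure have already been pulled out of the DICOM RT dose grid (for example with pydicom); the bin width, file name and toy dose array are illustrative rather than the authors' Matlab implementation.

      import csv
      import numpy as np

      def cumulative_dvh(voxel_doses_gy, bin_width_gy=0.1):
          """Cumulative dose-volume histogram: percent volume receiving >= dose."""
          doses = np.asarray(voxel_doses_gy, dtype=float)
          bins = np.arange(0.0, doses.max() + bin_width_gy, bin_width_gy)
          volume_pct = [(doses >= d).mean() * 100.0 for d in bins]
          return bins, volume_pct

      # toy dose array standing in for the voxels of one structure
      doses = np.random.default_rng(0).normal(50.0, 2.0, size=10000)
      bins, vol = cumulative_dvh(doses)

      with open("dvh_export.csv", "w", newline="") as f:   # plain CSV is SPSS-readable
          writer = csv.writer(f)
          writer.writerow(["dose_gy", "volume_pct"])
          writer.writerows(zip(np.round(bins, 2), np.round(vol, 3)))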

  13. A new artefacts resistant method for automatic lineament extraction using Multi-Hillshade Hierarchic Clustering (MHHC)

    NASA Astrophysics Data System (ADS)

    Šilhavý, Jakub; Minár, Jozef; Mentlík, Pavel; Sládek, Ján

    2016-07-01

    This paper presents a new method of automatic lineament extraction which includes the removal of the 'artefacts effect' which is associated with the process of raster based analysis. The core of the proposed Multi-Hillshade Hierarchic Clustering (MHHC) method incorporates a set of variously illuminated and rotated hillshades in combination with hierarchic clustering of derived 'protolineaments'. The algorithm also includes classification into positive and negative lineaments. MHHC was tested in two different territories in Bohemian Forest and Central Western Carpathians. The original vector-based algorithm was developed for comparison of the individual lineaments proximity. Its use confirms the compatibility of manual and automatic extraction and their similar relationships to structural data in the study areas.
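
    The multi-hillshade idea can be illustrated with a minimal numpy sketch that renders the same DEM under several illumination azimuths; the hillshade formula below is the common slope/aspect formulation, the DEM, cell size and azimuth list are placeholders, and the protolineament detection and hierarchic clustering are not shown.

      import numpy as np

      def hillshade(dem, azimuth_deg, altitude_deg=45.0, cellsize=1.0):
          """Shaded relief for one illumination direction (values in 0..1)."""
          az = np.radians(360.0 - azimuth_deg + 90.0)
          alt = np.radians(altitude_deg)
          dzdy, dzdx = np.gradient(dem, cellsize)
          slope = np.arctan(np.hypot(dzdx, dzdy))
          aspect = np.arctan2(-dzdx, dzdy)
          shaded = (np.sin(alt) * np.cos(slope)
                    + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
          return np.clip(shaded, 0.0, 1.0)

      dem = np.random.default_rng(1).random((200, 200)).cumsum(axis=0)   # toy surface
      shades = [hillshade(dem, a) for a in (0, 45, 90, 135, 180, 225, 270, 315)]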

  14. Comparison of five methods for extraction of Legionella pneumophila from respiratory specimens.

    PubMed

    Wilson, Deborah; Yen-Lieberman, Belinda; Reischl, Udo; Warshawsky, Ilka; Procop, Gary W

    2004-12-01

    The efficiencies of five commercially available nucleic acid extraction methods were evaluated for the recovery of a standardized inoculum of Legionella pneumophila in respiratory specimens (sputum and bronchoalveolar lavage [BAL] specimens). The concentrations of Legionella DNA recovered from sputa with the automated MagNA Pure (526,200 CFU/ml) and NucliSens (171,800 CFU/ml) extractors were greater than those recovered with the manual methods (i.e., Roche High Pure kit [133,900 CFU/ml], QIAamp DNA Mini kit [46,380 CFU/ml], and ViralXpress kit [13,635 CFU/ml]). The rank order was the same for extracts from BAL specimens, except that for this specimen type the QIAamp DNA Mini kit recovered more than the Roche High Pure kit.

  15. Characterization of shallow unconsolidated aquifers in West Africa using different hydrogeological data sources as a contribution to the promotion of manual drilling and low cost techniques for groundwater exploration

    NASA Astrophysics Data System (ADS)

    Fussi, Fabio; Fumagalli, Letizia; Bonomi, Tullia; Kane, Cheikh H.; Fava, Francesco; Di Mauro, Biagio; Hamidou, Barry; Niang, Magatte; Wade, Souleye; Colombo, Roberto

    2016-04-01

    Manual drilling refers to several drilling methods that rely on human energy to construct a borehole and complete a water supply (Danert, 2015). It can be an effective strategy to increase access to groundwater in low-income countries, but manual drilling can be applied only where the shallow geological layers are relatively soft and the water table is not too deep. It is therefore important to identify zones where shallow hydrogeological conditions are suitable by investigating the characteristics of shallow porous aquifers. Existing hydrogeological studies generally focus on the characterization of deep fractured aquifers, which are more productive and able to ensure water supply for large settlements; information concerning shallow porous aquifers is limited. This research has been carried out in two study areas in West Africa (North-Western Senegal and Eastern Guinea). The aim of the research is the characterization of shallow aquifers using different methods and the identification of hydrogeological conditions suitable for manual drilling implementation. Three different methods to estimate the geometry and hydraulic properties of shallow unconsolidated aquifers have been used. The first method is based on the analysis of stratigraphic data obtained from borehole logs of the national water point database in both countries. The following steps were applied to the original information using the software TANGAFRIC, specifically designed for this study: a) identification of the most frequent terms used for hydrogeological description in the Senegal and Guinea databases; b) definition of standard categories and manual codification of the data; c) automatic extraction of the average distribution of textural classes at different depth intervals in the unconsolidated aquifer; and d) estimation of hydraulic parameters using conversion tables between texture and hydraulic conductivity available in the literature. The second method is based on the interpretation of pumping and recovery tests in large-diameter wells. K values obtained from these tests provide direct information on the hydraulic parameters of shallow porous aquifers (whereas pumping test data obtained from deep mechanized boreholes exploiting fractured aquifers cannot be considered representative of the shallow aquifers targeted by manual drilling). The third method is based on the interpretation of stratigraphic logs and simplified pumping tests from manually drilled wells carried out since 2012 in Guinea. In this country a standard and systematic procedure to collect hydrogeological data from these wells (and therefore on the properties of the shallow aquifer) was put in place in 2011; it is considered one of the best examples worldwide of technical data collection and systematization from manual drilling activities, but its development was stopped by the outbreak of Ebola in the country. The integration of these three methods allows the geometry and hydraulic behavior of shallow unconsolidated aquifers to be estimated, identifying the areas where manual drilling is feasible and estimating the potential yield that can be extracted. At the same time, this research provides relevant indications concerning the use of data obtained from low-cost open hand-dug or manually drilled wells (rarely used in hydrogeological research) for groundwater exploration of shallow aquifers.

  16. Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov

    PubMed Central

    Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V

    2016-01-01

    Objective: Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. Methods: We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. Results and Discussion: The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. PMID:27013523
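
    The gene-alteration pairing step can be approximated with a simplified rule-based sketch; the gene list and alteration pattern below are hypothetical stand-ins for the curated lexicons and patterns the framework actually uses.

      import re

      GENES = {"EGFR", "BRAF", "KRAS", "ALK", "ERBB2"}           # illustrative lexicon
      ALTERATION = re.compile(
          r"\b(mutation|amplification|fusion|deletion|[A-Z]\d+[A-Z])\b", re.IGNORECASE)

      def extract_pairs(text):
          """Pair each known gene with the alteration keywords in the same clause."""
          pairs = []
          for clause in re.split(r"[.;]", text):
              genes = [g for g in GENES if re.search(rf"\b{g}\b", clause)]
              alterations = ALTERATION.findall(clause)
              pairs.extend((g, a) for g in genes for a in alterations)
          return pairs

      criteria = "Patients must have a BRAF V600E mutation; prior EGFR amplification allowed."
      print(extract_pairs(criteria))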

  17. The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images.

    PubMed

    Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M

    2017-05-01

    To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R2 of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
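
    The classification step (grayscale and morphological features fed to logistic regression and summarized by the area under the ROC curve) can be sketched with scikit-learn; the feature matrix and labels below are synthetic placeholders, not the study data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(52, 6))               # rows = lesions, cols = extracted features
      y = rng.integers(0, 2, size=52)            # 0 = benign, 1 = malignant (synthetic)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                                stratify=y)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      az = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])   # the Az statistic
      print(round(az, 3))                        # near 0.5 here because the data are random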

  18. [Establishment of Automation System for Detection of Alcohol in Blood].

    PubMed

    Tian, L L; Shen, Lei; Xue, J F; Liu, M M; Liang, L J

    2017-02-01

    To establish an automation system for the detection of alcohol content in blood. The determination was performed with an automated extraction workstation coupled to headspace gas chromatography (HS-GC). Blood collection under negative pressure, the sealing time of the headspace bottle and the sample needle were checked and optimized in setting up the automation system, and automatic sampling was compared with manual sampling. The quantitative data obtained by the automated extraction-HS-GC workstation for alcohol were stable, with relative differences between two parallel samples of less than 5%. The automated extraction was superior to the manual extraction. A good linear relationship was obtained over the alcohol concentration range of 0.1-3.0 mg/mL (r ≥ 0.999) with good repeatability. The method is simple and quick, with a more standardized experimental process and accurate experimental data. It eliminates experimenter error and has good repeatability, and can be applied to the qualitative and quantitative detection of alcohol in blood. Copyright© by the Editorial Department of Journal of Forensic Medicine

  19. A literature search tool for intelligent extraction of disease-associated genes.

    PubMed

    Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2014-01-01

    To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than other general purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard manually updated gene-disorder databases and comparison with automated databases of similar functionality we show that our tool can search through the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.

  20. In-vitro antimicrobial activity of marine actinobacteria against multidrug resistance Staphylococcus aureus

    PubMed Central

    Sathish, Kumar SR; Kokati, Venkata Bhaskara Rao

    2012-01-01

    Objective To investigate the antibacterial activity of marine actinobacteria against multidrug-resistant Staphylococcus aureus (MDRSA). Methods Fifty-one actinobacterial strains were isolated from salt pan soils in the coastal area of Kothapattanam, Ongole, Andhra Pradesh. Primary screening was done using the cross-streak method against MDRSA. The bioactive compounds were extracted from the efficient actinobacteria using solvent extraction. The antimicrobial activity of the crude and solvent extracts was assessed using the Kirby-Bauer method. The MIC of the ethyl acetate extract was determined by a modified agar well diffusion method. The potent actinobacteria were identified using the Nonomura key and Shirling and Gottlieb 1966 together with Bergey's manual of determinative bacteriology. Results Among the fifty-one isolates screened for antibacterial activity, SRB25 was found to be efficient against MDRSA. The ethyl acetate extract showed high inhibition against the test organism. The MIC of the ethyl acetate extract against MDRSA was found to be 1 000 µg/mL. The isolated actinobacterium was identified as Streptomyces sp. with the help of the Nonomura key. Conclusions The current investigation reveals that marine actinobacteria from the salt pan environment are able to produce new drug molecules against drug-resistant microorganisms. PMID:23569848

  1. A method of vehicle license plate recognition based on PCANet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction used in traditional vehicle license plate methods is not robust to diverse changes, and the high dimension of features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, which is a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition and time. Compared with the variant without compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
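
    The dimensionality-reduction-plus-SVM stage can be sketched with scikit-learn, where SparseRandomProjection stands in for the RIP-compliant sparse measurement matrix and random vectors stand in for the PCANet features; the names and sizes are illustrative only.

      import numpy as np
      from sklearn.random_projection import SparseRandomProjection
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 4096))     # stand-in for high-dimensional PCANet features
      y = rng.integers(0, 34, size=500)    # e.g. 34 plate-character classes (synthetic)

      proj = SparseRandomProjection(n_components=256, random_state=0)
      X_low = proj.fit_transform(X)        # compressive measurement: 4096 -> 256 dims

      X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, test_size=0.2, random_state=0)
      clf = SVC(kernel="linear").fit(X_tr, y_tr)
      print(clf.score(X_te, y_te))         # chance-level here, since the data are random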

  2. Inter-comparison of Methods for Extracting Subsurface Layers from SHARAD Radargrams over Martian polar regions

    NASA Astrophysics Data System (ADS)

    Xiong, S.; Muller, J.-P.; Carretero, R. C.

    2017-09-01

    Subsurface layers are preserved in the polar regions of Mars, representing a record of past climate changes. Orbital radar instruments, such as the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) onboard ESA Mars Express (MEX) and the SHAllow RADar (SHARAD) onboard the Mars Reconnaissance Orbiter (MRO), transmit radar signals towards Mars and receive a set of return signals from these subsurface regions. Layering is a prominent subsurface feature, which has been revealed by both MARSIS and SHARAD radargrams over both polar regions of Mars. Automatic extraction of this subsurface layering is becoming increasingly important as there are now over ten years of archived data. In this study, we investigate two different methods for extracting these subsurface layers from SHARAD data and compare the results against manually delineated layers to validate which method is better for extracting these layers automatically.

  3. Longitudinal Analysis of New Information Types in Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei; Melton, Genevieve B.

    2014-01-01

    It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous, significant, and may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to identify redundant versus relevant new information in clinical reports. These methods may provide a valuable approach to extract clinically pertinent information and further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information highly correlated with manual reference standard annotations. Methods to identify different types of new information can potentially help to build up more robust information extraction systems for clinical researchers as well as aid clinicians and researchers in navigating clinical notes more effectively and quickly identify information pertaining to changes in health states. PMID:25717418

  4. Segmentation of Image Ensembles via Latent Atlases

    PubMed Central

    Van Leemput, Koen; Menze, Bjoern H.; Wells, William M.; Golland, Polina

    2010-01-01

    Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented. PMID:20580305

  5. Automated prostate cancer localization without the need for peripheral zone extraction using multiparametric MRI.

    PubMed

    Liu, Xin; Yetik, Imam Samil

    2011-06-01

    Multiparametric magnetic resonance imaging (MRI) has been shown to have higher localization accuracy than transrectal ultrasound (TRUS) for prostate cancer. Therefore, automated cancer segmentation using multiparametric MRI is receiving a growing interest, since MRI can provide both morphological and functional images for tissue of interest. However, all automated methods to this date are applicable to a single zone of the prostate, and the peripheral zone (PZ) of the prostate needs to be extracted manually, which is a tedious and time-consuming job. In this paper, our goal is to remove the need of PZ extraction by incorporating the spatial and geometric information of prostate tumors with multiparametric MRI derived from T2-weighted MRI, diffusion-weighted imaging (DWI) and dynamic contrast enhanced MRI (DCE-MRI). In order to remove the need of PZ extraction, the authors propose a new method to incorporate the spatial information of the cancer. This is done by introducing a new feature called location map. This new feature is constructed by applying a nonlinear transformation to the spatial position coordinates of each pixel, so that the location map implicitly represents the geometric position of each pixel with respect to the prostate region. Then, this new feature is combined with multiparametric MR images to perform tumor localization. The proposed algorithm is applied to multiparametric prostate MRI data obtained from 20 patients with biopsy-confirmed prostate cancer. The proposed method which does not need the masks of PZ was found to have prostate cancer detection specificity of 0.84, sensitivity of 0.80 and dice coefficient value of 0.42. The authors have found that fusing the spatial information allows us to obtain tumor outline without the need of PZ extraction with a considerable success (better or similar performance to methods that require manual PZ extraction). Our experimental results quantitatively demonstrate the effectiveness of the proposed method, depicting that the proposed method has a slightly better or similar localization performance compared to methods which require the masks of PZ.

  6. Purification of Training Samples Based on Spectral Feature and Superpixel Segmentation

    NASA Astrophysics Data System (ADS)

    Guan, X.; Qi, W.; He, J.; Wen, Q.; Chen, T.; Wang, Z.

    2018-04-01

    Remote sensing image classification is an effective way to extract information from large volumes of high-spatial-resolution remote sensing images. Generally, supervised image classification relies on abundant, high-precision training data, which are often manually interpreted by human experts to provide ground truth for training and for evaluating the performance of the classifier. Remote sensing enterprises have accumulated many manually interpreted products from earlier, lower-spatial-resolution remote sensing images through their routine research and business programs. However, these manually interpreted products may not match the very high resolution (VHR) image properly, because the two datasets differ in acquisition date or spatial resolution, or because the interpreted products cover only a small area, which hinders their suitability for training classification models. We face similar problems in our laboratory at 21st Century Aerospace Technology Co. Ltd (21AT). In this work, we propose a method to purify the interpreted products to match newly available VHR image data and provide the best training data for supervised classifiers in VHR image classification. The results indicate that our proposed method can efficiently purify the input data for future machine learning use.

  7. Analyzing depression tendency of web posts using an event-driven depression tendency warning model.

    PubMed

    Tung, Chiaming; Lu, Wenhsiang

    2016-01-01

    The Internet has become a platform for expressing individual moods and feelings of daily life, where authors share their thoughts in web blogs, micro-blogs, forums, bulletin board systems or other media. In this work, we investigate text-mining technology to analyze and predict the depression tendency of web posts. We defined depression factors, which include negative events, negative emotions, symptoms, and negative thoughts, from web posts. We proposed an enhanced event extraction (E3) method to automatically extract negative event terms. In addition, we also proposed an event-driven depression tendency warning (EDDTW) model to predict the depression tendency of web bloggers or post authors by analyzing their posted articles. We compare the performance of the proposed EDDTW model, a negative emotion evaluation (NEE) model, and a depression tendency evaluation method based on the Diagnostic and Statistical Manual of Mental Disorders. The EDDTW model obtains the best recall rate and F-measure at 0.668 and 0.624, respectively, while the Diagnostic and Statistical Manual of Mental Disorders-based method achieves the best precision rate of 0.666. The main reason is that our enhanced event extraction method can increase the recall rate by enlarging the negative event lexicon at the expense of precision. Our EDDTW model can also be used to track the change or trend of depression tendency for each post author. The depression tendency trend can help doctors to diagnose and even track the depression of web post authors more efficiently. This paper presents an E3 method to automatically extract negative event terms in web posts. We also propose a new EDDTW model to predict the depression tendency of web posts and possibly help bloggers or post authors detect major depressive disorder early. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. A Manual Segmentation Tool for Three-Dimensional Neuron Datasets.

    PubMed

    Magliaro, Chiara; Callara, Alejandro L; Vanello, Nicola; Ahluwalia, Arti

    2017-01-01

    To date, automated or semi-automated software and algorithms for segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be the manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows users to load an image stack, scroll through the images and manually draw the structures of interest stack-by-stack. Users can eliminate unwanted regions or split structures (i.e., branches from different neurons that are too close to each other but, to the experienced eye, clearly belong to a unique cell), view the object in 3D and save the results obtained. The tool can be used for testing the performance of a single-neuron segmentation algorithm or to extract complex objects where the available automated methods still fail. Here we describe the software's main features and then show an example of how ManSegTool can be used to segment neuron images acquired using a confocal microscope. In particular, expert neuroscientists were asked to segment different neurons, from which morphometric variables were subsequently extracted as a benchmark for precision. In addition, a literature-defined index for evaluating the goodness of segmentation was used as a benchmark for accuracy. Neocortical layer axons from a DIADEM challenge dataset were also segmented with ManSegTool and compared with the manual "gold-standard" generated for the competition.

  9. Ringer tablet-based ionic liquid phase microextraction: Application in extraction and preconcentration of neonicotinoid insecticides from fruit juice and vegetable samples.

    PubMed

    Farajzadeh, Mir Ali; Bamorowat, Mahdi; Mogaddam, Mohammad Reza Afshar

    2016-11-01

    An efficient, reliable, sensitive, rapid, and green analytical method for the extraction and determination of neonicotinoid insecticides in aqueous samples has been developed using ionic liquid phase microextraction coupled with high performance liquid chromatography with diode array detection. In this method, a few microliters of 1-hexyl-3-methylimidazolium hexafluorophosphate (as the extractant) are added onto a Ringer tablet, which is then transferred into a conical test tube containing the aqueous phase of the analytes. By manual shaking, the Ringer tablet is dissolved and the extractant is released into the aqueous phase as very tiny droplets, providing a cloudy solution. After centrifugation, the analytes extracted into the ionic liquid are collected at the bottom of the conical test tube. Under the optimum extraction conditions, the method showed low limits of detection and quantification, between 0.12-0.33 and 0.41-1.11 ng mL(-1), respectively. Extraction recoveries and enrichment factors were from 66% to 84% and 655% to 843%, respectively. Finally, different aqueous samples were successfully analyzed using the proposed method. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Evaluation Of A Powder-Free DNA Extraction Method For Skeletal Remains.

    PubMed

    Harrel, Michelle; Mayes, Carrie; Gangitano, David; Hughes-Stamm, Sheree

    2018-02-07

    Bones are often recovered in forensic investigations, including missing persons and mass disasters. While traditional DNA extraction methods rely on grinding bone into powder prior to DNA purification, the TBone Ex buffer (DNA Chip Research Inc.) digests bone chips without powdering. In this study, six bones were extracted using the TBone Ex kit in conjunction with the PrepFiler ® BTA™ DNA extraction kit (Thermo Fisher Scientific) both manually and via an automated platform. Comparable amounts of DNA were recovered from a 50 mg bone chip using the TBone Ex kit and 50 mg of powdered bone with the PrepFiler ® BTA™ kit. However, automated DNA purification decreased DNA yield (p < 0.05). Nevertheless, short tandem repeat (STR) success was comparable across all methods tested. This study demonstrates that digestion of whole bone fragments is an efficient alternative to powdering bones for DNA extraction without compromising downstream STR profile quality. © 2018 American Academy of Forensic Sciences.

  11. Contour detection improved by context-adaptive surround suppression.

    PubMed

    Sang, Qiang; Cai, Biao; Chen, Hao

    2017-01-01

    Recently, many image processing applications have taken advantage of a psychophysical and neurophysiological mechanism, called "surround suppression" to extract object contour from a natural scene. However, these traditional methods often adopt a single suppression model and a fixed input parameter called "inhibition level", which needs to be manually specified. To overcome these drawbacks, we propose a novel model, called "context-adaptive surround suppression", which can automatically control the effect of surround suppression according to image local contextual features measured by a surface estimator based on a local linear kernel. Moreover, a dynamic suppression method and its stopping mechanism are introduced to avoid manual intervention. The proposed algorithm is demonstrated and validated by a broad range of experimental results.

  12. Knowledge-Driven Event Extraction in Russian: Corpus-Based Linguistic Resources

    PubMed Central

    Solovyev, Valery; Ivanov, Vladimir

    2016-01-01

    Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work is indispensable in the development of an extraction system, either for corpus annotation or for creating the vocabularies and patterns of a knowledge-based system. Recent works have focused on adapting existing systems (for extraction from English texts) to new domains. Event extraction in other languages has not been studied, due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for the development of a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are basic building blocks for semantic patterns. We propose a set of methods for the creation of such vocabularies in Russian and other languages using the Google Books NGram Corpus. The methods are evaluated in the development of an event extraction system for Russian. PMID:26955386

  13. Assessment of MagNA pure LC extraction system for detection of human papillomavirus (HPV) DNA in PreservCyt samples by the Roche AMPLICOR and LINEAR ARRAY HPV tests.

    PubMed

    Stevens, Matthew P; Rudland, Elice; Garland, Suzanne M; Tabrizi, Sepehr N

    2006-07-01

    Roche Molecular Systems recently released two PCR-based assays, AMPLICOR and LINEAR ARRAY (LA), for the detection and genotyping, respectively, of human papillomaviruses (HPVs). The manual specimen processing method recommended for use with both assays, AmpliLute, can be time-consuming and labor-intensive and is open to potential specimen cross-contamination. We evaluated the Roche MagNA Pure LC (MP) as an alternative for specimen processing prior to use with either assay. DNA was extracted from cervical brushings, collected in PreservCyt media, by AmpliLute and MP using DNA-I and Total Nucleic Acid (TNA) kits, from 150 patients with histologically confirmed cervical abnormalities. DNA was amplified and detected by AMPLICOR and the LA HPV test. Concordances of 96.5% (139 of 144) (kappa=0.93) and 95.1% (135 of 142) (kappa=0.90) were generated by AMPLICOR when we compared DNA extracts from AmpliLute to MP DNA-I and TNA, respectively. The HPV genotype profiles were identical in 78.7 and 74.7% of samples between AmpliLute and DNA-I or TNA, respectively. To improve LA concordance, all 150 specimens were extracted by MP DNA-I protocol after the centrifugation of 1-ml PreservCyt samples. This modified approach improved HPV genotype concordance levels between AmpliLute and MP DNA-I to 88.0% (P=0.043) without affecting AMPLICOR sensitivity. Laboratories that have an automated MP extraction system would find this procedure more feasible and easier to handle than the recommended manual extraction method and could substitute such extractions for AMPLICOR and LA HPV tests once internally validated.
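
    The concordance and kappa figures are standard paired-agreement statistics and can be reproduced for any pair of result vectors; the ten paired calls below are invented purely to show the computation.

      from sklearn.metrics import cohen_kappa_score

      # hypothetical paired AMPLICOR results (1 = HPV detected) for two extraction methods
      amplilute = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
      magna_pure = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]

      agreement = sum(a == b for a, b in zip(amplilute, magna_pure)) / len(amplilute)
      kappa = cohen_kappa_score(amplilute, magna_pure)
      print(f"concordance: {agreement:.1%}, kappa: {kappa:.2f}")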

  14. What is a Dune: Developing AN Automated Approach to Extracting Dunes from Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Taylor, H.; DeCuir, C.; Wernette, P. A.; Taube, C.; Eyler, R.; Thopson, S.

    2016-12-01

    Coastal dunes can absorb storm surge and mitigate inland erosion caused by elevated water levels during a storm. In order to understand how a dune responds to and recovers from a storm, it is important that we can first identify and differentiate the beach and dune from the rest of the landscape. Current literature does not provide a consistent definition of what the dune features (e.g. dune toe, dune crest) are or how they can be extracted. The purpose of this research is to develop enhanced approaches to extracting dunes from a digital elevation model (DEM). Manual delineation, convergence index, least-cost path, relative relief, and vegetation abundance were compared and contrasted on a small area of Padre Island National Seashore (PAIS). Preliminary results indicate that the method used to extract the dune greatly affects our interpretation of how the dune changes. The manual delineation method was time intensive and subjective, while the convergence index approach was useful for easily identifying the dune crest through maximum and minimum values. The least-cost path method proved to be time intensive due to data clipping; however, this approach resulted in continuous geomorphic landscape features (e.g. dune toe, dune crest). While the relative relief approach shows the most features at multiple resolutions, it is difficult to assess the accuracy of the extracted features because they appear as points whose location can vary widely from one meter to the next. The vegetation approach was greatly impacted by seasonal and annual fluctuations in growth but is advantageous in historical change studies because it can be used to extract consistent dune formations from historical aerial imagery. Improving our ability to more accurately assess dune response and recovery after a storm will enable coastal managers to more accurately predict how dunes may respond to future climate change scenarios.
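
    The relative relief approach mentioned above can be sketched with moving-window filters: each cell is placed between the lowest and highest elevation in its neighbourhood, so crests sit near 1 and toes near 0. The window size and toy DEM below are assumptions for illustration.

      import numpy as np
      from scipy.ndimage import maximum_filter, minimum_filter

      def relative_relief(dem, window=11):
          """Position of each cell between the local min and max elevation (0..1)."""
          z_max = maximum_filter(dem, size=window)
          z_min = minimum_filter(dem, size=window)
          span = np.where(z_max > z_min, z_max - z_min, 1.0)
          return (dem - z_min) / span

      dem = np.random.default_rng(2).random((100, 100)).cumsum(axis=1)  # toy profile
      rr = relative_relief(dem, window=15)
      crest_candidates = rr > 0.95      # cells near the top of their local window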

  15. Impervious surface mapping with Quickbird imagery

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio

    2010-01-01

    This research selects two study areas with different urban developments, sizes, and spatial patterns to explore suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel based supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel based supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation-based methods can significantly reduce this problem. However, neither method can effectively resolve the spectral confusion of impervious surfaces with water/wetland and bare soils, or the impact of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to separate impervious surfaces from the confused land covers and to deal with the shadow problem. This research indicates that the hybrid method consisting of thresholding techniques, unsupervised classification and limited manual editing provides the best performance. PMID:21643434

  16. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. For this purpose, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, DRNN is constructed by the stacks of the recurrent hidden layer to automatically extract the features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
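
    Producing frequency-spectrum sequences from raw vibration signals can be sketched with numpy; the sampling rate, segment length and synthetic signal are placeholders, and the DRNN itself is not reproduced here.

      import numpy as np

      def spectrum_sequences(signal, segment_len=2048):
          """Split a raw signal into segments and take one-sided amplitude spectra."""
          n_segments = len(signal) // segment_len
          segments = signal[: n_segments * segment_len].reshape(n_segments, segment_len)
          return np.abs(np.fft.rfft(segments, axis=1)) / segment_len

      fs = 12000                          # assumed sampling rate, Hz
      t = np.arange(fs * 10) / fs
      raw = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
      X = spectrum_sequences(raw)         # (n_segments, segment_len // 2 + 1) network inputs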

  17. Methods for extracting social network data from chatroom logs

    NASA Astrophysics Data System (ADS)

    Osesina, O. Isaac; McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.; Bartley, Cecilia; Tudoreanu, M. Eduard

    2012-06-01

    Identifying social network (SN) links within computer-mediated communication platforms without explicit relations among users poses challenges to researchers. Our research aims to extract SN links in internet chat with multiple users engaging in synchronous overlapping conversations all displayed in a single stream. We approached this problem using three methods which build on previous research. Response-time analysis builds on temporal proximity of chat messages; word context usage builds on keywords analysis and direct addressing which infers links by identifying the intended message recipient from the screen name (nickname) referenced in the message [1]. Our analysis of word usage within the chat stream also provides contexts for the extracted SN links. To test the capability of our methods, we used publicly available data from Internet Relay Chat (IRC), a real-time computer-mediated communication (CMC) tool used by millions of people around the world. The extraction performances of individual methods and their hybrids were assessed relative to a ground truth (determined a priori via manual scoring).
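
    The direct-addressing heuristic is straightforward to sketch: if a message begins with another participant's nickname, a directed link from the sender to that participant is recorded. The message log and nicknames below are invented, and the response-time and word-context methods are not shown.

      import re
      from collections import Counter

      messages = [                       # (sender, text) pairs from a hypothetical chat log
          ("ana", "bob: did the build finish?"),
          ("bob", "ana yes, ten minutes ago"),
          ("cat", "anyone seen the logs?"),
          ("bob", "cat: they are in the usual place"),
      ]
      nicknames = {sender for sender, _ in messages}

      links = Counter()
      for sender, text in messages:
          first_token = re.split(r"[\s:,]+", text.strip())[0].lower()
          if first_token in nicknames and first_token != sender:
              links[(sender, first_token)] += 1      # directed edge sender -> addressee

      print(links)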

  18. Comparison of DNA extraction methods for human gut microbial community profiling.

    PubMed

    Lim, Mi Young; Song, Eun-Ji; Kim, Sang Ho; Lee, Jangwon; Nam, Young-Do

    2018-03-01

    The human gut harbors a vast range of microbes that have significant impact on health and disease. Therefore, gut microbiome profiling holds promise for use in early diagnosis and precision medicine development. Accurate profiling of the highly complex gut microbiome requires DNA extraction methods that provide sufficient coverage of the original community as well as adequate quality and quantity. We tested nine different DNA extraction methods using three commercial kits (TianLong Stool DNA/RNA Extraction Kit (TS), QIAamp DNA Stool Mini Kit (QS), and QIAamp PowerFecal DNA Kit (QP)) with or without an additional bead-beating step, using manual or automated methods, and compared them in terms of DNA extraction ability from human fecal samples. All methods produced DNA in sufficient concentration and quality for use in sequencing, and the samples were clustered according to the DNA extraction method. Inclusion of a bead-beating step in particular resulted in higher degrees of microbial diversity and had the greatest effect on gut microbiome composition. Among the samples subjected to the bead-beating method, TS kit samples were more similar to QP kit samples than to QS kit samples. Our results emphasize the importance of the mechanical disruption step for a more comprehensive profiling of the human gut microbiome. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.

  19. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the image in order to extract the long and thin protrusions on spiky cells. Then, the constraint factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth, that is, manual labelling by experts on RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. The positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
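
    The nuclei-seeded stage can be sketched with scikit-image: labelled nuclei serve as markers and a watershed on the distance transform splits the touching foreground into cells. The synthetic images and parameters are illustrative, and the scale-adaptive steerable filter and GCBAC step are not reproduced.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import gaussian, threshold_otsu
      from skimage.measure import label
      from skimage.segmentation import watershed

      rng = np.random.default_rng(4)
      cyto = gaussian(rng.random((256, 256)), sigma=8)      # stand-in cytoplasm channel
      nuclei = cyto > np.percentile(cyto, 97)               # stand-in nuclei mask

      seeds = label(nuclei)                                 # labelled nuclei as markers
      foreground = cyto > threshold_otsu(cyto)              # rough cell/background split
      distance = ndi.distance_transform_edt(foreground)
      cells = watershed(-distance, markers=seeds, mask=foreground)
      print("cells found:", cells.max())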

  20. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  1. Estimation of degree of sea ice ridging based on dual-polarized C-band SAR data

    NASA Astrophysics Data System (ADS)

    Gegiuc, Alexandru; Similä, Markku; Karvonen, Juha; Lensu, Mikko; Mäkynen, Marko; Vainio, Jouni

    2018-01-01

    For ship navigation in the Baltic Sea ice, parameters such as ice edge, ice concentration, ice thickness and degree of ridging are usually reported daily in manually prepared ice charts. These charts provide icebreakers with essential information for route optimization and fuel calculations. However, manual ice charting requires long analysis times, and detailed analysis of large areas (e.g. Arctic Ocean) is not feasible. Here, we propose a method for automatic estimation of the degree of ice ridging in the Baltic Sea region, based on RADARSAT-2 C-band dual-polarized (HH/HV channels) SAR texture features and sea ice concentration information extracted from Finnish ice charts. The SAR images were first segmented and then several texture features were extracted for each segment. Using the random forest method, we classified them into four classes of ridging intensity and compared them to the reference data extracted from the digitized ice charts. The overall agreement between the ice-chart-based degree of ice ridging and the automated results varied monthly, being 83, 63 and 81 % in January, February and March 2013, respectively. The correspondence between the degree of ice ridging reported in the ice charts and the actual ridge density was validated with data collected during a field campaign in March 2011. In principle the method can be applied to the seasonal sea ice regime in the Arctic Ocean.

  2. Dynamic leaching and fractionation of trace elements from environmental solids exploiting a novel circulating-flow platform.

    PubMed

    Mori, Masanobu; Nakano, Koji; Sasaki, Masaya; Shinozaki, Haruka; Suzuki, Shiho; Okawara, Chitose; Miró, Manuel; Itabashi, Hideyuki

    2016-02-01

    A dynamic flow-through microcolumn extraction system based on extractant re-circulation is herein proposed as a novel analytical approach for the simplification of bioaccessibility tests of trace elements in sediments. On-line metal leaching is undertaken in the format of all injection (AI) analysis, which is a sequel to flow injection analysis but involves extraction under steady-state conditions. The minimum circulation times and flow rates required to determine the maximum bioaccessible pools of target metals (viz., Cu, Zn, Cd, and Pb) from lake and river sediment samples were estimated using Tessier's sequential extraction scheme and an acid single extraction test. The on-line AI method was successfully validated by mass balance studies of CRM and real sediment samples. Tessier's test in on-line AI format was demonstrated to be carried out in one third of the extraction time (6 h against more than 17 h by the conventional method), with better analytical precision (<9.2% against >15% by the conventional method) and a significant decrease in blank readouts as compared with the manual batch counterpart. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. In-vitro antimicrobial activity of marine actinobacteria against multidrug resistance Staphylococcus aureus.

    PubMed

    Sathish, Kumar S R; Kokati, Venkata Bhaskara Rao

    2012-10-01

    To investigate the antibacterial activity of marine actinobacteria against multidrug-resistant Staphylococcus aureus (MDRSA). Fifty-one actinobacterial strains were isolated from salt pan soils in the coastal area of Kothapattanam, Ongole, Andhra Pradesh. Primary screening was done using the cross-streak method against MDRSA. The bioactive compounds were extracted from the efficient actinobacteria using solvent extraction. The antimicrobial activity of the crude and solvent extracts was assessed using the Kirby-Bauer method. The MIC of the ethyl acetate extract was determined by a modified agar well diffusion method. The potent actinobacteria were identified using the Nonomura key and Shirling and Gottlieb 1966 together with Bergey's manual of determinative bacteriology. Among the fifty-one isolates screened for antibacterial activity, SRB25 was found to be efficient against MDRSA. The ethyl acetate extract showed high inhibition against the test organism. The MIC of the ethyl acetate extract against MDRSA was found to be 1 000 µg/mL. The isolated actinobacterium was identified as Streptomyces sp. with the help of the Nonomura key. The current investigation reveals that marine actinobacteria from the salt pan environment are able to produce new drug molecules against drug-resistant microorganisms.

  4. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. Two components of each individual tree - a trunk and a crown - can be extracted by the dual growing method. This method consists of a coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, which avoids the common manual initialization; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown within constrained growing regions; and a refining process that draws a singular trunk out from the other interlaced objects. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.

  5. Follicular Unit Extraction Hair Transplantation with Micromotor: Eight Years Experience.

    PubMed

    Ors, Safvet; Ozkose, Mehmet; Ors, Sevgi

    2015-08-01

    Follicular unit extraction (FUE) has been performed for over a decade. This study covers our experience with patients who underwent hair transplantation using only the FUE method. A total of 1000 patients had hair transplantation using the FUE method between 2005 and 2014 in our clinic. A manual punch was used for graft harvesting in 32 patients and a micromotor in 968 patients. During the period when the manual punch was used for graft harvesting, 1000-2000 grafts were transplanted in one session of 6-8 h. After the micromotor was adopted, the average graft count increased to 2500 while the operation time remained unchanged. Graft take was difficult in 11.1%, easy in 52.2%, and very easy in 36.7% of our patients. The main purpose of hair transplantation is to restore lost hair. In doing so, obtaining a natural appearance and adequate hair density is important. In the FUE method, grafts can be taken without changing their natural structure, there is no need for magnification, and the grafts can be transplanted directly without any further processing. Because there is no suturing in the FUE method, patients do not experience incision-site problems or scar formation. The FUE method enables us to achieve a natural appearance with less morbidity.

  6. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data

    NASA Astrophysics Data System (ADS)

    Jia, Feng; Lei, Yaguo; Lin, Jing; Zhou, Xin; Lu, Na

    2016-05-01

    Aiming to promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rotating machinery. Among these studies, the methods based on artificial neural networks (ANNs) are commonly used, which employ signal processing techniques for extracting features and further input the features to ANNs for classifying faults. Though these methods did work in intelligent fault diagnosis of rotating machinery, they still have two deficiencies. (1) The features are manually extracted depending on much prior knowledge about signal processing techniques and diagnostic expertise. In addition, these manual features are extracted according to a specific diagnosis issue and probably unsuitable for other issues. (2) The ANNs adopted in these methods have shallow architectures, which limits the capacity of ANNs to learn the complex non-linear relationships in fault diagnosis issues. As a breakthrough in artificial intelligence, deep learning holds the potential to overcome the aforementioned deficiencies. Through deep learning, deep neural networks (DNNs) with deep architectures, instead of shallow ones, could be established to mine the useful information from raw data and approximate complex non-linear functions. Based on DNNs, a novel intelligent method is proposed in this paper to overcome the deficiencies of the aforementioned intelligent diagnosis methods. The effectiveness of the proposed method is validated using datasets from rolling element bearings and planetary gearboxes. These datasets contain massive measured signals involving different health conditions under various operating conditions. The diagnosis results show that the proposed method is able to not only adaptively mine available fault characteristics from the measured signals, but also obtain superior diagnosis accuracy compared with the existing methods.

  7. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the locating and delineating of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of the depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes using LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
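
    The class-imbalance handling can be sketched with scikit-learn's class weighting, which approximates a weighted random forest; the eleven synthetic predictors and the 10% positive rate below are placeholders for the study's variables and sinkhole prevalence.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n = 2000
      X = rng.normal(size=(n, 11))              # 11 geometric/contextual predictors
      y = (rng.random(n) < 0.1).astype(int)     # ~10% sinkholes: imbalanced classes

      rf = RandomForestClassifier(n_estimators=500,
                                  class_weight="balanced",   # weight the rare class up
                                  random_state=0)
      scores = cross_val_score(rf, X, y, cv=5, scoring="balanced_accuracy")
      print("balanced accuracy:", scores.mean().round(3))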

  8. An annotated corpus with nanomedicine and pharmacokinetic parameters

    PubMed Central

    Lewinski, Nastassja A; Jimenez, Ivan; McInnes, Bridget T

    2017-01-01

    A vast amount of data on nanomedicines is being generated and published, and natural language processing (NLP) approaches can automate the extraction of unstructured text-based data. Annotated corpora are a key resource for NLP and information extraction methods which employ machine learning. Although corpora are available for pharmaceuticals, resources for nanomedicines and nanotechnology are still limited. To foster nanotechnology text mining (NanoNLP) efforts, we have constructed a corpus of annotated drug product inserts taken from the US Food and Drug Administration’s Drugs@FDA online database. In this work, we present the development of the Engineered Nanomedicine Database corpus to support the evaluation of nanomedicine entity extraction. The data were manually annotated for 21 entity mentions consisting of nanomedicine physicochemical characterization, exposure, and biologic response information of 41 Food and Drug Administration-approved nanomedicines. We evaluate the reliability of the manual annotations and demonstrate the use of the corpus by evaluating two state-of-the-art named entity extraction systems, OpenNLP and Stanford NER. The annotated corpus is available open source and, based on these results, guidelines and suggestions for future development of additional nanomedicine corpora are provided. PMID:29066897

  9. Extraction of the number of peroxisomes in yeast cells by automated image analysis.

    PubMed

    Niemistö, Antti; Selinummi, Jyrki; Saleem, Ramsey; Shmulevich, Ilya; Aitchison, John; Yli-Harja, Olli

    2006-01-01

    An automated image analysis method for extracting the number of peroxisomes in yeast cells is presented. Two images of the cell population are required for the method: a bright field microscope image from which the yeast cells are detected and the respective fluorescent image from which the number of peroxisomes in each cell is found. The segmentation of the cells is based on clustering the local mean-variance space. The watershed transformation is thereafter employed to separate cells that are clustered together. The peroxisomes are detected by thresholding the fluorescent image. The method is tested with several images of a budding yeast Saccharomyces cerevisiae population, and the results are compared with manually obtained results.
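
    The thresholding-and-counting step can be sketched with scikit-image as below; the image is synthetic and the bright-field cell segmentation is omitted for brevity.

      import numpy as np
      from skimage import filters, measure

      # Synthetic stand-in for a fluorescent channel: bright spots on a dark background.
      rng = np.random.default_rng(0)
      fluo = rng.normal(10, 2, size=(256, 256))
      for r, c in rng.integers(20, 236, size=(30, 2)):
          fluo[r - 2:r + 3, c - 2:c + 3] += 100    # spot-like "peroxisomes"

      # Threshold the fluorescent image and label connected spots.
      mask = fluo > filters.threshold_otsu(fluo)
      labels = measure.label(mask)
      print("spots detected in the field of view:", labels.max())

      # With a per-cell label image from the bright-field segmentation, per-cell
      # counts would be obtained by intersecting each cell mask with labels.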

  10. Automation of DNA and miRNA co-extraction for miRNA-based identification of human body fluids and tissues.

    PubMed

    Kulstein, Galina; Marienfeld, Ralf; Miltner, Erich; Wiegand, Peter

    2016-10-01

    In recent years, microRNA (miRNA) analysis has come into focus in the field of forensic genetics. Yet, no standardized and recommendable protocols for co-isolation of miRNA and DNA from forensically relevant samples have been developed so far. Hence, this study evaluated the performance of an automated Maxwell® 16 System-based strategy (Promega) for co-extraction of DNA and miRNA from forensically relevant (blood and saliva) samples compared to (semi-)manual extraction methods. Three procedures were compared on the basis of recovered quantity of DNA and miRNA (as determined by real-time PCR and Bioanalyzer), miRNA profiling (shown by Cq values and extraction efficiency), STR profiles, duration, contamination risk, and handling. Overall, the results highlight that the automated co-extraction procedure yielded the highest miRNA and DNA amounts from saliva and blood samples compared to both (semi-)manual protocols. Also, for aged and genuine samples of forensically relevant traces, the miRNA and DNA yields were sufficient for subsequent downstream analysis. Furthermore, the strategy allows miRNA extraction only in cases where it is relevant to obtain additional information about the sample type. Besides, this system enables flexible sample throughput and labor-saving sample processing with reduced risk of cross-contamination. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    PubMed

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-11-06

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies.

  12. Unsupervised Method for Automatic Construction of a Disease Dictionary from a Large Free Text Collection

    PubMed Central

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-01-01

    Concept specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35–88%) over available, manually created disease terminologies. PMID:18999169

  13. Teaching artificial neural systems to drive: Manual training techniques for autonomous systems

    NASA Technical Reports Server (NTRS)

    Shepanski, J. F.; Macy, S. A.

    1987-01-01

    A methodology was developed for manually training autonomous control systems based on artificial neural systems (ANS). In applications where the rule set governing an expert's decisions is difficult to formulate, ANS can be used to extract rules by associating the information an expert receives with the actions taken. Properly constructed networks imitate rules of behavior that permit them to function autonomously when they are trained on the spanning set of possible situations. This training can be provided manually, either under the direct supervision of a system trainer, or indirectly using a background mode in which the network assimilates training data as the expert performs day-to-day tasks. To demonstrate these methods, an ANS network was trained to drive a vehicle through simulated freeway traffic.

  14. Single-trial event-related potential extraction through one-unit ICA-with-reference

    NASA Astrophysics Data System (ADS)

    Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    Objective. In recent years, ICA has been one of the more popular methods for extracting the event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used to guide the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, compared to traditional ICA, ICA-R is a more effective method for analysing the ERP because it avoids manual source selection and requires less computation, thus resulting in faster ERP extraction. Significance. In addition, since the method is automated, it reduces the risk of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.

  15. Single-trial event-related potential extraction through one-unit ICA-with-reference.

    PubMed

    Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    In recent years, ICA has been one of the more popular methods for extracting the event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used to guide the one-unit ICA-R to extract the source signal of the desired ERP directly. Our results showed that, compared to traditional ICA, ICA-R is a more effective method for analysing the ERP because it avoids manual source selection and requires less computation, thus resulting in faster ERP extraction. In addition, since the method is automated, it reduces the risk of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.

  16. A Method for Extracting Pigments from Squid Doryteuthis pealeii.

    PubMed

    DiBona, Christopher W; Williams, Thomas L; Dinneen, Sean R; Jones Labadie, Stephanie F; Deravi, Leila F

    2016-11-09

    Cephalopods can undergo rapid and adaptive changes in dermal coloration for sensing, communication, defense, and reproduction purposes. These capabilities are supported in part by the areal expansion and retraction of pigmented organs known as chromatophores. While it is known that the chromatophores contain a tethered network of pigmented granules, their structure-function properties have not been fully detailed. We describe a method for isolating the nanostructured granules in squid Doryteuthis pealeii chromatophores and demonstrate how their associated pigments can be extracted in acidic solvents. To accomplish this, the chromatophore containing dermal layer is first manually isolated using a superficial dissection, and the pigment granules are removed using sonication, centrifugation, and washing cycles. Pigments confined within the purified granules are then extracted via acidic methanol solutions, leaving nanostructures with smaller diameters that are void of visible color. This extraction procedure produces a 58% yield of soluble pigments isolated from granules. Using this method, the composition of the chromatophore pigments can be determined and used to provide insight into the mechanism of adaptive coloration in cephalopods.

  17. Non-Thermal, On-Site Decontamination and Destruction of Practice Bombs

    DTIC Science & Technology

    2006-06-12

    ... extraction (SW1311) followed by analysis (SW6010B with analysis for mercury by SW7470). The samples taken from the process tanks indicated in Table 3 ... A: Analytical Methods Supporting Project ... CD-ROM 7471A-1, Revision 1, September 1994 ... METHOD 7471A: MERCURY IN SOLID OR SEMISOLID WASTE (MANUAL COLD-VAPOR TECHNIQUE). 1.0 SCOPE AND APPLICATION. 1.1 Method 7471 is approved for measuring total mercury (organic and inorganic) in soils, sediments ...

  18. Revisiting chlorophyll extraction methods in biological soil crusts - methodology for determination of chlorophyll a and chlorophyll a + b as compared to previous methods

    NASA Astrophysics Data System (ADS)

    Caesar, Jennifer; Tamm, Alexandra; Ruckteschler, Nina; Lena Leifke, Anna; Weber, Bettina

    2018-03-01

    Chlorophyll concentrations of biological soil crust (biocrust) samples are commonly determined to quantify the relevance of photosynthetically active organisms within these surface soil communities. Whereas chlorophyll extraction methods for freshwater algae and leaf tissues of vascular plants are well established, there is still some uncertainty regarding the optimal extraction method for biocrusts, where organism composition is highly variable and samples comprise major amounts of soil. In this study we analyzed the efficiency of two different chlorophyll extraction solvents, the effect of grinding the soil samples prior to the extraction procedure, and the impact of shaking as an intermediate step during extraction. The analyses were conducted on four different types of biocrusts. Our results show that for all biocrust types chlorophyll contents obtained with ethanol were significantly lower than those obtained using dimethyl sulfoxide (DMSO) as a solvent. Grinding of biocrust samples prior to analysis caused a highly significant decrease in chlorophyll content for green algal lichen- and cyanolichen-dominated biocrusts, and a tendency towards lower values for moss- and algae-dominated biocrusts. Shaking of the samples after each extraction step had a significant positive effect on the chlorophyll content of green algal lichen- and cyanolichen-dominated biocrusts. Based on our results we confirm a DMSO-based chlorophyll extraction method without grinding pretreatment and suggest the addition of an intermediate shaking step for complete chlorophyll extraction (see Supplement S6 for detailed manual). Determination of a universal chlorophyll extraction method for biocrusts is essential for the inter-comparability of publications conducted across all continents.

  19. Apparatus And Method For Osl-Based, Remote Radiation Monitoring And Spectrometry

    DOEpatents

    Miller, Steven D.; Smith, Leon Eric; Skorpik, James R.

    2006-03-07

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  20. Apparatus and method for OSL-based, remote radiation monitoring and spectrometry

    DOEpatents

    Smith, Leon Eric [Richland, WA; Miller, Steven D [Richland, WA; Bowyer, Theodore W [Oakton, VA

    2008-05-20

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  1. Extracting Characteristics of the Study Subjects from Full-Text Articles

    PubMed Central

    Demner-Fushman, Dina; Mork, James G

    2015-01-01

    Characteristics of the subjects of biomedical research are important in determining if a publication describing the research is relevant to a search. To facilitate finding relevant publications, MEDLINE citations provide Medical Subject Headings that describe the subjects’ characteristics, such as their species, gender, and age. We seek to improve the recommendation of these headings by the Medical Text Indexer (MTI) that supports manual indexing of MEDLINE. To that end, we explore the potential of the full text of the publications. Using simple recall-oriented rule-based methods we determined that adding sentences extracted from the methods sections and captions to the abstracts prior to MTI processing significantly improved recall and F1 score with only a slight drop in precision. Improvements were also achieved in directly assigning several headings extracted from the full text. These results indicate the need for further development of automated methods capable of leveraging the full text for indexing. PMID:26958181

  2. Automatic segmentation of mandible in panoramic x-ray.

    PubMed

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-10-01

    As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of ramuses are extracted through a contour tracing method based on the average model of mandible. The best-matched template is fetched from the atlas of mandibles to complete the contour of left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.
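
    A rough illustration of the edge-detection step, not the paper's tuned pipeline: a Canny detector followed by morphological closing applied to a synthetic stand-in for a panoramic radiograph. Parameter values are assumptions.

      import numpy as np
      from skimage import feature, morphology

      # Synthetic stand-in for a grayscale panoramic radiograph.
      rng = np.random.default_rng(1)
      pano = rng.normal(0.5, 0.05, size=(300, 600))
      pano[180:220, :] += 0.4    # bright band standing in for the mandible body

      # Canny edges followed by morphological closing to bridge small gaps,
      # loosely mirroring the "modified Canny + morphological operators" step.
      edges = feature.canny(pano, sigma=2.0)
      closed = morphology.binary_closing(edges, morphology.disk(3))

      # The lowest edge pixel in each column gives a crude inferior-border profile.
      inferior = [np.nonzero(closed[:, c])[0].max() if closed[:, c].any() else None
                  for c in range(closed.shape[1])]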

  3. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    PubMed

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
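
    The statistical comparison reported above (Kruskal-Wallis test on volumes of distribution) reduces to a single SciPy call; the values below are placeholders, not the study data.

      from scipy.stats import kruskal

      # Hypothetical volumes of distribution for the same tracer obtained with
      # manual delineation vs. the leader-follower segmentation.
      vt_manual    = [1.8, 2.1, 1.9, 2.4, 2.0]
      vt_automatic = [1.9, 2.0, 2.0, 2.3, 2.1]

      h, p = kruskal(vt_manual, vt_automatic)
      print(f"H = {h:.3f}, p = {p:.3f}")  # p > 0.05 would indicate no significant difference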

  4. BIANCA (Brain Intensity AbNormality Classification Algorithm): A new tool for automated segmentation of white matter hyperintensities.

    PubMed

    Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark

    2016-11-01

    Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
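
    A toy version of the k-NN voxel classification at the core of BIANCA, not the FSL implementation: it assumes a feature matrix of per-voxel intensities plus spatial coordinates and a manually labelled training set, all of which are invented here.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(2)
      # Hypothetical training voxels: [FLAIR intensity, T1 intensity, x, y, z],
      # labelled 1 for WMH and 0 for normal-appearing tissue.
      X_train = rng.normal(size=(5000, 5))
      y_train = (rng.random(5000) < 0.05).astype(int)

      knn = KNeighborsClassifier(n_neighbors=15)
      knn.fit(X_train, y_train)

      # Probability of being a WMH for new voxels; thresholding gives the mask.
      X_new = rng.normal(size=(100, 5))
      wmh_prob = knn.predict_proba(X_new)[:, 1]
      wmh_mask = wmh_prob > 0.5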

  5. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities.

    PubMed

    Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E

    2012-11-20

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources.

  6. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities

    PubMed Central

    2012-01-01

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and Promega Maxwell®16 was compared to a widely used manual extraction kit, MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources. PMID:23168231

  7. Sieve-based relation extraction of gene regulatory networks from biological literature

    PubMed Central

    2015-01-01

    Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Conclusions Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to broad range of relation extraction tasks and data domains. PMID:26551454

  8. Sieve-based relation extraction of gene regulatory networks from biological literature.

    PubMed

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract different relationship types. Following the shared task, we conducted additional analysis using different system settings that resulted in reducing the reconstruction error of bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher accuracy of extraction. Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to broad range of relation extraction tasks and data domains.
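
    A minimal sketch of linear-chain CRF sequence labelling over a skip-mention-style sequence, assuming the sklearn-crfsuite package; the mentions, features, and relation tags are invented and the sieve architecture is not reproduced.

      import sklearn_crfsuite

      # Toy skip-mention sequence: each mention is a feature dict and each label
      # marks its role in a regulatory relation (tags and features are invented).
      X_train = [[{"word": "sigK",      "type": "gene"},
                  {"word": "regulates", "type": "trigger"},
                  {"word": "spoIVB",    "type": "gene"}]]
      y_train = [["AGENT", "O", "TARGET"]]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
      crf.fit(X_train, y_train)
      print(crf.predict(X_train))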

  9. Temporally rendered automatic cloud extraction (TRACE) system

    NASA Astrophysics Data System (ADS)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with manual method is included in this paper.
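
    The dynamic background subtraction step can be illustrated with a running-average model in NumPy; the frames and parameters are synthetic, and the 3D FFT stage used by TRACE is omitted.

      import numpy as np

      def update_background(background, frame, alpha=0.02):
          """Exponential running average of the scene without the cloud."""
          return (1.0 - alpha) * background + alpha * frame

      def cloud_mask(background, frame, threshold=12.0):
          """Pixels deviating strongly from the background are flagged as cloud."""
          return np.abs(frame.astype(float) - background) > threshold

      # Hypothetical 8-bit video frames of shape (rows, cols).
      rng = np.random.default_rng(3)
      frames = rng.integers(0, 255, size=(50, 240, 320)).astype(float)

      background = frames[0].copy()
      for frame in frames[1:]:
          mask = cloud_mask(background, frame)
          background = update_background(background, frame)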

  10. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.
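
    The roof/tree distinction drawn above (constant versus random height change) can be sketched as a local gradient-variance test on a rasterised LiDAR height image; the height array, window size, and threshold are illustrative assumptions, not the GBE parameters.

      import numpy as np
      from scipy import ndimage

      # Hypothetical LiDAR-derived height image (metres), one value per pixel.
      rng = np.random.default_rng(4)
      height = rng.normal(5.0, 0.1, size=(200, 200))

      gy, gx = np.gradient(height)
      grad_mag = np.hypot(gx, gy)

      # Roof planes: gradient magnitude varies little inside a local window;
      # trees: gradient magnitude fluctuates strongly.
      local_mean = ndimage.uniform_filter(grad_mag, size=9)
      local_sq   = ndimage.uniform_filter(grad_mag ** 2, size=9)
      local_var  = local_sq - local_mean ** 2

      building_like = local_var < 0.05   # low gradient variance -> planar surface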

  11. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  12. In Data We Trust? Comparison of Electronic Versus Manual Abstraction of Antimicrobial Prescribing Quality Metrics for Hospitalized Veterans With Pneumonia.

    PubMed

    Jones, Barbara E; Haroldsen, Candace; Madaras-Kelly, Karl; Goetz, Matthew B; Ying, Jian; Sauer, Brian; Jones, Makoto M; Leecaster, Molly; Greene, Tom; Fridkin, Scott K; Neuhauser, Melinda M; Samore, Matthew H

    2018-07-01

    Electronic health records provide the opportunity to assess system-wide quality measures. Veterans Affairs Pharmacy Benefits Management Center for Medication Safety uses medication use evaluation (MUE) through manual review of the electronic health records. To compare an electronic MUE approach versus human/manual review for extraction of antibiotic use (choice and duration) and severity metrics. Retrospective. Hospitalizations for uncomplicated pneumonia occurring during 2013 at 30 Veterans Affairs facilities. We compared summary statistics, individual hospitalization-level agreement, facility-level consistency, and patterns of variation between electronic and manual MUE for initial severity, antibiotic choice, daily clinical stability, and antibiotic duration. Among 2004 hospitalizations, electronic and manual abstraction methods showed high individual hospitalization-level agreement for initial severity measures (agreement=86%-98%, κ=0.5-0.82), antibiotic choice (agreement=89%-100%, κ=0.70-0.94), and facility-level consistency for empiric antibiotic choice (anti-MRSA r=0.97, P<0.001; antipseudomonal r=0.95, P<0.001) and therapy duration (r=0.77, P<0.001) but lower facility-level consistency for days to clinical stability (r=0.52, P=0.006) or excessive duration of therapy (r=0.55, P=0.005). Both methods identified widespread facility-level variation in antibiotic choice, but we found additional variation in manual estimation of excessive antibiotic duration and initial illness severity. Electronic and manual MUE agreed well for illness severity, antibiotic choice, and duration of therapy in pneumonia at both the individual and facility levels. Manual MUE showed additional reviewer-level variation in estimation of initial illness severity and excessive antibiotic use. Electronic MUE allows for reliable, scalable tracking of national patterns of antimicrobial use, enabling the examination of system-wide interventions to improve quality.
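
    The agreement statistics quoted above map directly onto standard library calls; the vectors below are placeholders standing in for per-hospitalization abstraction results and per-facility rates.

      from sklearn.metrics import cohen_kappa_score
      from scipy.stats import pearsonr

      # Hypothetical per-hospitalization antibiotic-choice labels (1 = anti-MRSA given).
      manual     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
      electronic = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
      print("kappa:", cohen_kappa_score(manual, electronic))

      # Hypothetical facility-level rates of anti-MRSA use from each method.
      facility_manual     = [0.42, 0.31, 0.55, 0.60, 0.25]
      facility_electronic = [0.40, 0.33, 0.57, 0.58, 0.27]
      r, p = pearsonr(facility_manual, facility_electronic)
      print(f"facility-level consistency: r = {r:.2f}, p = {p:.3f}")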

  13. Assessing the role of a medication-indication resource in the treatment relation extraction from clinical text

    PubMed Central

    Bejan, Cosmin Adrian; Wei, Wei-Qi; Denny, Joshua C

    2015-01-01

    Objective To evaluate the contribution of the MEDication Indication (MEDI) resource and SemRep for identifying treatment relations in clinical text. Materials and methods We first processed clinical documents with SemRep to extract the Unified Medical Language System (UMLS) concepts and the treatment relations between them. Then, we incorporated MEDI into a simple algorithm that identifies treatment relations between two concepts if they match a medication-indication pair in this resource. For a better coverage, we expanded MEDI using ontology relationships from RxNorm and UMLS Metathesaurus. We also developed two ensemble methods, which combined the predictions of SemRep and the MEDI algorithm. We evaluated our selected methods on two datasets, a Vanderbilt corpus of 6864 discharge summaries and the 2010 Informatics for Integrating Biology and the Bedside (i2b2)/Veteran's Affairs (VA) challenge dataset. Results The Vanderbilt dataset included 958 manually annotated treatment relations. A double annotation was performed on 25% of relations with high agreement (Cohen's κ = 0.86). The evaluation consisted of comparing the manual annotated relations with the relations identified by SemRep, the MEDI algorithm, and the two ensemble methods. On the first dataset, the best F1-measure results achieved by the MEDI algorithm and the union of the two resources (78.7 and 80, respectively) were significantly higher than the SemRep results (72.3). On the second dataset, the MEDI algorithm achieved better precision and significantly lower recall values than the best system in the i2b2 challenge. The two systems obtained comparable F1-measure values on the subset of i2b2 relations with both arguments in MEDI. Conclusions Both SemRep and MEDI can be used to extract treatment relations from clinical text. Knowledge-based extraction with MEDI outperformed use of SemRep alone, but superior performance was achieved by integrating both systems. The integration of knowledge-based resources such as MEDI into information extraction systems such as SemRep and the i2b2 relation extractors may improve treatment relation extraction from clinical text. PMID:25336593

  14. Validated Automatic Brain Extraction of Head CT Images

    PubMed Central

    Muschelli, John; Ullman, Natalie L.; Mould, W. Andrew; Vespa, Paul; Hanley, Daniel F.; Crainiceanu, Ciprian M.

    2015-01-01

    Background X-ray Computed Tomography (CT) imaging of the brain is commonly used in diagnostic settings. Although CT scans are primarily used in clinical practice, they are increasingly used in research. A fundamental processing step in brain imaging research is brain extraction – the process of separating the brain tissue from all other tissues. Methods for brain extraction have either been 1) validated but not fully automated, or 2) fully automated and informally proposed, but never formally validated. Aim To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and data smoothing are studied and reported. Methods All images were thresholded using a 0 – 100 Hounsfield units (HU) range. In one variant of the pipeline, data were smoothed using a 3-dimensional Gaussian kernel (σ = 1mm3) and re-thresholded to 0 – 100 HU; in the other, data were not smoothed. BET was applied using 1 of 3 fractional intensity (FI) thresholds: 0.01, 0.1, or 0.35 and any holes in the brain mask were filled. For validation against a manual segmentation, 36 images from patients with intracranial hemorrhage were selected from 19 different centers from the MISTIE (Minimally Invasive Surgery plus recombinant-tissue plasminogen activator for Intracerebral Evacuation) stroke trial. Intracranial masks of the brain were manually created by one expert CT reader. The resulting brain tissue masks were quantitatively compared to the manual segmentations using sensitivity, specificity, accuracy, and the Dice Similarity Index (DSI). Brain extraction performance across smoothing and FI thresholds was compared using the Wilcoxon signed-rank test. The intracranial volume (ICV) of each scan was estimated by multiplying the number of voxels in the brain mask by the dimensions of each voxel for that scan. From this, we calculated the ICV ratio comparing manual and automated segmentation: ICV_automated / ICV_manual. To estimate the performance in a large number of scans, brain masks were generated from the 6 BET pipelines for 1095 longitudinal scans from 129 patients. Failure rates were estimated from visual inspection. ICV of each scan was estimated, and an intraclass correlation (ICC) was estimated using a one-way ANOVA. Results Smoothing images improves brain extraction results using BET for all measures except specificity (all p < 0.01, uncorrected), irrespective of the FI threshold. Using an FI of 0.01 or 0.1 performed better than 0.35. Thus, all reported results refer only to smoothed data using an FI of 0.01 or 0.1. Using an FI of 0.01 had a higher median sensitivity (0.9901) than an FI of 0.1 (0.9884, median difference: 0.0014, p < 0.001), accuracy (0.9971 vs. 0.9971; median difference: 0.0001, p < 0.001), and DSI (0.9895 vs. 0.9894; median difference: 0.0004, p < 0.001) and lower specificity (0.9981 vs. 0.9982; median difference: −0.0001, p < 0.001). These measures are all very high indicating that a range of FI values may produce visually indistinguishable brain extractions. Using smoothed data and an FI of 0.01, the mean (SD) ICV ratio was 1.002 (0.008); the mean being close to 1 indicates the ICV estimates are similar for automated and manual segmentation.
In the 1095 longitudinal scans, this pipeline had a low failure rate (5.2%) and the ICC estimate was high (0.929, 95% CI: 0.91, 0.945) for successfully extracted brains. Conclusion BET performs well at brain extraction on thresholded, 1mm3 smoothed CT images with an FI of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Analysis code is provided. PMID:25862260
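
    A condensed sketch of the best-performing pipeline (0-100 HU threshold, Gaussian smoothing, BET with FI 0.01, Dice against a manual mask), assuming FSL is installed and using placeholder file names; this is not the authors' released code, and the smoothing sigma here is in voxel units rather than the 1 mm stated in the abstract.

      import subprocess
      import numpy as np
      import nibabel as nib
      from scipy.ndimage import gaussian_filter

      img = nib.load("head_ct.nii.gz")                     # hypothetical input scan
      data = img.get_fdata()

      # Threshold to 0-100 HU, smooth with a Gaussian kernel, re-threshold.
      data = np.clip(data, 0, 100)
      data = gaussian_filter(data, sigma=1.0)
      data = np.clip(data, 0, 100)
      nib.save(nib.Nifti1Image(data, img.affine), "ct_smoothed.nii.gz")

      # Run FSL BET with a fractional intensity threshold of 0.01 and mask output.
      subprocess.run(["bet", "ct_smoothed.nii.gz", "ct_brain", "-f", "0.01", "-m"],
                     check=True)

      # Dice Similarity Index against a manual intracranial mask.
      auto = nib.load("ct_brain_mask.nii.gz").get_fdata() > 0
      manual = nib.load("manual_mask.nii.gz").get_fdata() > 0
      dice = 2 * np.logical_and(auto, manual).sum() / (auto.sum() + manual.sum())
      print("Dice similarity index:", dice)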

  15. Chapter A5. Processing of Water Samples

    USGS Publications Warehouse

    Wilde, Franceska D.; Radtke, Dean B.; Gibs, Jacob; Iwatsubo, Rick T.

    1999-01-01

    The National Field Manual for the Collection of Water-Quality Data (National Field Manual) describes protocols and provides guidelines for U.S. Geological Survey (USGS) personnel who collect data used to assess the quality of the Nation's surface-water and ground-water resources. This chapter addresses methods to be used in processing water samples to be analyzed for inorganic and organic chemical substances, including the bottling of composite, pumped, and bailed samples and subsamples; sample filtration; solid-phase extraction for pesticide analyses; sample preservation; and sample handling and shipping. Each chapter of the National Field Manual is published separately and revised periodically. Newly published and revised chapters will be announced on the USGS Home Page on the World Wide Web under 'New Publications of the U.S. Geological Survey.' The URL for this page is http://water.usgs.gov/lookup/get?newpubs.

  16. Digital representation of oil and natural gas well pad scars in southwest Wyoming

    USGS Publications Warehouse

    Garman, Steven L.; McBeth, Jamie L.

    2014-01-01

    The recent proliferation of oil and natural gas energy development in southwest Wyoming has stimulated the need to understand wildlife responses to this development. Central to many wildlife assessments is the use of geospatial methods that rely on digital representation of energy infrastructure. Surface disturbance of the well pad scars associated with oil and natural gas extraction has been an important but unavailable infrastructure layer. To provide a digital baseline of this surface disturbance, we extracted visible oil and gas well pad scars from 1-meter National Agriculture Imagery Program imagery (NAIP) acquired in 2009 for a 7.7 million-hectare region of southwest Wyoming. Scars include the pad area where wellheads, pumps, and storage facilities reside, and the surrounding area that was scraped and denuded of vegetation during the establishment of the pad. Scars containing tanks, compressors, and the storage of oil and gas related equipment, and produced-water ponds were also collected on occasion. Our extraction method was a two-step process starting with automated extraction followed by manual inspection and clean up. We used available well-point information to guide manual clean up and to derive estimates of year of origin and duration of activity on a pad scar. We also derived estimates of the proportion of non-vegetated area on a scar using a Normalized Difference Vegetation Index derived using 1-meter NAIP imagery. We extracted 16,973 pad scars of which 15,318 were oil and gas well pads. Digital representation of pad scars along with time-stamps of activity and estimates of non-vegetated area provides important baseline (circa 2009) data for assessments of wildlife responses, land-use trends, and disturbance-mediated pattern assessments.
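
    The non-vegetated-fraction estimate can be reproduced in outline from the NAIP NIR and red bands; the band arrays, the NDVI cut-off, and the scar mask below are placeholders.

      import numpy as np

      # Hypothetical 1-m NAIP bands clipped to one pad scar (float arrays).
      rng = np.random.default_rng(5)
      nir = rng.uniform(0.1, 0.6, size=(100, 100))
      red = rng.uniform(0.1, 0.6, size=(100, 100))
      scar_mask = np.ones((100, 100), dtype=bool)   # pixels inside the scar polygon

      ndvi = (nir - red) / (nir + red + 1e-9)

      # Pixels below an NDVI cut-off are treated as non-vegetated (cut-off assumed).
      non_veg = (ndvi < 0.2) & scar_mask
      print("non-vegetated proportion:", non_veg.sum() / scar_mask.sum())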

  17. Feature Selection in Order to Extract Multiple Sclerosis Lesions Automatically in 3D Brain Magnetic Resonance Images Using Combination of Support Vector Machine and Genetic Algorithm.

    PubMed

    Khotanlou, Hassan; Afrasiabi, Mahlagha

    2012-10-01

    This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. The presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are first preprocessed. In the next phase, effective features for extracting MS lesions are selected using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. The algorithm is evaluated on 15 real 3D MR images using several measures. As a result, the SI between MS regions determined by the proposed method and radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and efficiency of the proposed approach.

  18. Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images.

    PubMed

    Jian, Junming; Xiong, Fei; Xia, Wei; Zhang, Rui; Gu, Jinhui; Wu, Xiaodong; Meng, Xiaochun; Gao, Xin

    2018-06-01

    Segmentation of colorectal tumors is the basis of preoperative prediction, staging, and therapeutic response evaluation. Due to the blurred boundary between lesions and normal colorectal tissue, accurate segmentation is difficult to achieve. Routine manual or semi-manual segmentation methods are extremely tedious, time-consuming, and highly operator-dependent. In the framework of FCNs, a segmentation method for colorectal tumors is presented. Normalization was applied to reduce the differences among images. Borrowing from transfer learning, VGG-16 was employed to extract features from normalized images. Five side-output blocks were attached to the last convolutional layer of each block of VGG-16 along the network; these side-output blocks capture multiscale features and produce corresponding predictions. Finally, all of the predictions from the side-output blocks were fused to determine the final boundaries of the tumors. A quantitative comparison of 2772 colorectal tumor manual segmentation results from T2-weighted magnetic resonance images shows that the average Dice similarity coefficient, positive predictive value, specificity, sensitivity, Hammoude distance, and Hausdorff distance were 83.56, 82.67, 96.75, 87.85%, 0.2694, and 8.20, respectively. The proposed method is superior to U-net in colorectal tumor segmentation (P < 0.05). There is no difference between cross-entropy loss and Dice-based loss in colorectal tumor segmentation (P > 0.05). The results indicate that the introduction of FCNs contributed to accurate segmentation of colorectal tumors. This method has the potential to replace the present time-consuming and nonreproducible manual segmentation method.
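
    The Dice similarity coefficient used in the evaluation above is straightforward to compute from a predicted and a manual mask; the arrays below are random placeholders.

      import numpy as np

      def dice_coefficient(pred, target, eps=1e-7):
          """Dice similarity between two binary masks."""
          pred = pred.astype(bool)
          target = target.astype(bool)
          intersection = np.logical_and(pred, target).sum()
          return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

      # Hypothetical FCN prediction and manual segmentation for one slice.
      rng = np.random.default_rng(6)
      pred   = rng.random((256, 256)) > 0.5
      manual = rng.random((256, 256)) > 0.5
      print("Dice:", dice_coefficient(pred, manual))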

  19. The paradox of sham therapy and placebo effect in osteopathy

    PubMed Central

    Cerritelli, Francesco; Verzella, Marco; Cicchitti, Luca; D’Alessandro, Giandomenico; Vanacore, Nicola

    2016-01-01

    Background: Placebo, defined as "false treatment," is a common gold-standard method to assess the validity of a therapy both in pharmacological trials and in manual medicine research, where placebo is also referred to as "sham therapy." In the medical literature, guidelines have been proposed on how to conduct robust placebo-controlled trials, but mainly in a drug-based scenario. In contrast, there are no precise guidelines on how to conduct a placebo-controlled trial in manual medicine (particularly osteopathy). The aim of the present systematic review was to report how and what type of sham methods, dosage, operator characteristics, and patient types were used in osteopathic clinical trials and, eventually, to assess sham clinical effectiveness. Methods: A systematic Cochrane-based review was conducted by analyzing the osteopathic trials that used manual and/or nonmanual placebo controls. Searches were conducted on 8 databases from journal inception to December 2015 using a pragmatic literature search approach. Two independent reviewers conducted the study selection and data extraction for each study. The risk of bias was evaluated according to the Cochrane methods. Results: A total of 64 studies, comprising 5024 participants, were eligible for analysis. More than half (43 studies) used a manual placebo; 9 studies used a nonmanual placebo; and 12 studies used both manual and nonmanual placebo. The data showed a lack of reporting of sham therapy information across studies. Risk of bias analysis demonstrated a high risk of bias for allocation, blinding of personnel and participants, selective reporting, and other bias. To explore the clinical effects of the sham therapies used, a quantitative analysis was planned. However, due to the high heterogeneity of the sham approaches used, no further analyses were performed. Conclusion: High heterogeneity of the placebos used between studies, a lack of reported information on placebo methods, and within-study variability between sham and real treatment procedures suggest prudence in reading and interpreting study findings in manual osteopathic randomized controlled trials (RCTs). Efforts must be made to promote guidelines to design the most reliable placebo for manual RCTs as a means of increasing the internal validity and improving the external validity of findings. PMID:27583913

  20. Context-Aware Adaptive Hybrid Semantic Relatedness in Biomedical Science

    NASA Astrophysics Data System (ADS)

    Emadzadeh, Ehsan

    Text mining of biomedical literature and clinical notes is a very active field of research in biomedical science. Semantic analysis is one of the core modules of different Natural Language Processing (NLP) solutions. Methods for calculating the semantic relatedness of two concepts can be very useful in solutions to different problems such as relationship extraction, ontology creation, and question answering [1-6]. Several techniques exist for calculating the semantic relatedness of two concepts, and these techniques utilize different knowledge sources and corpora. So far, researchers have attempted to find the best hybrid method for each domain by combining semantic relatedness techniques and data sources manually. In this work, we attempted to eliminate the need for manually combining semantic relatedness methods for each new context or resource by proposing an automated method that finds the best combination of semantic relatedness techniques and resources to achieve the best semantic relatedness score in every context. This may help the research community find the best hybrid method for each context given the available algorithms and resources.

  1. The Efficiency of Random Forest Method for Shoreline Extraction from LANDSAT-8 and GOKTURK-2 Imageries

    NASA Astrophysics Data System (ADS)

    Bayram, B.; Erdem, F.; Akpinar, B.; Ince, A. K.; Bozkurt, S.; Catal Reis, H.; Seker, D. Z.

    2017-11-01

    Coastal monitoring plays a vital role in environmental planning and hazard management. Since shorelines are fundamental data for environmental management, disaster management, coastal erosion studies, modelling of sediment transport, and coastal morphodynamics, various techniques have been developed to extract them. Random Forest, a machine learning method based on decision trees, is the technique used in this study for shoreline extraction. Decision trees analyse classes of training data and create rules for classification. The Terkos region was chosen as the study area within the scope of TUBITAK Project No. 115Y718, "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model - Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm was implemented to extract the shoreline of the Black Sea near the lake from LANDSAT-8 and GOKTURK-2 satellite imagery acquired in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method was applied to the NIR bands of the LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imagery. Each image was also digitized manually, and the resulting shorelines were used for accuracy assessment. According to the accuracy assessment results, the Random Forest method is efficient for shoreline extraction from both medium and high resolution images.
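
    The study itself was carried out in MATLAB; as a rough illustration of the same classification step, a Python/scikit-learn sketch of land/water labelling on a single NIR band, followed by a simple shoreline (land pixels touching water) extraction, could look like this. All names and the 4-neighbour shoreline rule are our own assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def classify_land_water(nir, train_rows, train_cols, train_labels, n_trees=100):
            """nir: 2-D NIR band; training pixels come from manually digitized reference data."""
            X_train = nir[train_rows, train_cols].reshape(-1, 1)   # single feature: NIR value
            clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
            clf.fit(X_train, train_labels)                         # 0 = water, 1 = land
            labels = clf.predict(nir.reshape(-1, 1)).reshape(nir.shape)
            # shoreline: land pixels with at least one water neighbour (4-neighbourhood)
            shore = np.zeros(labels.shape, dtype=bool)
            shore[1:-1, 1:-1] = (labels[1:-1, 1:-1] == 1) & (
                (labels[:-2, 1:-1] == 0) | (labels[2:, 1:-1] == 0) |
                (labels[1:-1, :-2] == 0) | (labels[1:-1, 2:] == 0))
            return labels, shore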

  2. Validation of a high-throughput real-time polymerase chain reaction assay for the detection of capripoxviral DNA.

    PubMed

    Stubbs, Samuel; Oura, Chris A L; Henstock, Mark; Bowden, Timothy R; King, Donald P; Tuppurainen, Eeva S M

    2012-02-01

    Capripoxviruses, which are endemic in much of Africa and Asia, are the aetiological agents of economically devastating poxviral diseases in cattle, sheep and goats. The aim of this study was to validate a high-throughput real-time PCR assay for routine diagnostic use in a capripoxvirus reference laboratory. The performance of two previously published real-time PCR methods was compared using commercially available reagents, including the amplification kits recommended in the original publication. Furthermore, both manual and robotic extraction methods used to prepare template nucleic acid were evaluated using samples collected from experimentally infected animals. The optimised assay had an analytical sensitivity of at least 63 target DNA copies per reaction, displayed a greater diagnostic sensitivity than conventional gel-based PCR, detected capripoxviruses isolated from outbreaks around the world and did not amplify DNA from related viruses in the genera Orthopoxvirus or Parapoxvirus. The high-throughput robotic DNA extraction procedure did not adversely affect the sensitivity of the assay compared to manual preparation of PCR templates. This laboratory-based assay provides a rapid and robust method to detect capripoxviruses following suspicion of disease in endemic or disease-free countries. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  3. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rossi, P; Jani, A

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated in a clinical study of 10 patients. The accuracy of our approach was assessed using manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance between our segmentation and the manual segmentation was 1.52 ± 0.57 mm, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation (gold standard). This segmentation technique could be a useful tool for image-guided interventions in prostate-cancer diagnosis and treatment. This research is supported in part by DOD PCRP Award W81XWH-13-1-0269 and National Cancer Institute (NCI) Grant CA114313.
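
    The training/segmentation idea (patch features plus a kernel SVM voxel classifier) can be sketched with scikit-learn as follows. This is only a schematic analogue, not the authors' implementation; feature extraction and registration are assumed to have been done elsewhere, and all names are ours.

        from sklearn.svm import SVC

        def train_and_segment(X_train, y_train, X_new, new_shape):
            """X_train: patch-based anatomical feature vectors from registered training images;
            y_train: 1 for prostate voxels, 0 for background (from manual segmentations)."""
            ksvm = SVC(kernel="rbf", C=1.0, gamma="scale")   # kernel support vector machine
            ksvm.fit(X_train, y_train)
            labels = ksvm.predict(X_new)                     # classify every voxel of the new image
            return labels.reshape(new_shape)                 # binary prostate mask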

  4. Classifying zones of suitability for manual drilling using textural and hydraulic parameters of shallow aquifers: a case study in northwestern Senegal

    NASA Astrophysics Data System (ADS)

    Fussi, F. Fabio; Fumagalli, Letizia; Fava, Francesco; Di Mauro, Biagio; Kane, Cheik Hamidou; Niang, Magatte; Wade, Souleye; Hamidou, Barry; Colombo, Roberto; Bonomi, Tullia

    2017-12-01

    A method is proposed that uses analysis of borehole stratigraphic logs for the characterization of shallow aquifers and for the assessment of areas suitable for manual drilling. The model is based on available borehole-log parameters: depth to hard rock, depth to water, thickness of laterite and hydraulic transmissivity of the shallow aquifer. The model is applied to a study area in northwestern Senegal. A dataset of borehole logs has been processed using a software package (TANGAFRIC) developed during the research. After a manual procedure to assign a standard category describing the lithological characteristics, the next step is the automated extraction of different textural parameters and the estimation of hydraulic conductivity using reference values available in the literature. The hydraulic conductivity values estimated from stratigraphic data have been partially validated by comparing them with measured values from a series of pumping tests carried out in large-diameter wells. The results show that this method is able to produce a reliable interpretation of the shallow hydrogeological context using information generally available in the region. The research contributes to improving the identification of areas where conditions are suitable for manual drilling. This is achieved by applying the described method, based on a structured and semi-quantitative approach, to classify the zones of suitability for given manual drilling techniques using data available in most African countries. Ultimately, this work will support proposed international programs aimed at promoting low-cost water supply in Africa and enhancing access to safe drinking water for the population.
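
    One computational step of such a workflow, estimating transmissivity from a standardized lithological log using literature conductivity values, can be sketched as follows. The lithology codes and conductivity values below are placeholders for illustration only, not those used in TANGAFRIC.

        # reference hydraulic conductivity per lithology class (m/day) -- placeholder values
        K_REF = {"coarse_sand": 20.0, "medium_sand": 5.0, "fine_sand": 1.0, "clay": 0.001}

        def transmissivity(layers):
            """layers: list of (lithology_code, thickness_m) for the saturated shallow aquifer.
            Transmissivity is the sum of conductivity times thickness over the layers."""
            return sum(K_REF[lith] * thickness for lith, thickness in layers)

        # example: 6 m of medium sand over 2 m of fine sand -> 32 m^2/day
        t = transmissivity([("medium_sand", 6.0), ("fine_sand", 2.0)])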

  5. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information about the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. The algorithm is also highly robust, as it was successfully run on a large dataset that included different kinds of images with various orientations, spacings, and origins.

  6. Cardiac Multi-detector CT Segmentation Based on Multiscale Directional Edge Detector and 3D Level Set.

    PubMed

    Antunes, Sofia; Esposito, Antonio; Palmisano, Anna; Colantoni, Caterina; Cerutti, Sergio; Rizzo, Giovanna

    2016-05-01

    Extraction of the cardiac surfaces of interest from multi-detector computed tomographic (MDCT) data is a prerequisite step for cardiac analysis, as well as for image guidance procedures. Most of the existing methods need manual corrections, which is time-consuming. We present a fully automatic segmentation technique for the extraction of the right ventricle, left ventricular endocardium and epicardium from MDCT images. The method consists of a 3D level set surface evolution approach coupled to a new stopping function based on a multiscale directional second derivative Gaussian filter, which is able to stop propagation precisely on the real boundary of the structures of interest. We validated the segmentation method on 18 MDCT volumes from healthy and pathologic subjects using manual segmentation performed by a team of expert radiologists as the gold standard. Segmentation errors were assessed for each structure, resulting in a surface-to-surface mean error below 0.5 mm and a percentage of surface distances with errors less than 1 mm above 80%. Moreover, in comparison to other segmentation approaches proposed in previous work, our method presented improved accuracy (the percentage of surface distances with errors less than 1 mm increased by 8-20% for all structures). The obtained results suggest that our approach is accurate and effective for the segmentation of the ventricular cavities and myocardium from MDCT images.

  7. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient-based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient-based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile threshold, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, which was more than twice as fast as the multiscale gradient-based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
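
    The marker-based idea (seed confident background and foreground regions, then let the watershed fill the rest along the gradient image) can be sketched with scikit-image; the threshold values here are illustrative and not those of the published method.

        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_background(xray, low=30, high=120):
            """Return a background mask for a grayscale X-ray image."""
            gradient = sobel(xray.astype(float))       # gradient magnitude image
            markers = np.zeros(xray.shape, dtype=np.int32)
            markers[xray < low] = 1                    # confident background seeds
            markers[xray > high] = 2                   # confident foreground (anatomy) seeds
            labels = watershed(gradient, markers)      # flood from markers along the gradient
            return labels == 1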

  8. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  9. Lipid Extraction Techniques for Stable Isotope Analysis and Ecological Assays.

    PubMed

    Elliott, Kyle H; Roth, James D; Crook, Kevin

    2017-01-01

    Lipid extraction is an important component of many ecological and ecotoxicological measurements. For instance, percent lipid is often used as a measure of body condition, under the assumption that individuals with higher lipid reserves are healthier. Likewise, lipids are depleted in 13C compared with protein, and it is consequently routine to remove lipids prior to measuring carbon isotopes in ecological studies so that variation in lipid content does not obscure variation in diet. We provide detailed methods for two different protocols for lipid extraction: a Soxhlet apparatus and manual distillation. We also provide methods for polar and nonpolar solvents. Neutral (nonpolar) solvents remove some lipids but few non-lipid compounds, whereas polar solvents remove most lipids but also many non-lipid compounds. We discuss each of the methods and provide guidelines for best practices. We recommend that, for stable isotope analysis, researchers test for a relationship between the change in carbon stable isotope ratio and the amount of lipid extracted to see if the degree of extraction has an impact on isotope ratios. Stable isotope analysis is widely used by ecologists, and we provide a detailed methodology that minimizes known biases.

  10. Statistical Validation of Automatic Methods for Hippocampus Segmentation in MR Images of Epileptic Patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad R.; Pompili, Dario; Soltanian-Zadeh, Hamid

    2015-01-01

    Hippocampus segmentation is a key step in the evaluation of mesial Temporal Lobe Epilepsy (mTLE) by MR images. Several automated segmentation methods have been introduced for medical image segmentation. Because of multiple edges, missing boundaries, and shape changes along its longitudinal axis, manual outlining still remains the benchmark for hippocampus segmentation, which, however, is impractical for large datasets due to time constraints. In this study, four automatic methods, namely FreeSurfer, Hammer, Automatic Brain Structure Segmentation (ABSS), and LocalInfo segmentation, are evaluated to find the most accurate and applicable method, that is, the one that most closely resembles the manual benchmark for the hippocampus. Results from these four methods are compared against those obtained using manual segmentation for T1-weighted images of 157 symptomatic mTLE patients. For performance evaluation of the automatic segmentations, the Dice coefficient, Hausdorff distance, precision, and root mean square (RMS) distance are extracted and compared. Among these four automated methods, ABSS generates the most accurate results, and its reproducibility is the most similar to expert manual outlining according to statistical validation. Considering p-values < 0.05, the performance measurements for ABSS reveal that Dice is 4%, 13%, and 17% higher, Hausdorff is 23%, 87%, and 70% lower, precision is 5%, -5%, and 12% higher, and RMS is 19%, 62%, and 65% lower compared to LocalInfo, FreeSurfer, and Hammer, respectively. PMID:25571043

  11. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction by using Independent Component Analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between the registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.
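
    The core idea, using ICA to build a low-rank, time-varying reference series that mimics the contrast-induced intensity changes, can be sketched with scikit-learn's FastICA. This is a simplified analogue of the published pipeline; the component count and all names are assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        def ica_reference(frames, n_components=3):
            """frames: array (n_frames, height, width) of perfusion images."""
            n, h, w = frames.shape
            X = frames.reshape(n, h * w)                    # one row per time frame
            ica = FastICA(n_components=n_components, random_state=0)
            sources = ica.fit_transform(X)                  # component time courses
            maps = ica.mixing_                              # spatial maps, (h*w, n_components)
            reference = sources @ maps.T + ica.mean_        # low-rank reference series
            return reference.reshape(n, h, w)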

  12. Validation of an automated solid-phase extraction method for the analysis of 23 opioids, cocaine, and metabolites in urine with ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ramírez Fernández, María del Mar; Van Durme, Filip; Wille, Sarah M R; di Fazio, Vincent; Kummer, Natalie; Samyn, Nele

    2014-06-01

    The aim of this work was to automate a sample preparation procedure extracting morphine, hydromorphone, oxymorphone, norcodeine, codeine, dihydrocodeine, oxycodone, 6-monoacetyl-morphine, hydrocodone, ethylmorphine, benzoylecgonine, cocaine, cocaethylene, tramadol, meperidine, pentazocine, fentanyl, norfentanyl, buprenorphine, norbuprenorphine, propoxyphene, methadone and 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine from urine samples. Samples were extracted by solid-phase extraction (SPE) with cation exchange cartridges using a TECAN Freedom Evo 100 base robotic system, including a hydrolysis step prior to extraction when required. Block modules were carefully selected in order to use the same consumable material as in the manual procedures, to reduce cost and/or manual sample transfers. Moreover, the present configuration included pressure-monitored pipetting, which increases pipetting accuracy and detects sampling errors. The compounds were then separated in a chromatographic run of 9 min using a BEH Phenyl analytical column on an ultra-performance liquid chromatography-tandem mass spectrometry system. Optimization of the SPE was performed with different wash conditions and elution solvents. Intra- and inter-day relative standard deviations (RSDs) were within ±15% and bias was within ±15% for most of the compounds. Recovery was >69% (RSD < 11%) and matrix effects ranged from 1 to 26% when compensated with the internal standard. The limits of quantification ranged from 3 to 25 ng/mL depending on the compound. No cross-contamination in the automated SPE system was observed. The extracted samples were stable for 72 h in the autosampler (4°C). This method was applied to authentic samples (from forensic and toxicology cases) and to proficiency testing schemes containing cocaine, heroin, buprenorphine and methadone, offering fast and reliable results. Automation resulted in improved precision and accuracy, and minimal operator intervention, leading to safer sample handling and less time-consuming procedures.

  13. Radar fall detection using principal component analysis

    NASA Astrophysics Data System (ADS)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
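
    The eigen-image idea can be sketched as PCA on vectorized time-frequency images of the observed motions followed by a simple classifier in the reduced space. The nearest-neighbour classifier and all names below are our assumptions, not the authors' exact design.

        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        def train_pca_fall_detector(train_images, train_labels, n_components=10):
            """train_images: (n_samples, height, width) time-frequency images; labels: 1 = fall."""
            X = train_images.reshape(len(train_images), -1)
            pca = PCA(n_components=n_components)
            Z = pca.fit_transform(X)                         # projection onto eigen images
            clf = KNeighborsClassifier(n_neighbors=3).fit(Z, train_labels)
            return pca, clf

        def classify_motion(pca, clf, image):
            return clf.predict(pca.transform(image.reshape(1, -1)))[0]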

  14. Comparison of methodologies in determining bone marrow fat percentage under different environmental conditions.

    PubMed

    Murden, David; Hunnam, Jaimie; De Groef, Bert; Rawlin, Grant; McCowan, Christina

    2017-01-01

    The use of bone marrow fat percentage has been recommended in assessing body condition at the time of death in wild and domestic ruminants, but few studies have looked at the effects of time and exposure on animal bone marrow. We investigated the utility of bone marrow fat extraction as a tool for establishing antemortem body condition in postmortem specimens from sheep and cattle, particularly after exposure to high heat, and compared different techniques of fat extraction for this purpose. Femora were collected from healthy and "skinny" sheep and cattle. The bones were either frozen or subjected to 40°C heat; heated bones were either wrapped in plastic to minimize desiccation or were left unwrapped. Marrow fat percentage was determined at different time intervals by oven-drying, or by solvent extraction using hexane in manual equipment or a Soxhlet apparatus. Extraction was performed, where possible, on both wet and dried tissue. Multiple samples were tested from each bone. Bone marrow fat analysis using a manual, hexane-based extraction technique was found to be a moderately sensitive method of assessing antemortem body condition of cattle up to 6 d after death. Multiple replicates should be analyzed where possible. Samples from "skinny" sheep showed a different response to heat from those of "healthy" sheep; "skinny" samples were so reduced in quantity by day 6 (the first sampling day) that no individual testing could be performed. Further work is required to understand the response of sheep marrow.

  15. [Caesarean section with vacuum extraction of the head].

    PubMed

    Dimitrov, A; Pavlova, E; Krŭsteva, K; Nikolov, A

    2008-01-01

    The aim of the study is to investigate the benefits and the limits of using a soft-cup vacuum extractor on the fetal scalp during caesarean section. The prospective study includes 19 cases of caesarean section (group A) with vacuum-assisted delivery using a soft-cup vacuum extractor on the fetal scalp (diameter 6 cm) and 25 cases (group B) of caesarean section with the usual manual extraction of the head assisted by fundal compression. All of the patients had undergone a planned caesarean section at term in the absence of uterine activity and with preserved amniotic membranes. Our results do not show differences in the Apgar score at the first and fifth minute between the newborns of the two groups. The duration of the scalp traction was significantly shorter (30 +/- 4 sec) in comparison to the classical manual extraction (53 +/- 21 sec). The mean duration was 10 sec for applying the vacuum cup and 25 sec for traction. The total blood loss and the total duration of the caesarean section were lower than in the control group. The applied traction with the vacuum cup was sufficient for head extraction, and there was no need for additional fundal compression. In conclusion, we consider that extraction of the fetal head in a high position during caesarean section with a vacuum extractor is an easy, nontraumatic and rapid method which can obviate the need for rough and prolonged fundal compression and its consequences.

  16. TRAFIC: fiber tract classification using deep learning

    NASA Astrophysics Data System (ADS)

    Ngattai Lam, Prince D.; Belhomme, Gaetan; Ferrall, Jessica; Patterson, Billie; Styner, Martin; Prieto, Juan C.

    2018-03-01

    We present TRAFIC, a fully automated tool for the labeling and classification of brain fiber tracts. TRAFIC classifies new fibers using a neural network trained using shape features computed from previously traced and manually corrected fiber tracts. It is independent from a DTI Atlas as it is applied to already traced fibers. This work is motivated by medical applications where the process of extracting fibers from a DTI atlas, or classifying fibers manually is time consuming and requires knowledge about brain anatomy. With this new approach we were able to classify traced fiber tracts obtaining encouraging results. In this report we will present in detail the methods used and the results achieved with our approach.

  17. The paradox of sham therapy and placebo effect in osteopathy: A systematic review.

    PubMed

    Cerritelli, Francesco; Verzella, Marco; Cicchitti, Luca; D'Alessandro, Giandomenico; Vanacore, Nicola

    2016-08-01

    Placebo, defined as "false treatment," is a common gold-standard method to assess the validity of a therapy both in pharmacological trials and in manual medicine research, where placebo is also referred to as "sham therapy." In the medical literature, guidelines have been proposed on how to conduct robust placebo-controlled trials, but mainly in a drug-based scenario. In contrast, there are no precise guidelines on how to conduct a placebo-controlled trial in manual medicine (particularly osteopathy). The aim of the present systematic review was to report how and what type of sham methods, dosage, operator characteristics, and patient types were used in osteopathic clinical trials and, eventually, to assess sham clinical effectiveness. A systematic Cochrane-based review was conducted by analyzing the osteopathic trials that used manual and nonmanual placebo controls. Searches were conducted on 8 databases from journal inception to December 2015 using a pragmatic literature search approach. Two independent reviewers conducted the study selection and data extraction for each study. The risk of bias was evaluated according to the Cochrane methods. A total of 64 studies were eligible for analysis, comprising a total of 5024 participants. More than half (43 studies) used a manual placebo; 9 studies used a nonmanual placebo; and 12 studies used both manual and nonmanual placebo. The data showed a lack of reporting of sham therapy information across studies. Risk of bias analysis demonstrated a high risk of bias for allocation, blinding of personnel and participants, selective reporting, and other bias. To explore the clinical effects of the sham therapies used, a quantitative analysis was planned; however, due to the high heterogeneity of the sham approaches used, no further analyses were performed. The high heterogeneity of placebos used between studies, the lack of reported information on placebo methods, and the within-study variability between sham and real treatment procedures suggest prudence in reading and interpreting findings of manual osteopathic randomized controlled trials (RCTs). Efforts must be made to promote guidelines for designing the most reliable placebo for manual RCTs as a means of increasing the internal validity and improving the external validity of findings.

  18. Light Microscopy at Maximal Precision

    NASA Astrophysics Data System (ADS)

    Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.

    2017-10-01

    Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
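
    The general idea of parameter extraction by fitting a generative image model can be sketched with a generic least-squares loop; this conveys only the concept and is in no way the authors' PERI implementation, whose optical model and optimizer are far more detailed.

        from scipy.optimize import least_squares

        def fit_image_model(observed, render, p0):
            """render(params) returns a synthetic image with the same shape as `observed`;
            least squares adjusts params (e.g., particle positions, radii, optics) to match it."""
            def residuals(params):
                return (render(params) - observed).ravel()
            return least_squares(residuals, p0).x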

  19. Efficient isolation method for high-quality genomic DNA from cicada exuviae.

    PubMed

    Nguyen, Hoa Quynh; Kim, Ye Inn; Borzée, Amaël; Jang, Yikweon

    2017-10-01

    In recent years, animal ethics issues have led researchers to explore nondestructive methods to access materials for genetic studies. Cicada exuviae are among those materials because they are cast skins that individuals left after molt and are easily collected. In this study, we aim to identify the most efficient extraction method to obtain high quantity and quality of DNA from cicada exuviae. We compared relative DNA yield and purity of six extraction protocols, including both manual protocols and available commercial kits, extracting from four different exoskeleton parts. Furthermore, amplification and sequencing of genomic DNA were evaluated in terms of availability of sequencing sequence at the expected genomic size. Both the choice of protocol and exuvia part significantly affected DNA yield and purity. Only samples that were extracted using the PowerSoil DNA Isolation kit generated gel bands of expected size as well as successful sequencing results. The failed attempts to extract DNA using other protocols could be partially explained by a low DNA yield from cicada exuviae and partly by contamination with humic acids that exist in the soil where cicada nymphs reside before emergence, as shown by spectroscopic measurements. Genomic DNA extracted from cicada exuviae could provide valuable information for species identification, allowing the investigation of genetic diversity across consecutive broods, or spatiotemporal variation among various populations. Consequently, we hope to provide a simple method to acquire pure genomic DNA applicable for multiple research purposes.

  20. E&V (Evaluation and Validation) Reference Manual, Version 1.0.

    DTIC Science & Technology

    1988-07-01

    General reference information is extracted from indexes and cross references (Chapter 4). The manual allows users to arrive at E&V techniques through many different paths and provides a means to extract useful information along the way. Comments may be sent electronically (preferred) to szymansk@ajpo.sei.cmu.edu or by regular mail to Mr. Raymond Szymanski, Wright-Patterson AFB, OH 45433-6543.

  1. Client-side Skype forensics: an overview

    NASA Astrophysics Data System (ADS)

    Meißner, Tina; Kröger, Knut; Creutzburg, Reiner

    2013-03-01

    IT security and computer forensics are important components of information technology. In the present study, a client-side Skype forensic analysis is performed. It is designed to explain which kinds of user data are stored on a computer and which tools allow the extraction of those data for a forensic investigation. Both approaches are described: a manual analysis and an analysis with (mainly) open source tools.

  2. Role of Noninvasive Hemoglobin Monitoring in Trauma

    DTIC Science & Technology

    2015-03-25

    Only fragments of this record survive. They describe a noninvasive, spectrophotometry-based monitoring technology (Radical-7® Pulse CO-Oximeter; Masimo Corp., Irvine, CA) that provides continuous hemoglobin measurement using a method similar to conventional pulse oximetry: transmitted light is captured by a photodiode receptor and analyzed to create an analog signal.

  3. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    NASA Astrophysics Data System (ADS)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.

  4. A continuous scale-space method for the automated placement of spot heights on maps

    NASA Astrophysics Data System (ADS)

    Rocca, Luigi; Jenny, Bernhard; Puppo, Enrico

    2017-12-01

    Spot heights and soundings explicitly indicate terrain elevation on cartographic maps. Cartographers have developed design principles for the manual selection, placement, labeling, and generalization of spot height locations, but these processes are work-intensive and expensive. Finding an algorithmic criterion that matches the cartographers' judgment in ranking the significance of features on a terrain is a difficult endeavor. This article proposes a method for the automated selection of spot height locations representing natural features such as peaks, saddles and depressions. The lifespan of critical points in a continuous scale-space model is employed as the main measure of the importance of features, and an algorithm and a data structure for its computation are described. We also introduce a method for the comparison of algorithmically computed spot height locations with manually produced reference compilations. The new method is compared with two known techniques from the literature. Results show spot height locations that are closer to reference spot heights produced manually by swisstopo cartographers, compared to previous techniques. The introduced method can be applied to elevation models for the creation of topographic and bathymetric maps. It also ranks the importance of extracted spot height locations, which allows the size of symbols and labels to be varied according to the significance of the represented features. The importance ranking could also be useful for adjusting spot height density in zoomable maps in real time.

  5. Texture segmentation by genetic programming.

    PubMed

    Song, Andy; Ciesielski, Vic

    2008-01-01

    This paper describes a texture segmentation method using genetic programming (GP), which is one of the most powerful evolutionary computation algorithms. By choosing an appropriate representation, texture classifiers can be evolved without computing texture features. Due to the absence of time-consuming feature extraction, the evolved classifiers enable the development of the proposed texture segmentation algorithm. This GP-based method can achieve a segmentation speed that is significantly higher than that of conventional methods. The method does not require a human expert to manually construct models for texture feature extraction. An analysis of the evolved classifiers shows that they are not arbitrary: certain textural regularities are captured by these classifiers to discriminate different textures. This study shows GP to be a feasible and powerful approach to texture classification and segmentation, which are generally considered complex vision tasks.

  6. Validation of a standardized extraction method for formalin-fixed paraffin-embedded tissue samples.

    PubMed

    Lagheden, Camilla; Eklund, Carina; Kleppe, Sara Nordqvist; Unger, Elizabeth R; Dillner, Joakim; Sundström, Karin

    2016-07-01

    Formalin-fixed paraffin-embedded (FFPE) samples can be DNA-extracted and used for human papillomavirus (HPV) genotyping. The xylene-based gold standard for extracting FFPE samples is laborious, suboptimal and involves health hazards for the personnel involved. The aim was to compare extraction with the standard xylene method to a xylene-free method used in an HPV LabNet Global Reference Laboratory at the Centers for Disease Control (CDC), based on a commercial method with an extra heating step. Fifty FFPE samples were randomly selected from a national audit of all cervical cancer cases diagnosed in Sweden during 10 years. For each case-block, a blank-block was sectioned as a control for contamination. For xylene extraction, the standard WHO Laboratory Manual protocol was used. For the CDC method, the manufacturer's protocol was followed except for an extra heating step, 120°C for 20 min. Samples were extracted and tested in parallel with β-globin real-time PCR, HPV16 real-time PCR and HPV typing using modified general primers (MGP)-PCR and Luminex assays. For a valid result, the blank-block had to be β-globin-negative in all tests and the case-block positive for β-globin. Overall, detection was improved with the heating method, and the proportion of HPV-positive samples increased from 70% to 86% (p=0.039). For all samples where HPV type concordance could be evaluated, there was 100% type concordance. A xylene-free and robust extraction method for HPV-DNA typing in FFPE material is currently in great demand. Our proposed standardized protocol appears to be generally useful. Copyright © 2016. Published by Elsevier B.V.

  7. The impact of manual rotation of the occiput posterior position on spontaneous vaginal delivery rate: study protocol for a randomized clinical trial (RMOS).

    PubMed

    Verhaeghe, C; Parot-Schinkel, E; Bouet, P E; Madzou, S; Biquard, F; Gillard, P; Descamps, P; Legendre, G

    2018-02-14

    The frequency of posterior presentations (occiput of the fetus towards the sacrum of the mother) in labor is approximately 20% and, of this, 5% remain posterior until the end of labor. These posterior presentations are associated with higher rates of cesarean section and instrumental delivery. Manual rotation of the fetus from a posterior to an anterior position has been proposed in order to reduce the rate of instrumental delivery. No randomized study has compared the efficacy of this procedure to expectant management. We therefore propose a monocentric, interventional, randomized, prospective study to show that manual rotation of the posterior position at full dilation yields higher vaginal delivery rates than expectant management. Ultrasound imaging of the presentation will be performed at full dilation on all singleton pregnancies for which a clinical suspicion of a posterior position was raised at more than 37 weeks' gestation (WG). In the event of an ultrasound confirming a posterior position, the patient will be randomized into an experimental group (manual rotation) or a control group (expectant management with no rotation). For a power of 90%, under the hypothesis that vaginal deliveries will increase by 20%, and allowing for 10% of patients lost to follow-up, 238 patients will need to be included in the study. The primary endpoint will be the rate of spontaneous vaginal deliveries (expected rate without rotation: 60%). The secondary endpoints will be the rate of fetal extractions (cesarean or instrumental) and the maternal and fetal morbidity and mortality rates. The intent-to-treat study will be conducted over 24 months. Recruitment started in February 2017. To achieve the primary objective, we will perform a test comparing the number of spontaneous vaginal deliveries in the two groups using Pearson's chi-squared test (provided that the conditions for using this test are satisfied in terms of numbers). In the event that this test cannot be performed, we will use Fisher's exact test. Given that the efficacy of manual rotation has not been proven with a high level of evidence, the practice of this technique is not systematically recommended by scholarly societies and is, therefore, rarely performed by obstetrician-gynecologists. If our hypothesis regarding the superiority of manual rotation is confirmed, our study will help change delivery practices in cases of posterior fetal position. An increase in the rates of vaginal delivery will help decrease the short- and long-term rates of morbidity and mortality following cesarean section. Manual rotation is a simple and effective method with a success rate of almost 90%. Several preliminary studies have shown that manual rotation is associated with reduced rates of fetal extraction and maternal complications: Shaffer has shown that the cesarean section rate is lower in patients for whom a manual rotation is performed successfully (2%), with a 9% rate of cesarean sections when manual rotation is performed versus 41% when it is not performed. Le Ray has shown that manual rotation significantly reduces the rate of delivery by fetal extraction (23.2% vs 38.7%, p < 0.01). However, manual rotation is not systematically performed due to the absence of proof of its efficacy beyond retrospective studies and quasi-experimental before/after studies. ClinicalTrials.gov identifier: NCT03009435. Registered on 30 December 2016.
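
    The planned comparison of the primary endpoint is a standard contingency-table test; a hedged SciPy sketch with placeholder counts (not trial data) is shown below.

        from scipy.stats import chi2_contingency, fisher_exact

        # hypothetical 2x2 table: rows = rotation / expectant groups,
        # columns = spontaneous vaginal delivery yes / no (placeholder counts)
        table = [[95, 24],
                 [72, 47]]

        chi2, p, dof, expected = chi2_contingency(table)
        if (expected < 5).any():            # chi-squared conditions not met
            _, p = fisher_exact(table)      # fall back to Fisher's exact test
        print(p)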

  8. A knowledge engineering approach to recognizing and extracting sequences of nucleic acids from scientific literature.

    PubMed

    García-Remesal, Miguel; Maojo, Victor; Crespo, José

    2010-01-01

    In this paper we present a knowledge engineering approach to automatically recognize and extract genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. The latter are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated using a test set of 211 full-text articles in PDF format containing 3134 genetic sequences, for which we achieved 87.76% precision and 97.70% recall. This method can facilitate different research tasks, including text mining, information extraction, and information retrieval research dealing with large collections of documents containing genetic sequences.
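
    As a rough illustration of the candidate-recognition step (the paper uses a finite state machine; the regular expression below is our own simplification), candidate nucleotide runs can be pulled from text and cleaned before rule-based filtering.

        import re

        # candidate DNA/RNA runs: at least 20 nucleotide letters, possibly interleaved
        # with whitespace and digits as in numbered sequence listings
        CANDIDATE = re.compile(r"(?:[ACGTUacgtu][\s\d]*){20,}")

        def candidate_sequences(text):
            """Return cleaned candidates; a downstream knowledge base would discard false positives."""
            hits = []
            for m in CANDIDATE.finditer(text):
                seq = re.sub(r"[\s\d]", "", m.group(0)).upper()
                if len(seq) >= 20:
                    hits.append(seq)
            return hits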

  9. A gradient-based approach for automated crest-line detection and analysis of sand dune patterns on planetary surfaces

    NASA Astrophysics Data System (ADS)

    Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.

    2015-12-01

    Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, corresponding to the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is then applied to the image based on the gradient orientations that agree with the dominant orientation. The contours of the resulting binary image can then be used to determine the dune crest lines, based on pixel intensity values. Once the crest lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared the dune detection algorithms with manually digitized dune crest lines, achieving true positive values of 0.57-0.99 and false positive values of 0.30-0.67, indicating that our approach is generally robust.
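
    The gradient-orientation step described above can be sketched in NumPy: find the dominant gradient orientation from a magnitude-weighted histogram, then keep pixels whose orientation agrees with it. The tolerance and magnitude cut-off below are illustrative assumptions.

        import numpy as np

        def dominant_gradient_mask(image, bins=36, tol_deg=15.0):
            """Binary mask of pixels aligned with the dominant gradient orientation;
            crest lines would then be traced along the contours of this mask."""
            gy, gx = np.gradient(image.astype(float))
            magnitude = np.hypot(gx, gy)
            orientation = np.degrees(np.arctan2(gy, gx))            # -180..180 degrees
            hist, edges = np.histogram(orientation, bins=bins, range=(-180, 180),
                                       weights=magnitude)
            peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
            diff = np.abs((orientation - peak + 180) % 360 - 180)   # angular difference to peak
            return (diff < tol_deg) & (magnitude > magnitude.mean())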

  10. Motor Fault Diagnosis Based on Short-time Fourier Transform and Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Li-Hua; Zhao, Xiao-Ping; Wu, Jia-Xin; Xie, Yang-Yang; Zhang, Yong-Hong

    2017-11-01

    With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, manual feature extraction has the disadvantages of low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of motors with different faults were collected. The raw signal was pretreated using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Then, features of the time-frequency map were adaptively extracted using a convolutional neural network (CNN). The effects of the pretreatment method and of the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small and that the batch size is the main factor affecting accuracy and training efficiency. By investigating feature visualization, it was shown that, in the case of big data, the extracted CNN features can represent complex mapping relationships between signal and health status and can also remove the need for the prior knowledge and engineering experience that traditional diagnosis methods require for feature extraction. This paper proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
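
    The STFT pretreatment step can be sketched with SciPy; the segment length and normalization below are assumptions, and the CNN itself is omitted.

        import numpy as np
        from scipy.signal import stft

        def vibration_to_tf_map(signal, fs, nperseg=256):
            """Convert a raw vibration signal into a normalized time-frequency magnitude map
            suitable as CNN input."""
            f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
            tf_map = np.abs(Z)
            return (tf_map - tf_map.min()) / (tf_map.max() - tf_map.min() + 1e-12)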

  11. Automated labelling of cancer textures in colorectal histopathology slides using quasi-supervised learning.

    PubMed

    Onder, Devrim; Sarioglu, Sulen; Karacali, Bilge

    2013-04-01

    Quasi-supervised learning is a statistical learning algorithm that contrasts two datasets by computing an estimate of the posterior probability of each sample in either dataset. This method has not previously been applied to histopathological images. The purpose of this study is to evaluate the performance of the method in identifying colorectal tissues with or without adenocarcinoma. Light microscopic digital images of histopathological sections were obtained from 30 colorectal radical surgery materials including adenocarcinoma and non-neoplastic regions. The texture features were extracted by using local histograms and co-occurrence matrices. The quasi-supervised learning algorithm operates on two datasets, one containing samples of normal tissues labelled only indirectly, and the other containing an unlabeled collection of samples of both normal and cancer tissues. As such, the algorithm eliminates the need for manually labelled samples of normal and cancer tissues for conventional supervised learning and significantly reduces expert intervention. Several texture feature vector datasets corresponding to different extraction parameters were tested within the proposed framework. The Independent Component Analysis dimensionality reduction approach was also identified as the one that improved the labelling performance in this series. The proposed method was applied to a dataset of 22,080 vectors whose dimensionality had been reduced from 132 to 119. Regions containing cancer tissue could be identified accurately, with false and true positive rates of up to 19% and 88% respectively, without using manually labelled ground-truth datasets, in a quasi-supervised strategy. The resulting labelling performance was compared to that of a conventional, powerful supervised classifier using manually labelled ground-truth data; the corresponding supervised classifier rates were 3.5% and 95% for the same case. In comparison with the benchmark classifier, the results in this series suggest that quasi-supervised image texture labelling may be a useful method in the analysis and classification of pathological slides, but further study is required to improve the results. Copyright © 2013 Elsevier Ltd. All rights reserved.
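
    The co-occurrence texture features mentioned above can be computed per image patch with scikit-image; the distances, angles, gray-level count, and property list below are illustrative choices, not the parameters used in the study.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch, levels=64):
            """Texture feature vector for one non-constant grayscale patch."""
            q = (patch.astype(float) / patch.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])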

  12. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    PubMed

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human error and assuring high reproducibility. The step towards automation, however, is not easy: the automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO® 150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch® Forensic DNA Purification Kit (Invitrogen), the PrepFiler™ Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™ 96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Application of manual assessment of oxygen radical absorbent capacity (ORAC) for use in high throughput assay of "total" antioxidant activity of drugs and natural products.

    PubMed

    Price, Joseph A; Sanny, Charles G; Shevlin, Dennis

    2006-01-01

    Antioxidants are of particular interest in a spectrum of diseases, and thus are an active area of drug discovery and design. It is important to make considered choices as to which assay chemistry will best serve particular investigations. We examined the manual oxygen radical absorbent capacity (ORAC) assay for "total" antioxidant activity, including a direct comparison to an alternative technique, the AOP-490 assay, using a panel of extracts from 12 phylogenetically unrelated algae. The AOP-490 assay was done per the manufacturer's protocol. The ORAC assay was done by hand, in 96-well plates, not by machine as had been previously published. Our ORAC calculations were done using an in-experiment antioxidant standard curve. Results were reported as equivalents of the antioxidant Trolox, which was used as a standard. With the AOP-490 kit (from Oxis Research), widespread activity was found, but not in all samples. When the ORAC method was used to assay aliquots of the same extracts, significant activity was detected in all samples, and the rank order of activity by the two methods was not identical. The data showed the wide occurrence of antioxidants in algae. The standard curve with the manual ORAC assay was linear in the range tested (0-100 mM Trolox) and had excellent reproducibility. The importance of the beneficial effects of antioxidants is currently an area of active interest for drug development, and thus it is of great value to have an assay that is robust and approximates "total" antioxidant activity in a high throughput format. The ORAC (oxygen radical absorbent capacity) method was adapted to microplates and an eight-channel pipette and was more effective in detecting "total" antioxidant activity than the AOP-490 assay. These results might vary with other types of samples, and would depend on the active agents measured, but they do suggest the practical value of the ORAC assay for any laboratory not ready for robotics but using manual 96-well format assays, and the utility of the ORAC assay for evaluating algal, and probably other, samples as well.
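
    The underlying ORAC calculation (net area under the fluorescence decay curve, converted to Trolox equivalents through the in-plate standard curve) can be sketched as follows; this is a generic illustration under our own assumptions, not the exact protocol of the study.

        import numpy as np

        def orac_trolox_equivalents(time_min, sample_curves, trolox_curves, trolox_conc, blank_curve):
            """Return Trolox-equivalent values for a list of sample fluorescence decay curves."""
            blank_auc = np.trapz(blank_curve, time_min)
            net = lambda curve: np.trapz(curve, time_min) - blank_auc   # net area under the curve
            std_auc = np.array([net(c) for c in trolox_curves])
            slope, intercept = np.polyfit(trolox_conc, std_auc, 1)      # linear standard curve
            return [(net(c) - intercept) / slope for c in sample_curves]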

  14. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.

    PubMed

    R, Elakkiya; K, Selvamani

    2017-09-22

    Subunit segmentation and modelling in medical sign language is an important topic in linguistic-oriented and vision-based Sign Language Recognition (SLR). Previous efforts derived functional subunits from the viewpoint of linguistic syllables, but implementing such syllable-based subunit extraction is not feasible with real-world computer vision techniques. In addition, present recognition systems are designed to detect signer-dependent actions only under restricted, laboratory conditions. This paper addresses these two issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel decomposition of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining manual and non-manual features to yield better results in the classification and recognition of signs. Signer independence is addressed by using a single web camera for different signer behaviour patterns and by cross-signer validation. Experimental results show that the proposed signer-independent, subunit-level modelling for sign language classification and recognition shows improvement, with some variation, compared with other existing works.

  15. Efficient content-based low-altitude images correlated network and strips reconstruction

    NASA Astrophysics Data System (ADS)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    Strips for further aerial triangulation in low-altitude photogrammetry are still widely reconstructed with manual intervention, which clearly falls short of fully automatic photogrammetric data processing. In this paper, we explore a content-based approach to strip reconstruction that requires neither manual intervention nor external information. Feature descriptors of local spatial patterns are extracted with SIFT and used to construct a vocabulary tree, in which the features are encoded with the TF-IDF numerical statistic to generate a new representation for each low-altitude image. An image correlation network is then reconstructed through similarity measurement, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and gradually growing adjacent images. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.
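
    As a rough illustration of the image-encoding step, the sketch below builds a bag-of-visual-words representation with TF-IDF weighting and a cosine-similarity matrix, assuming OpenCV and scikit-learn are available. A flat k-means vocabulary stands in for the hierarchical vocabulary tree used in the paper, and the image paths are placeholders.

    ```python
    # Minimal content-based image correlation sketch: SIFT descriptors, a flat
    # k-means visual vocabulary (stand-in for a vocabulary tree), TF-IDF
    # weighting, and a cosine-similarity matrix between images.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    def sift_descriptors(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(gray, None)
        return desc if desc is not None else np.empty((0, 128), np.float32)

    image_paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # placeholder paths
    all_desc = [sift_descriptors(p) for p in image_paths]

    # Build the visual vocabulary from all descriptors.
    vocab_size = 200
    kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
    kmeans.fit(np.vstack(all_desc))

    # Bag-of-visual-words histogram per image, then TF-IDF weighting.
    counts = np.zeros((len(image_paths), vocab_size))
    for i, desc in enumerate(all_desc):
        for w in kmeans.predict(desc):
            counts[i, w] += 1
    tfidf = TfidfTransformer().fit_transform(counts)

    # Pairwise similarity matrix used to build the image correlation network.
    similarity = cosine_similarity(tfidf)
    print(np.round(similarity, 3))
    ```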

  16. Microchip-based cell lysis and DNA extraction from sperm cells for application to forensic analysis.

    PubMed

    Bienvenue, Joan M; Duncalf, Natalie; Marchiarullo, Daniel; Ferrance, Jerome P; Landers, James P

    2006-03-01

    The current backlog of casework is among the most significant challenges facing crime laboratories at this time. While the development of next-generation microchip-based technology for expedited forensic casework analysis offers one solution to this problem, this will require the adaptation of manual, large-volume, benchtop chemistry to small volume microfluidic devices. Analysis of evidentiary materials from rape kits where semen or sperm cells are commonly found represents a unique set of challenges for on-chip cell lysis and DNA extraction that must be addressed for successful application. The work presented here details the development of a microdevice capable of DNA extraction directly from sperm cells for application to the analysis of sexual assault evidence. A variety of chemical lysing agents are assessed for inclusion in the extraction protocol and a method for DNA purification from sperm cells is described. Suitability of the extracted DNA for short tandem repeat (STR) analysis is assessed and genetic profiles shown. Finally, on-chip cell lysis methods are evaluated, with results from fluorescence visualization of cell rupture and DNA extraction from an integrated cell lysis and purification with subsequent STR amplification presented. A method for on-chip cell lysis and DNA purification is described, with considerations toward inclusion in an integrated microdevice capable of both differential cell sorting and DNA extraction. The results of this work demonstrate the feasibility of incorporating microchip-based cell lysis and DNA extraction into forensic casework analysis.

  17. Similarity estimation for reference image retrieval in mammograms using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Higuchi, Shunichi; Morita, Takako; Oiwa, Mikinao; Fujita, Hiroshi

    2018-02-01

    Periodic breast cancer screening with mammography is considered effective in decreasing breast cancer mortality. For screening programs to be successful, an intelligent image analytic system may support radiologists' efficient image interpretation. In our previous studies, we have investigated image retrieval schemes for diagnostic references of breast lesions on mammograms and ultrasound images. Using a machine learning method, reliable similarity measures that agree with radiologists' similarity were determined and relevant images could be retrieved. However, our previous method includes a feature extraction step in which hand-crafted features were determined from manual outlines of the masses. Obtaining manual outlines of masses is not practical in clinical practice, and such data would be operator-dependent. In this study, we investigated a similarity estimation scheme using a convolutional neural network (CNN) to skip this procedure and determine data-driven similarity scores. When the CNN was used as a feature extractor, with the extracted features fed to a conventional 3-layered neural network to determine similarity measures, the resulting similarity measures correlated well with the subjective ratings, and the precision of retrieving diagnostically relevant images was comparable with that of the conventional method using hand-crafted features. Using the CNN to determine the similarity measure directly gave comparable results as well. By optimizing the network parameters, results may be further improved. The proposed method is potentially useful for determining similarity measures without precise lesion outlines for the retrieval of similar mass images on mammograms.

  18. Manual small incision extracapsular cataract surgery in Australia.

    PubMed

    van Zyl, Lourens; Kahawita, Shyalle; Goggin, Michael

    2014-11-01

    To examine the results and describe the technique of manual small incision extracapsular cataract extraction in patients with advanced cataracts in urban Australia. A descriptive case series. Thirty-eight patients at three public hospitals, one tertiary and two secondary ophthalmic units, in urban Australia. Forty eyes with dense mature cataracts and hand-movement vision or worse underwent a planned manual small incision extracapsular cataract extraction instead of traditional phaco-emulsification. Outcome measures were postoperative visual acuity, surgically induced astigmatism and complications. Seventy-eight per cent of patients had an uncorrected visual acuity of 6/12 or better on the first postoperative day. Eighty-three per cent of patients had a distance-corrected visual acuity of 6/9 or better 3 months postoperatively. One case was complicated by a posterior capsule rupture. No cases of endophthalmitis were reported. The summated vector mean of the surgically induced astigmatism was 0.089 D at 93°. Manual small incision extracapsular cataract extraction is an efficacious cataract surgery technique with good visual outcomes and is a safe alternative to phaco-emulsification in suitable cases in a first-world setting. © 2014 Royal Australian and New Zealand College of Ophthalmologists.

  19. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems. PMID:24964954

  20. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.

    PubMed

    Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk

    2014-06-25

    Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.

  1. A time-series method for automated measurement of changes in mitotic and interphase duration from time-lapse movies.

    PubMed

    Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W

    2011-01-01

    Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.
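
    A minimal sketch of the two-feature idea is given below; it is not the authors' implementation. Mitotic frames are flagged when nuclear area drops and mean H2B intensity rises relative to the track median, and contiguous runs are converted to durations. The thresholds and the synthetic track are assumptions.

    ```python
    # Illustrative sketch: classify mitotic frames from nuclear area and mean
    # H2B intensity, then measure mitotic duration per cell track. During
    # mitosis the condensed chromatin typically gives a smaller, brighter object.
    import numpy as np

    def mitotic_frames(area, intensity, area_drop=0.6, intensity_rise=1.3):
        """Boolean mask of frames classified as mitotic for one cell track."""
        area = np.asarray(area, float)
        intensity = np.asarray(intensity, float)
        return (area < area_drop * np.median(area)) & \
               (intensity > intensity_rise * np.median(intensity))

    def mitotic_durations(mask, frame_interval_min):
        """Lengths (in minutes) of each contiguous run of mitotic frames."""
        durations, run = [], 0
        for m in mask:
            if m:
                run += 1
            elif run:
                durations.append(run * frame_interval_min)
                run = 0
        if run:
            durations.append(run * frame_interval_min)
        return durations

    # Example track: area shrinks and intensity rises between frames 10 and 14.
    area = np.r_[np.full(10, 400.0), np.full(5, 180.0), np.full(10, 420.0)]
    inten = np.r_[np.full(10, 100.0), np.full(5, 170.0), np.full(10, 95.0)]
    print(mitotic_durations(mitotic_frames(area, inten), frame_interval_min=10))
    ```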

  2. Hypertensive retinopathy identification through retinal fundus image using backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Amalia, C.; Rahmat, R. F.; Abdullah, D.; Napitupulu, D.; Setiawan, M. I.; Albra, W.; Nurdin; Andayani, U.

    2018-03-01

    Hypertension, or high blood pressure, can cause damage to the blood vessels in the retina of the eye, a condition called hypertensive retinopathy (HR). Hypertension causes swelling of the retinal blood vessels and a decrease in retinal performance. HR is usually detected by physical examination with an ophthalmoscope, which is still performed manually by an ophthalmologist; in such a manual procedure, it takes a doctor a long time to detect HR in a patient from the retinal fundus image. To overcome this problem, a method is needed to identify retinal fundus images automatically. In this research, a backpropagation neural network was used as the method for retinal fundus identification. The steps performed prior to identification were pre-processing (green channel extraction, contrast limited adaptive histogram equalization (CLAHE), morphological closing, background exclusion, thresholding and connected component analysis) and feature extraction using zoning. The results show that the proposed method is able to identify retinal fundus images with an accuracy of 95% at a maximum of 1500 epochs.
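
    The pre-processing and zoning steps listed above can be sketched with standard OpenCV operations, as below. This is an illustrative pipeline, not the paper's code; the CLAHE clip limit, kernel sizes, minimum component area and zoning grid are assumed values.

    ```python
    # Sketch of fundus pre-processing (green channel, CLAHE, morphological close,
    # background exclusion, thresholding, connected components) followed by
    # zoning feature extraction. Parameter values are illustrative assumptions.
    import cv2
    import numpy as np

    def fundus_features(path, grid=(8, 8)):
        bgr = cv2.imread(path)
        green = bgr[:, :, 1]                                   # green channel

        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(green)                          # CLAHE

        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        closed = cv2.morphologyEx(enhanced, cv2.MORPH_CLOSE, kernel)

        background = cv2.medianBlur(closed, 31)                # background estimate
        vessels = cv2.subtract(background, enhanced)           # background exclusion

        _, binary = cv2.threshold(vessels, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Connected component analysis: drop tiny speckles.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        clean = np.zeros_like(binary)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] >= 50:
                clean[labels == i] = 255

        # Zoning: fraction of foreground pixels in each cell of a grid.
        h, w = clean.shape
        gy, gx = grid
        features = []
        for r in range(gy):
            for c in range(gx):
                cell = clean[r * h // gy:(r + 1) * h // gy,
                             c * w // gx:(c + 1) * w // gx]
                features.append(float(np.count_nonzero(cell)) / cell.size)
        return np.array(features)   # input vector for the backpropagation network
    ```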

  3. Chassis unit insert tightening-extract device

    NASA Technical Reports Server (NTRS)

    Haerther, L. W.; Zimmerman, P. A. (Inventor)

    1964-01-01

    The invention relates to the insertion and extraction of rack mounted electronic units and in particular to a screw thread insert tightening and extract device, for chassis units having a collar which may be rotatably positioned manually for the insert tightening or extraction of various associated chassis units, as desired.

  4. Automatic definition of the oncologic EHR data elements from NCIT in OWL.

    PubMed

    Cuggia, Marc; Bourdé, Annabel; Turlin, Bruno; Vincendeau, Sebastien; Bertaud, Valerie; Bohec, Catherine; Duvauferrier, Régis

    2011-01-01

    Semantic interoperability based on ontologies allows systems to combine their information and process it automatically. The ability to extract meaningful fragments from an ontology is key to ontology re-use, and the construction of a subset helps to structure clinical data entries. The aim of this work is to provide a method for extracting a set of concepts for a specific domain, in order to help define the data elements of an oncologic EHR. A generic extraction algorithm was developed to extract from the NCIT, for a specific disease (i.e. prostate neoplasm), all the concepts of interest into a sub-ontology. We compared the extracted concepts to the manually encoded concepts contained in the multi-disciplinary meeting report form (MDMRF). We extracted two sub-ontologies: sub-ontology 1 using a single key concept and sub-ontology 2 using 5 additional keywords. The coverage of the MDMRF concepts by sub-ontology 2 was 51%. This low coverage is due to the lack of definition or misclassification of NCIT concepts. By providing a subset of concepts focused on a particular domain, this extraction method helps to optimize the binding process of data elements and to maintain and enrich a domain ontology.
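
    The core of the extraction step can be illustrated with a simple traversal of an is-a hierarchy. The sketch below uses a toy parent-to-children mapping in place of the NCIT class tree (reading the real NCIT in OWL would require an ontology library); the concept names are invented for illustration.

    ```python
    # Simplified sub-ontology extraction sketch: starting from key concepts,
    # collect every subsumed concept by traversing an is-a hierarchy.
    from collections import deque

    is_a_children = {
        "Prostate Neoplasm": ["Prostate Carcinoma", "Benign Prostate Neoplasm"],
        "Prostate Carcinoma": ["Prostate Adenocarcinoma"],
        "Benign Prostate Neoplasm": [],
        "Prostate Adenocarcinoma": [],
        "Gleason Score": [],
    }

    def extract_subontology(key_concepts):
        """Return every concept subsumed by (or equal to) the key concepts."""
        selected, queue = set(), deque(key_concepts)
        while queue:
            concept = queue.popleft()
            if concept in selected:
                continue
            selected.add(concept)
            queue.extend(is_a_children.get(concept, []))
        return selected

    # Sub-ontology 1: a single key concept; sub-ontology 2 would add keyword hits.
    print(sorted(extract_subontology(["Prostate Neoplasm"])))
    ```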

  5. Quantification of 31 illicit and medicinal drugs and metabolites in whole blood by fully automated solid-phase extraction and ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Bjørk, Marie Kjærgaard; Simonsen, Kirsten Wiese; Andersen, David Wederkinck; Dalsgaard, Petur Weihe; Sigurðardóttir, Stella Rögn; Linnet, Kristian; Rasmussen, Brian Schou

    2013-03-01

    An efficient method for analyzing illegal and medicinal drugs in whole blood using fully automated sample preparation and short ultra-high-performance liquid chromatography-tandem mass spectrometry (MS/MS) run time is presented. A selection of 31 drugs, including amphetamines, cocaine, opioids, and benzodiazepines, was used. In order to increase the efficiency of routine analysis, a robotic system based on automated liquid handling and capable of handling all unit operation for sample preparation was built on a Freedom Evo 200 platform with several add-ons from Tecan and third-party vendors. Solid-phase extraction was performed using Strata X-C plates. Extraction time for 96 samples was less than 3 h. Chromatography was performed using an ACQUITY UPLC system (Waters Corporation, Milford, USA). Analytes were separated on a 100 mm × 2.1 mm, 1.7 μm Acquity UPLC CSH C(18) column using a 6.5 min 0.1 % ammonia (25 %) in water/0.1 % ammonia (25 %) in methanol gradient and quantified by MS/MS (Waters Quattro Premier XE) in multiple-reaction monitoring mode. Full validation, including linearity, precision and trueness, matrix effect, ion suppression/enhancement of co-eluting analytes, recovery, and specificity, was performed. The method was employed successfully in the laboratory and used for routine analysis of forensic material. In combination with tetrahydrocannabinol analysis, the method covered 96 % of cases involving driving under the influence of drugs. The manual labor involved in preparing blood samples, solvents, etc., was reduced to a half an hour per batch. The automated sample preparation setup also minimized human exposure to hazardous materials, provided highly improved ergonomics, and eliminated manual pipetting.

  6. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In an attempt to overcome these limitations, we developed a software toolbox that uses 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volumes, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by registration of the 3D brain MRI to the blockface volume to correct the global deformations caused by brain extraction and fixation. Finally, the registered MRI and histological slices are warped to the corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual results, showing 94% overlap. The image warping technique was validated by calculating the target registration error (TRE); the results showed a registration accuracy of TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.
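
    The 2D landmark-based thin-plate-spline warping step can be sketched with SciPy's RBFInterpolator, which supports a thin-plate-spline kernel (SciPy 1.7 or later assumed). The landmark coordinates below are invented; in practice they would be corresponding points on the deformed slice and the matching blockface image.

    ```python
    # Hedged sketch of 2-D landmark-based thin-plate-spline warping with SciPy.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Corresponding landmarks: source (deformed slice) -> target (blockface).
    src = np.array([[10.0, 12.0], [80.0, 15.0], [78.0, 90.0], [12.0, 88.0], [45.0, 50.0]])
    dst = np.array([[11.0, 10.0], [83.0, 14.0], [80.0, 94.0], [10.0, 90.0], [46.0, 52.0]])

    # One thin-plate-spline interpolant per output coordinate.
    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")

    # Map any set of points from the slice into blockface space.
    query = np.array([[30.0, 40.0], [60.0, 70.0]])
    print(warp(query))

    # Target registration error on the landmarks themselves (should be ~0 here,
    # since the spline interpolates the landmarks exactly when smoothing is 0).
    tre = np.linalg.norm(warp(src) - dst, axis=1)
    print(tre.mean())
    ```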

  7. Automated sample preparation using membrane microtiter extraction for bioanalytical mass spectrometry.

    PubMed

    Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H

    1997-01-01

    The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min., about 30 times faster than manual solvent extraction or single cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.

  8. Development of an automatic cow body condition scoring using body shape signature and Fourier descriptors.

    PubMed

    Bercovich, A; Edan, Y; Alchanatis, V; Moallem, U; Parmet, Y; Honig, H; Maltz, E; Antler, A; Halachmi, I

    2013-01-01

    Body condition evaluation is a common tool to assess energy reserves of dairy cows and to estimate their fatness or thinness. This study presents a computer-vision tool that automatically estimates cow's body condition score. Top-view images of 151 cows were collected on an Israeli research dairy farm using a digital still camera located at the entrance to the milking parlor. The cow's tailhead area and its contour were segmented and extracted automatically. Two types of features of the tailhead contour were extracted: (1) the angles and distances between 5 anatomical points; and (2) the cow signature, which is a 1-dimensional vector of the Euclidean distances from each point in the normalized tailhead contour to the shape center. Two methods were applied to describe the cow's signature and to reduce its dimension: (1) partial least squares regression, and (2) Fourier descriptors of the cow signature. Three prediction models were compared with manual scores of an expert. Results indicate that (1) it is possible to automatically extract and predict body condition from color images without any manual interference; and (2) Fourier descriptors of the cow's signature result in improved performance (R(2)=0.77). Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
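
    The second feature type, the cow signature and its Fourier descriptors, can be sketched as follows. The contour is synthetic and the number of retained descriptors is an assumption; in the study the contour would be the automatically segmented tailhead outline.

    ```python
    # Illustrative shape signature (centroid-to-contour distances) and its
    # Fourier descriptors, computed on a synthetic contour.
    import numpy as np

    def shape_signature(contour_xy, n_samples=128):
        """Distance from the shape centre to equally spaced contour points."""
        centroid = contour_xy.mean(axis=0)
        d = np.linalg.norm(contour_xy - centroid, axis=1)
        # Resample to a fixed length so signatures are comparable between cows.
        idx = np.linspace(0, len(d) - 1, n_samples)
        return np.interp(idx, np.arange(len(d)), d)

    def fourier_descriptors(signature, n_keep=10):
        """Magnitudes of the first Fourier coefficients, made scale invariant."""
        spectrum = np.abs(np.fft.fft(signature))
        return spectrum[1:n_keep + 1] / spectrum[0]   # normalize by the DC term

    # Synthetic, slightly elliptical contour standing in for a tailhead outline.
    t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    contour = np.c_[120 * np.cos(t), 90 * np.sin(t)]
    fd = fourier_descriptors(shape_signature(contour))
    print(np.round(fd, 4))   # input features for the body condition score model
    ```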

  9. Interactive Cadastral Boundary Delineation from Uav Data

    NASA Astrophysics Data System (ADS)

    Crommelinck, S.; Höfle, B.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.

    2018-05-01

    Unmanned aerial vehicles (UAV) are evolving as an alternative tool to acquire land tenure data. UAVs can capture geospatial data at high quality and resolution in a cost-effective, transparent and flexible manner, from which visible land parcel boundaries, i.e., cadastral boundaries, are delineable. This delineation is currently not automated at all, even though a large portion of cadastral boundaries is marked by physical objects that could be retrieved automatically through image analysis methods. This study proposes (i) a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, and (ii) a procedure for a subsequent interactive delineation. Part (i) consists of two state-of-the-art computer vision methods, namely gPb contour detection and SLIC superpixels, as well as a classification step that assigns a cost to each outline according to local boundary knowledge. Part (ii) allows a user-guided delineation by calculating least-cost paths along the previously extracted and weighted lines, as sketched below. The approach is tested on visible road outlines in two UAV datasets from Germany. Results show that all roads can be delineated comprehensively. Compared to manual delineation, the number of clicks per 100 m is reduced by up to 86 %, while a similar localization quality is obtained. The approach shows promise for reducing the effort of the manual delineation currently employed for indirect (cadastral) surveying.
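
    The user-guided delineation step (part ii) can be sketched as a least-cost path computation over a cost raster in which pixels supported by the extracted boundary evidence are cheap to traverse. The example below uses scikit-image's route_through_array on a synthetic cost raster; the corridor and the two "clicks" are invented.

    ```python
    # Least-cost path sketch for interactive boundary delineation.
    import numpy as np
    from skimage.graph import route_through_array

    # Synthetic 100 x 100 cost raster: high cost everywhere except a cheap
    # horizontal corridor representing an extracted road outline.
    cost = np.full((100, 100), 10.0)
    cost[50, :] = 0.1

    start, end = (50, 5), (50, 95)     # two user clicks (row, col)
    path, total_cost = route_through_array(cost, start, end,
                                           fully_connected=True, geometric=True)
    path = np.array(path)
    print(len(path), round(total_cost, 2))   # delineated boundary vertices
    ```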

  10. An anaesthesia information management system as a tool for a quality assurance program: 10 years of experience.

    PubMed

    Motamed, Cyrus; Bourgain, Jean Louis

    2016-06-01

    Anaesthesia Information Management Systems (AIMS) generate large amounts of data, which might be useful for quality assurance programs. This study was designed to highlight the multiple contributions of our AIMS in extracting quality indicators over a period of 10 years. The study was conducted from 2002 to 2011. Two methods were used to extract anaesthesia indicators: manual extraction of individual files for monitoring neuromuscular relaxation, and structured query language (SQL) extraction for the other indicators, which were postoperative nausea and vomiting (PONV), pain and sedation scores, pain-related medications, and postoperative hypothermia. For each indicator, a program of information/meetings and adaptation/suggestions for operating room and PACU personnel was initiated to improve quality assurance, while data were extracted each year. The study included 77,573 patients. The mean overall completeness of data for the initial years ranged from 55 to 85% and was indicator-dependent; it then improved to 95% completeness for the last 5 years. The incidence of neuromuscular monitoring was initially 67% and then increased to 95% (P<0.05). The rate of pharmacological reversal remained around 53% throughout the study. Regarding the SQL data, an improvement in severe postoperative pain and PONV scores was observed throughout the study, while mild postoperative hypothermia remained a challenge despite efforts at improvement. The AIMS permitted the follow-up of certain indicators through manual sampling and of many more via SQL extraction, in a sustained and non-time-consuming way across the years. However, it requires competent and dedicated resources to handle the database. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  11. Automated drumlin shape and volume estimation using high resolution LiDAR imagery (Curvature Based Relief Separation): A test from the Wadena Drumlin Field, Minnesota

    NASA Astrophysics Data System (ADS)

    Yu, Peter; Eyles, Nick; Sookhan, Shane

    2015-10-01

    Resolving the origin(s) of drumlins and related megaridges in areas of megascale glacial lineations (MSGL) left by paleo-ice sheets is critical to understanding how ancient ice sheets interacted with their sediment beds. MSGL is now linked with fast-flowing ice streams but there is a broad range of erosional and depositional models. Further progress is reliant on constraining fluxes of subglacial sediment at the ice sheet base which in turn is dependent on morphological data such as landform shape and elongation and most importantly landform volume. Past practice in determining shape has employed a broad range of geomorphological methods from strictly visualisation techniques to more complex semi-automated and automated drumlin extraction methods. This paper reviews and builds on currently available visualisation, semi-automated and automated extraction methods and presents a new, Curvature Based Relief Separation (CBRS) technique; for drumlin mapping. This uses curvature analysis to generate a base level from which topography can be normalized and drumlin volume can be derived. This methodology is tested using a high resolution (3 m) LiDAR elevation dataset from the Wadena Drumlin Field, Minnesota, USA, which was constructed by the Wadena Lobe of the Laurentide Ice Sheet ca. 20,000 years ago and which as a whole contains 2000 drumlins across an area of 7500 km2. This analysis demonstrates that CBRS provides an objective and robust procedure for automated drumlin extraction. There is strong agreement with manually selected landforms but the method is also capable of resolving features that were not detectable manually thereby considerably expanding the known population of streamlined landforms. CBRS provides an effective automatic method for visualisation of large areas of the streamlined beds of former ice sheets and for modelling sediment fluxes below ice sheets.
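
    A strongly simplified residual-relief calculation conveys the volume-estimation idea, though it is not the published CBRS procedure: CBRS derives the base level from curvature analysis, whereas the sketch below simply uses a broad moving-average surface. The DEM values, the landform bump and all parameters are synthetic.

    ```python
    # Simplified residual-relief sketch: smooth base level, positive residuals,
    # and landform volume as (residual sum) * (cell area). Not the CBRS method.
    import numpy as np
    from scipy.ndimage import uniform_filter

    cell_size = 3.0                                   # LiDAR DEM resolution, metres
    rng = np.random.default_rng(0)
    dem = 300 + rng.normal(0, 0.2, (200, 200))        # synthetic flat terrain
    dem[80:120, 60:160] += 8.0                        # a crude "drumlin" bump

    # Base level approximated by a broad moving-average surface (assumption);
    # CBRS instead interpolates a base from low-curvature points.
    base = uniform_filter(dem, size=61)

    residual = dem - base
    positive_relief = np.clip(residual, 0, None)

    volume_m3 = positive_relief.sum() * cell_size ** 2
    print(round(volume_m3, 1))                        # landform volume estimate
    ```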

  12. Terminologies for text-mining; an experiment in the lipoprotein metabolism domain

    PubMed Central

    Alexopoulou, Dimitra; Wächter, Thomas; Pickersgill, Laura; Eyre, Cecilia; Schroeder, Michael

    2008-01-01

    Background The engineering of ontologies, especially with a view to a text-mining use, is still a new research field. There does not yet exist a well-defined theory and technology for ontology construction. Many of the ontology design steps remain manual and are based on personal experience and intuition. However, there exist a few efforts on automatic construction of ontologies in the form of extracted lists of terms and relations between them. Results We share experience acquired during the manual development of a lipoprotein metabolism ontology (LMO) to be used for text-mining. We compare the manually created ontology terms with the automatically derived terminology from four different automatic term recognition (ATR) methods. The top 50 predicted terms contain up to 89% relevant terms. For the top 1000 terms the best method still generates 51% relevant terms. In a corpus of 3066 documents 53% of LMO terms are contained and 38% can be generated with one of the methods. Conclusions Given high precision, automatic methods can help decrease development time and provide significant support for the identification of domain-specific vocabulary. The coverage of the domain vocabulary depends strongly on the underlying documents. Ontology development for text mining should be performed in a semi-automatic way; taking ATR results as input and following the guidelines we described. Availability The TFIDF term recognition is available as Web Service, described at PMID:18460175

  13. DBCG hypo trial validation of radiotherapy parameters from a national data bank versus manual reporting.

    PubMed

    Brink, Carsten; Lorenzen, Ebbe L; Krogh, Simon Long; Westberg, Jonas; Berg, Martin; Jensen, Ingelise; Thomsen, Mette Skovhus; Yates, Esben Svitzer; Offersen, Birgitte Vrou

    2018-01-01

    The current study evaluates the data quality achievable using a national data bank for reporting radiotherapy parameters, relative to the classical manual reporting method for selected parameters. The data comparison is based on 1522 Danish patients of the DBCG hypo trial with data stored in the Danish national radiotherapy data bank. In line with standard DBCG trial practice, selected parameters were also reported manually to the DBCG database. Categorical variables are compared using contingency tables, and the comparison of continuous parameters is presented in scatter plots. For categorical variables, 25 differences between the data bank and the manual values were located. Of these, 23 were related to mistakes in the manually reported value, whilst the remaining two were wrong classifications in the data bank. The wrong classifications in the data bank were related to a lack of dose information: the two patients had been treated with an electron boost based on a manual calculation, so the data were not exported to the data bank, and this was not detected prior to comparison with the manual data. For a few database fields in the manual data, the parameter definition of the specific field was ambiguous. This was not the case for the data bank, which extracts all data consistently. In terms of data quality, the data bank is superior to manually reported values. However, resources need to be allocated to checking the validity of the available data and to ensuring that all relevant data are present. The data bank contains more detailed information, and thus facilitates research related to the actual dose distribution in the patients.

  14. Automatic extraction of disease-specific features from Doppler images

    NASA Astrophysics Data System (ADS)

    Negahdar, Mohammadreza; Moradi, Mehdi; Parajuli, Nripesh; Syeda-Mahmood, Tanveer

    2017-03-01

    Flow Doppler imaging is widely used by clinicians to detect diseases of the valves. In particular, continuous wave (CW) Doppler mode scan is routinely done during echocardiography and shows Doppler signal traces over multiple heart cycles. Traditionally, echocardiographers have manually traced such velocity envelopes to extract measurements such as decay time and pressure gradient which are then matched to normal and abnormal values based on clinical guidelines. In this paper, we present a fully automatic approach to deriving these measurements for aortic stenosis retrospectively from echocardiography videos. Comparison of our method with measurements made by echocardiographers shows large agreement as well as identification of new cases missed by echocardiographers.

  15. Automated detection and localization of bowhead whale sounds in the presence of seismic airgun surveys.

    PubMed

    Thode, Aaron M; Kim, Katherine H; Blackwell, Susanna B; Greene, Charles R; Nations, Christopher S; McDonald, Trent L; Macrander, A Michael

    2012-05-01

    An automated procedure has been developed for detecting and localizing frequency-modulated bowhead whale sounds in the presence of seismic airgun surveys. The procedure was applied to four years of data, collected from over 30 directional autonomous recording packages deployed over a 280 km span of continental shelf in the Alaskan Beaufort Sea. The procedure has six sequential stages that begin by extracting 25-element feature vectors from spectrograms of potential call candidates. Two cascaded neural networks then classify some feature vectors as bowhead calls, and the procedure then matches calls between recorders to triangulate locations. To train the networks, manual analysts flagged 219 471 bowhead call examples from 2008 and 2009. Manual analyses were also used to identify 1.17 million transient signals that were not whale calls. The network output thresholds were adjusted to reject 20% of whale calls in the training data. Validation runs using 2007 and 2010 data found that the procedure missed 30%-40% of manually detected calls. Furthermore, 20%-40% of the sounds flagged as calls are not present in the manual analyses; however, these extra detections incorporate legitimate whale calls overlooked by human analysts. Both manual and automated methods produce similar spatial and temporal call distributions.

  16. Segmentation of hand radiographs using fast marching methods

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Novak, Carol L.

    2006-03-01

    Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
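
    The fast-marching idea, propagating a front whose speed depends on the image so that arrival times reveal the structure of interest, can be sketched with the scikit-fmm package (assumed to be installed). The synthetic image, seed position and speed mapping below are illustrative only and do not reproduce the paper's algorithm.

    ```python
    # Fast-marching sketch: arrival times from a seed, with front speed derived
    # from image intensity, using scikit-fmm (skfmm).
    import numpy as np
    import skfmm

    # Synthetic intensity image: a bright vertical "finger" on a dark background.
    img = np.zeros((128, 128))
    img[:, 54:74] = 1.0

    # Speed function: the front moves fast inside bright regions and slowly
    # elsewhere, so structures of interest are reached first.
    speed = 0.05 + 0.95 * (img / img.max())

    # phi: negative inside the seed region (one pixel near the fingertip),
    # positive elsewhere; the zero level set is the starting front.
    phi = np.ones_like(img)
    phi[5, 64] = -1.0

    arrival = skfmm.travel_time(phi, speed, dx=1.0)
    print(float(arrival[120, 64]))   # time to reach the far end along the finger
    ```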

  17. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, a VGG16 network is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and the image labels are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. For every 400 samples, 300 high-dimensional feature vectors are used for training the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are consistent with the perception of the human visual system.
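
    Under stated assumptions, the described pipeline can be sketched as follows: 4,096-dimensional features are taken from the fc2 layer of an ImageNet-pretrained VGG16 (Keras), and a small fully connected network from scikit-learn serves as the backpropagation-trained classifier. The image paths and blur-level labels are placeholders, and the hidden-layer size is an assumption.

    ```python
    # Sketch: VGG16 fc2 features (4096-dim) + a small BP-trained classifier.
    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from tensorflow.keras.models import Model
    from tensorflow.keras.preprocessing import image
    from sklearn.neural_network import MLPClassifier

    base = VGG16(weights="imagenet", include_top=True)
    extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

    def vgg16_features(path):
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return extractor.predict(x, verbose=0)[0]          # 4096-dim vector

    paths = ["blur0_0001.png", "blur1_0001.png", "blur2_0001.png"]  # placeholders
    labels = [0, 1, 2]                                     # blur level per image

    X = np.array([vgg16_features(p) for p in paths])
    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
    clf.fit(X, labels)                                     # BP network training
    print(clf.predict(X))                                  # blur-level prediction
    ```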

  18. Extracting rate changes in transcriptional regulation from MEDLINE abstracts.

    PubMed

    Liu, Wenting; Miao, Kui; Li, Guangxia; Chang, Kuiyu; Zheng, Jie; Rajapakse, Jagath C

    2014-01-01

    Time delays are important factors that are often neglected in gene regulatory network (GRN) inference models. Validating time delays from knowledge bases is a challenge since the vast majority of biological databases do not record temporal information of gene regulations. Biological knowledge and facts on gene regulations are typically extracted from bio-literature with specialized methods that depend on the regulation task. In this paper, we mine evidences for time delays related to the transcriptional regulation of yeast from the PubMed abstracts. Since the vast majority of abstracts lack quantitative time information, we can only collect qualitative evidences of time delays. Specifically, the speed-up or delay in transcriptional regulation rate can provide evidences for time delays (shorter or longer) in GRN. Thus, we focus on deriving events related to rate changes in transcriptional regulation. A corpus of yeast regulation related abstracts was manually labeled with such events. In order to capture these events automatically, we create an ontology of sub-processes that are likely to result in transcription rate changes by combining textual patterns and biological knowledge. We also propose effective feature extraction methods based on the created ontology to identify the direct evidences with specific details of these events. Our ontologies outperform existing state-of-the-art gene regulation ontologies in the automatic rule learning method applied to our corpus. The proposed deterministic ontology rule-based method can achieve comparable performance to the automatic rule learning method based on decision trees. This demonstrates the effectiveness of our ontology in identifying rate-changing events. We also tested the effectiveness of the proposed feature mining methods on detecting direct evidence of events. Experimental results show that the machine learning method on these features achieves an F1-score of 71.43%. The manually labeled corpus of events relating to rate changes in transcriptional regulation for yeast is available in https://sites.google.com/site/wentingntu/data. The created ontologies summarized both biological causes of rate changes in transcriptional regulation and corresponding positive and negative textual patterns from the corpus. They are demonstrated to be effective in identifying rate-changing events, which shows the benefits of combining textual patterns and biological knowledge on extracting complex biological events.

  19. Iterative variational mode decomposition based automated detection of glaucoma using fundus images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Kanhangad, Vivek; Bhandary, Sulatha V; Acharya, U Rajendra

    2017-09-01

    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for an automated diagnosis of glaucoma using digital fundus images. Variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features namely, Kapoor entropy, Renyi entropy, Yager entropy, and fractal dimensions are extracted from VMD components. ReliefF algorithm is used to select the discriminatory features and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid the ophthalmologists in confirming their manual reading of classes (glaucoma or normal) using fundus images. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Two Automated Techniques for Carotid Lumen Diameter Measurement: Regional versus Boundary Approaches.

    PubMed

    Araki, Tadashi; Kumar, P Krishna; Suri, Harman S; Ikeda, Nobutaka; Gupta, Ajay; Saba, Luca; Rajan, Jeny; Lavra, Francesco; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Suri, Jasjit S

    2016-07-01

    The degree of stenosis in the carotid artery can be predicted using automated carotid lumen diameter (LD) measured from B-mode ultrasound images. Systolic velocity-based methods for measurement of LD are subjective. With the advancement of high resolution imaging, image-based methods have started to emerge. However, they require robust image analysis for accurate LD measurement. This paper presents two different algorithms for automated segmentation of the lumen borders in carotid ultrasound images. Both algorithms are modeled as a two stage process. Stage one consists of a global-based model using scale-space framework for the extraction of the region of interest. This stage is common to both algorithms. Stage two is modeled using a local-based strategy that extracts the lumen interfaces. At this stage, the algorithm-1 is modeled as a region-based strategy using a classification framework, whereas the algorithm-2 is modeled as a boundary-based approach that uses the level set framework. Two sets of databases (DB), Japan DB (JDB) (202 patients, 404 images) and Hong Kong DB (HKDB) (50 patients, 300 images) were used in this study. Two trained neuroradiologists performed manual LD tracings. The mean automated LD measured was 6.35 ± 0.95 mm for JDB and 6.20 ± 1.35 mm for HKDB. The precision-of-merit was: 97.4 % and 98.0 % w.r.t to two manual tracings for JDB and 99.7 % and 97.9 % w.r.t to two manual tracings for HKDB. Statistical tests such as ANOVA, Chi-Squared, T-test, and Mann-Whitney test were conducted to show the stability and reliability of the automated techniques.

  1. A comparative evaluation of the efficacy of manual, magnetostrictive and piezoelectric ultrasonic instruments - an in vitro profilometric and SEM study

    PubMed Central

    SINGH, Sumita; UPPOOR, Ashita; NAYAK, Dilip

    2012-01-01

    Objectives The debridement of diseased root surfaces is usually performed by mechanical scaling and root planing using manual and power-driven instruments. Many new designs of ultrasonic powered scaling tips have been developed, but their effectiveness compared to manual curettes has always been debatable. Thus, the objective of this in vitro study was to comparatively evaluate the efficacy of manual, magnetostrictive and piezoelectric ultrasonic instrumentation on periodontally involved extracted teeth using a profilometer and scanning electron microscopy (SEM). Material and Methods 30 periodontally involved extracted human teeth were divided into 3 groups. The teeth were instrumented with hand and ultrasonic instruments in a manner resembling clinical application. In Group A all teeth were scaled with a new universal hand curette (Hu Friedy Gracey After Five Vision curette; Hu Friedy, Chicago, USA). In Group B a Cavitron™ FSI-SLI™ ultrasonic device with focused spray slimline inserts (Dentsply International Inc., York, PA, USA) was used. In Group C teeth were scaled with an EMS piezoelectric ultrasonic device with prototype modified PS inserts. The surfaces were analyzed with a precision profilometer to measure surface roughness (Ra value in µm) before and after instrumentation. The samples were examined under SEM at magnifications ranging from 17x to 300x and 600x. Results The mean Ra values (µm) before and after instrumentation in the three groups A, B and C were tabulated. Statistical analysis showed no significant difference among the three experimental groups, although the percentage reduction in Ra values decreased progressively from group A to group C. Conclusion Within the limits of the present study, given that the manual, magnetostrictive and piezoelectric ultrasonic instruments produce the same surface roughness, it can be concluded that their efficacy for creating a biologically compatible surface on periodontally diseased teeth is similar. PMID:22437673

  2. A method for automatically extracting infectious disease-related primers and probes from the literature

    PubMed Central

    2010-01-01

    Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch. PMID:20682041
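
    Phase (2) of the pipeline, detecting candidate sequences, can be approximated with a regular expression (itself a compiled finite-state machine), as sketched below. The length bounds and 5'/3' decorations are assumptions; the actual system uses a set of dedicated FSM-based recognizers plus a rule-based refinement step.

    ```python
    # Simplified candidate primer/probe detection in free text using a regex
    # that matches uninterrupted runs of IUPAC nucleotide codes of plausible
    # primer length (15-35 symbols; bounds are assumptions).
    import re

    PRIMER_RE = re.compile(r"(?:5'-)?([ACGTURYSWKMBDHVN]{15,35})(?:-3')?")

    def candidate_sequences(text):
        """Return candidate primer/probe sequences found in a passage of text."""
        return [m.group(1) for m in PRIMER_RE.finditer(text.upper())]

    passage = ("The forward primer 5'-AGGTTCAGACCTGGAAACCT-3' and the probe "
               "CCTGAAGTACCGTTAGGCAAT were used for detection of the pathogen.")
    print(candidate_sequences(passage))
    ```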

  3. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  4. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and for DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies and define optimized manual and automated procedures for the extraction of high quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated to traditional phenol/chloroform extraction, a modified manual DNA IQ or automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  5. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273

  6. Classification of speech dysfluencies using LPC based parameterization techniques.

    PubMed

    Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali

    2012-06-01

    The goal of this paper is to discuss and compare three feature extraction methods for recognizing stuttered events: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC). Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for the analysis. The stuttered events were identified through manual segmentation and used for feature extraction. Two simple classifiers, namely k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used to test the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, the value of α in the first-order pre-emphasizer, and different prediction orders p are discussed. The speech dysfluency classification accuracy was found to improve when statistical normalization was applied before feature extraction. The experimental investigation showed that LPC, LPCC and WLPCC features can be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
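
    A sketch of LPC and LPC-derived cepstral (LPCC-style) feature extraction for a single speech frame is given below, assuming librosa is installed. The frame length, prediction order and pre-emphasis coefficient (α = 0.97) are common but assumed values, and the LPC-to-cepstrum recursion is the standard one rather than the authors' exact formulation.

    ```python
    # LPC and LPC-derived cepstral coefficients for one speech frame.
    import numpy as np
    import librosa

    def lpcc_from_frame(frame, order=12, alpha=0.97, n_ceps=12):
        # First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1].
        emphasized = np.append(frame[0], frame[1:] - alpha * frame[:-1])
        # librosa returns [1, a_1, ..., a_p]; keep the prediction coefficients.
        a = librosa.lpc(emphasized, order=order)[1:]
        # Standard LPC -> cepstrum recursion:
        # c_n = -a_n - sum_{k=1}^{n-1} (k/n) * c_k * a_{n-k}
        c = np.zeros(n_ceps)
        for n in range(1, n_ceps + 1):
            acc = -a[n - 1] if n <= order else 0.0
            for k in range(1, n):
                if n - k <= order:
                    acc -= (k / n) * c[k - 1] * a[n - k - 1]
            c[n - 1] = acc
        return c

    # 30 ms frame of a synthetic voiced-like signal at 16 kHz.
    sr = 16000
    t = np.arange(int(0.030 * sr)) / sr
    frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
    print(np.round(lpcc_from_frame(frame.astype(np.float32)), 4))
    ```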

  7. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports have been mainly rule-based or dictionary-based and therefore require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities from narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Automatic drawing for traffic marking with MMS LIDAR intensity

    NASA Astrophysics Data System (ADS)

    Takahashi, G.; Takeda, H.; Shimano, Y.

    2014-05-01

    Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework in this database. Road inventory mapping therefore has to be accurate and free of the variation introduced by individual human operators. Further, the large number of traffic markings that are periodically maintained and possibly changed requires an efficient method for updating spatial data. Currently, we apply manual photogrammetric drawing for mapping traffic markings. However, this method is not sufficiently productive, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings from MMS LIDAR data. The key idea is to extract lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note, however, that the method must process every traffic marking. In this paper, we discuss a highly accurate method, independent of the human operator, that applies the following steps: (1) Binarizing LIDAR points by intensity and extracting higher-intensity points; (2) Generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) Deleting arcs by length and generating outline polygons on the TIN; (4) Generating buffers from the outline polygons; (5) Extracting points within the buffers from the original LIDAR points; (6) Extracting local-intensity-change points along scan lines from the extracted points; (7) Extracting lines from the intensity-change points with a Hough transform; and (8) Connecting lines to generate the traffic marking mapping data automatically.
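
    Step (7) in isolation can be illustrated with OpenCV's probabilistic Hough transform applied to an occupancy image built from the higher-intensity returns of step (1); the grid cell size, intensity threshold and Hough parameters below are placeholders rather than values from the paper:

```python
import numpy as np
import cv2

def extract_marking_lines(points_xy, intensity, cell=0.05, thresh=200):
    # Keep high-intensity returns (step 1) and rasterize them onto a grid.
    pts = points_xy[intensity > thresh]
    ij = np.floor((pts - pts.min(axis=0)) / cell).astype(int)
    img = np.zeros(ij.max(axis=0) + 1, dtype=np.uint8)
    img[ij[:, 0], ij[:, 1]] = 255
    # Fit line segments with a probabilistic Hough transform (step 7).
    segs = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=5)
    return [] if segs is None else segs.reshape(-1, 4)  # (x1, y1, x2, y2) per segment
```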

  9. Job Language Performance Requirements for MOS 75C. Personnel Management Specialist. Reference Soldier’s Manual Dated 22 May 1979.

    DTIC Science & Technology

    1979-05-22

    STANDARDS-1963-A • 4 OCT 1982 JOB LANGUAGE PERFORMANCE REQUIREMENTS FOR 75C PERSONNEL MANAGEMENT SPECIALIST REFERENCE SOLDIER'S MANUAL DATED 22 May 1979...task by task listing of the vocabulary extracted from the Soldier's Manual...Appendix seven contains the machine-generated vocabulary for this...Observation Form and an analysis of language structures in the Soldier's Manual for this MOS. The Observation Form (Appendix 4) was used to record actual

  10. DFLOW USER'S MANUAL

    EPA Science Inventory

    DFLOW is a computer program for estimating design stream flows for use in water quality studies. The manual describes the use of the program on both the EPA's IBM mainframe system and on a personal computer (PC). The mainframe version of DFLOW can extract a river's daily flow rec...

  11. Automatic bone outer contour extraction from B-modes ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method for capturing the internal structures of the human body. However, bone segmentation in US images is still challenging because the images are strongly affected by speckle noise and have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step toward three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the image using a first-phase-feature search. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole filling step then uses the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are performed on cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that the proposed method produces excellent results, with an average MSE before and after hole filling of 0.65.
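
    The quadratic-fit hole filling can be sketched with NumPy's polyfit; the column/row arrays, the NaN convention for columns where detection failed, and the function name are illustrative assumptions:

```python
import numpy as np

def fill_contour_gaps(cols, rows):
    # Fit row = a*col^2 + b*col + c to the detected boundary pixels (rows may
    # contain NaN where the first-phase search failed), then fill the gaps.
    ok = ~np.isnan(rows)
    a, b, c = np.polyfit(cols[ok], rows[ok], deg=2)
    filled = rows.copy()
    filled[~ok] = a * cols[~ok] ** 2 + b * cols[~ok] + c
    return filled
```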

  12. Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.

    PubMed

    Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W

    2017-08-01

    Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques use simple statistics and/or reject anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and amount to simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. The features thus found are then used for anomaly detection by applying quartile thresholding, which rejects a complete breath if any of its features is out of range. The thresholds are determined by both saliency and performance metrics rather than by the qualitative assumptions of previous works. Feature ranking indicates that our new landmark features are among the highest-scoring candidates regardless of age and across saliency criteria. F1-scores, receiver operating characteristic analysis, and the variability of the mean resistance metric show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal procedure is objective and comparable to the manual method. This work is a critical step toward automating forced oscillation technique quality control.
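
    A generic sketch of the quartile-thresholding step (not the exact 1IQR-SU detector), assuming a per-breath feature matrix and a 1×IQR acceptance band:

```python
import numpy as np

def iqr_keep_mask(features, k=1.0):
    # features: shape (n_breaths, n_features). A breath is kept only if every
    # feature lies within [Q1 - k*IQR, Q3 + k*IQR] computed across breaths.
    q1, q3 = np.percentile(features, [25, 75], axis=0)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.all((features >= lo) & (features <= hi), axis=1)
```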

  13. Ice nucleating agents allow embryo freezing without manual seeding.

    PubMed

    Teixeira, Magda; Buff, Samuel; Desnos, Hugo; Loiseau, Céline; Bruyère, Pierre; Joly, Thierry; Commin, Loris

    2017-12-01

    Embryo slow freezing protocols include a nucleation induction step called manual seeding. This step is time consuming, manipulator dependent and hard to standardize. It requires access to the samples, which is not always possible within the configuration of systems such as differential scanning calorimeters or cryomicroscopes. Ice nucleation can be induced by other means, e.g., by the use of ice nucleating agents. Snomax is a commercial preparation of inactivated proteins extracted from Pseudomonas syringae. The aim of our study was to investigate whether Snomax can be an alternative to manual seeding in the slow freezing of mouse embryos. The influence of Snomax on the pH and osmolality of the freezing medium was evaluated. In vitro development (blastocyst formation and hatching rates) was assessed for fresh embryos exposed to Snomax and for embryos cryopreserved with and without Snomax. The mitochondrial activity of frozen-thawed blastocysts was assessed by JC-1 fluorescent staining. Snomax did not alter the physicochemical properties of the freezing medium and did not affect the development of fresh embryos. After cryopreservation, the substitution of manual seeding by the ice nucleating agent (INA) Snomax did not affect embryo development or embryo mitochondrial activity. In conclusion, Snomax seems to be an effective ice nucleating agent for the slow freezing of mouse embryos. Snomax can also be a valuable alternative to manual seeding in research protocols in which manual seeding cannot be performed (i.e., differential scanning calorimetry and cryomicroscopy). Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Vaccine Hesitancy in Discussion Forums: Computer-Assisted Argument Mining with Topic Models.

    PubMed

    Skeppstedt, Maria; Kerren, Andreas; Stede, Manfred

    2018-01-01

    Arguments used when vaccination is debated on Internet discussion forums might give us valuable insights into the reasons behind vaccine hesitancy. In this study, we applied automatic topic modelling to a collection of 943 discussion posts in which vaccination was debated, and six distinct discussion topics were detected by the algorithm. When manually coding the posts ranked as most typical of these six topics, a set of semantically coherent arguments was identified for each extracted topic. This indicates that topic modelling is a useful method for automatically identifying vaccine-related discussion topics and the debate posts in which they are discussed. This functionality could facilitate manual coding of salient arguments, and thereby form an important component of a system for computer-assisted coding of vaccine-related discussions.
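
    The topic-modelling step can be approximated with scikit-learn's LDA; six topics as in the study, but the vectorizer settings and the ranking of "most typical" posts by topic probability are our assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_model(posts, n_topics=6, n_top_words=10):
    vec = CountVectorizer(stop_words="english", min_df=2)
    X = vec.fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(X)          # per-post topic distribution
    vocab = vec.get_feature_names_out()        # get_feature_names() in older scikit-learn
    top_words = [[vocab[i] for i in comp.argsort()[::-1][:n_top_words]]
                 for comp in lda.components_]
    # The post with the highest probability for a topic is its "most typical" post.
    typical = doc_topics.argmax(axis=0)
    return top_words, doc_topics, typical
```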

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogunovic, Hrvoje; Pozo, Jose Maria; Villa-Uriol, Maria Cruz

    Purpose: To evaluate the suitability of an improved version of an automatic segmentation method based on geodesic active regions (GAR) for segmenting cerebral vasculature with aneurysms from 3D x-ray reconstruction angiography (3DRA) and time of flight magnetic resonance angiography (TOF-MRA) images available in the clinical routine. Methods: Three aspects of the GAR method have been improved: execution time, robustness to variability in imaging protocols, and robustness to variability in image spatial resolutions. The improved GAR was retrospectively evaluated on images from patients containing intracranial aneurysms in the area of the Circle of Willis and imaged with two modalities: 3DRA and TOF-MRA. Images were obtained from two clinical centers, each using different imaging equipment. Evaluation included qualitative and quantitative analyses of the segmentation results on 20 images from 10 patients. The gold standard was built from 660 cross-sections (33 per image) of vessels and aneurysms, manually measured by interventional neuroradiologists. GAR has also been compared to an interactive segmentation method: isointensity surface extraction (ISE). In addition, since patients had been imaged with the two modalities, we performed an intermodality agreement analysis with respect to both the manual measurements and each of the two segmentation methods. Results: Both GAR and ISE differed from the gold standard within acceptable limits compared to the imaging resolution. GAR (ISE) had an average accuracy of 0.20 (0.24) mm for 3DRA and 0.27 (0.30) mm for TOF-MRA, and had a repeatability of 0.05 (0.20) mm. Compared to ISE, GAR had a lower qualitative error in the vessel region and a lower quantitative error in the aneurysm region. The repeatability of GAR was superior to manual measurements and ISE. The intermodality agreement was similar between GAR and the manual measurements. Conclusions: The improved GAR method outperformed ISE qualitatively as well as quantitatively and is suitable for segmenting 3DRA and TOF-MRA images from clinical routine.

  16. [A method for rapid extracting three-dimensional root model of vivo tooth from cone beam computed tomography data based on the anatomical characteristics of periodontal ligament].

    PubMed

    Zhao, Y J; Wang, S W; Liu, Y; Wang, Y

    2017-02-18

    To explore a new method for rapidly extracting and rebuilding three-dimensional (3D) digital root models of teeth in vivo from cone beam computed tomography (CBCT) data based on the anatomical characteristics of the periodontal ligament, and to evaluate the extraction accuracy of the method. In the study, 15 extracted teeth (11 with a single root, 4 with double roots) were collected from an oral clinic and 3D digital root models of each tooth were obtained with a 3D dental scanner with a high accuracy of 0.02 mm, in STL format. CBCT data for each patient were acquired before tooth extraction, and DICOM data with a voxel size of 0.3 mm were imported into Mimics 18.0 software. Segmentation, morphology operations, Boolean operations and the Smart Expand function in Mimics were used to edit tooth, bone and periodontal ligament threshold masks, and the root threshold mask was acquired automatically after a series of mask operations. The 3D digital root models were finally exported in STL format. The 3D morphological deviation between the extracted root models and the corresponding in vivo root models was compared in Geomagic Studio 2012 software. The 3D size errors along the long axis, the bucco-lingual direction and the mesio-distal direction were also calculated. The average 3D morphological deviation for the 15 roots, calculated as a root mean square (RMS) value, was 0.22 mm; the average size errors in the mesio-distal direction, the bucco-lingual direction and the long axis were 0.46 mm, 0.36 mm and -0.68 mm, respectively. The average time of this new method for extracting a single root was about 2-3 min. It can meet the accuracy requirements of 3D root reconstruction for oral clinical use. This study established a new method for rapidly extracting 3D root models of teeth in vivo from CBCT data. It can simplify the traditional manual operation and improve the efficiency and automation of single root extraction. The strategy of this method for complete dentition extraction needs further research.

  17. Evaluating Unsupervised Methods to Size and Classify Suspended Particles Using Digital Holography

    NASA Astrophysics Data System (ADS)

    Davies, E. J.; Buscombe, D.; Graham, G.; Nimmo-Smith, A.

    2013-12-01

    The use of digital holography to image suspended particles in situ from submersible systems is in the ascendancy. Such systems allow visualization of in-focus particles without the depth-of-field issues associated with conventional imaging. The size and concentration of all particles, and of each individual particle, can be rapidly and automatically assessed, and the automated methods used to extract these quantities can be readily evaluated against manual measurements. Such evaluation is not possible for instruments based on optical and acoustic (back- or forward-) scattering, so-called 'sediment surrogate' methods, which are sensitive to the bulk quantities of all suspended particles in a sample volume and rely on mathematically inverting a measured signal to derive the property of interest. Depending on the intended application, the number of holograms required to elucidate a process could range from tens to millions, so manual particle extraction is not feasible for most data sets. This has created a pressing need among the growing community of holography users for accurate, automated processing whose output is comparable to more well-established in-situ sizing techniques such as laser diffraction. Here we discuss the computational considerations required to focus and segment individual particles from raw digital holograms, and then to size and classify these particles by type, all using unsupervised (automated) image processing. To do so, we draw upon imagery ranging from controlled laboratory conditions to near-shore coastal environments, using different holographic system designs, and covering a significant variety of particle types, sizes and shapes. We evaluate the success of these techniques and suggest directions for future developments.

  18. A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    PubMed

    Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy

    2016-12-01

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
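
    The simulated perturbations (erosion and dilation of the painted region with a 1.5 mm-radius sphere) can be sketched with SciPy's ndimage; the voxel spacing below is an assumed example, not the study's acquisition geometry:

```python
import numpy as np
from scipy import ndimage

def sphere(radius_mm, spacing):
    # Binary ball structuring element with a physical radius, given voxel spacing in mm.
    r = [int(np.ceil(radius_mm / s)) for s in spacing]
    grids = np.ogrid[tuple(slice(-n, n + 1) for n in r)]
    dist2 = sum((g * s) ** 2 for g, s in zip(grids, spacing))
    return dist2 <= radius_mm ** 2

def perturb_biopsy(mask, radius_mm=1.5, spacing=(1.25, 0.7, 0.7)):
    # Return dilated and eroded versions of the painted "digital biopsy" mask.
    se = sphere(radius_mm, spacing)
    return ndimage.binary_dilation(mask, se), ndimage.binary_erosion(mask, se)
```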

  19. Tropical Timber Identification using Backpropagation Neural Network

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each and every type of wood has different characteristics. Identifying the type of wood correctly is important, especially for industries that need to know the specific type of timber. However, identification requires expertise, and only a limited number of experts are available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and is prone to human error. To overcome these problems, a digital image based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
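
    A compact sketch of the GLCM feature extraction, decimal-scaling normalization and a backpropagation classifier, assuming scikit-image (graycomatrix/graycoprops, spelled greycomatrix in older releases) and scikit-learn's MLPClassifier as a stand-in for the paper's network design:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_img):
    # gray_img: 8-bit greyscale microscope image of the wood sample.
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def decimal_scale(X):
    # Decimal-scaling normalization: divide each feature by 10^j so that |x| <= 1.
    j = np.ceil(np.log10(np.abs(X).max(axis=0) + 1e-12))
    return X / 10.0 ** j

# A small backpropagation (MLP) network as the timber classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
```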

  20. Automated In Vivo Platform for the Discovery of Functional Food Treatments of Hypercholesterolemia

    PubMed Central

    Littleton, Robert M.; Haworth, Kevin J.; Tang, Hong; Setchell, Kenneth D. R.; Nelson, Sandra; Hove, Jay R.

    2013-01-01

    The zebrafish is becoming an increasingly popular model system for both automated drug discovery and investigating hypercholesterolemia. Here we combine these aspects and for the first time develop an automated high-content confocal assay for treatments of hypercholesterolemia. We also create two algorithms for automated analysis of cardiodynamic data acquired by high-speed confocal microscopy. The first algorithm computes cardiac parameters solely from the frequency-domain representation of cardiodynamic data while the second uses both frequency- and time-domain data. The combined approach resulted in smaller differences relative to manual measurements. The methods are implemented to test the ability of a methanolic extract of the hawthorn plant (Crataegus laevigata) to treat hypercholesterolemia and its peripheral cardiovascular effects. Results demonstrate the utility of these methods and suggest the extract has both antihypercholesterolemic and positive inotropic properties. PMID:23349685

  1. Automated in vivo platform for the discovery of functional food treatments of hypercholesterolemia.

    PubMed

    Littleton, Robert M; Haworth, Kevin J; Tang, Hong; Setchell, Kenneth D R; Nelson, Sandra; Hove, Jay R

    2013-01-01

    The zebrafish is becoming an increasingly popular model system for both automated drug discovery and investigating hypercholesterolemia. Here we combine these aspects and for the first time develop an automated high-content confocal assay for treatments of hypercholesterolemia. We also create two algorithms for automated analysis of cardiodynamic data acquired by high-speed confocal microscopy. The first algorithm computes cardiac parameters solely from the frequency-domain representation of cardiodynamic data while the second uses both frequency- and time-domain data. The combined approach resulted in smaller differences relative to manual measurements. The methods are implemented to test the ability of a methanolic extract of the hawthorn plant (Crataegus laevigata) to treat hypercholesterolemia and its peripheral cardiovascular effects. Results demonstrate the utility of these methods and suggest the extract has both antihypercholesterolemic and positive inotropic properties.

  2. CT colonography: automated measurement of colonic polyps compared with manual techniques--human in vitro study.

    PubMed

    Taylor, Stuart A; Slater, Andrew; Halligan, Steve; Honeyfield, Lesley; Roddie, Mary E; Demeshski, Jamshid; Amin, Hamdam; Burling, David

    2007-01-01

    To prospectively investigate the relative accuracy and reproducibility of manual and automated computer software measurements by using polyps of known size in a human colectomy specimen. Institutional review board approval was obtained for the study; written consent for use of the surgical specimen was obtained. A colectomy specimen containing 27 polyps from a 16-year-old male patient with familial adenomatous polyposis was insufflated, submerged in a container with solution, and scanned at four-section multi-detector row computed tomography (CT). A histopathologist measured the maximum dimension of all polyps in the opened specimen. Digital photographs and line drawings were produced to aid CT-histologic measurement correlation. A novice (radiographic technician) and an experienced (radiologist) observer independently estimated polyp diameter with three methods: manual two-dimensional (2D) and manual three-dimensional (3D) measurement with software calipers and automated measurement with software (automatic). Data were analyzed with paired t tests and Bland-Altman limits of agreement. Seven polyps (

  3. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) tape IJ grid pixel extraction processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACREG, extracts all pixels contained in a specified IJ grid section. The pixels, along with a header record, are stored in a disk file defined by the user. The program can extract up to 99 IJ grid sections.

  4. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Purpose: Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. Methods: A template database of 195 (81 males, 114 females; age range 32–67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Results: Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, −4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland–Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. Conclusions: The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study. PMID:26745947
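
    Two of the reported metrics, Dice overlap and the symmetric Hausdorff distance, can be computed directly from binary masks; a minimal NumPy/SciPy sketch (distances are in voxel units unless scaled by the image spacing):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    # Dice overlap between two binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    # Symmetric Hausdorff distance between the voxel coordinates of two masks.
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```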

  5. Locating articular cartilage in MR images

    NASA Astrophysics Data System (ADS)

    Folkesson, Jenny; Dam, Erik; Pettersen, Paola; Olsen, Ole F.; Nielsen, Mads; Christiansen, Claus

    2005-04-01

    Accurate computation of the thickness of the articular cartilage is of great importance when diagnosing and monitoring the progress of joint diseases such as osteoarthritis. A fully automated cartilage assessment method is preferable to methods requiring manual interaction in order to avoid inter- and intra-observer variability. As a first step in the cartilage assessment, we present an automatic method for locating articular cartilage in knee MRI using supervised learning. The next step will be to fit a variable shape model to the cartilage, initiated at the location found using the method presented in this paper. From the model, disease markers will be extracted for the quantitative evaluation of the cartilage. The cartilage is located using an ANN-classifier, where every voxel is classified as cartilage or non-cartilage based on prior knowledge of the cartilage structure. The classifier is tested using leave-one-out evaluation, and we found the average sensitivity and specificity to be 91.0% and 99.4%, respectively. The centers of mass calculated from voxels classified as cartilage are similar to the corresponding values calculated from manual segmentations, which confirms that this method can find a good initial position for a shape model.

  6. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    PubMed

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods, which require bias-corrected MRI, we present a high-order, L0-regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method is tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On the one hand, we compare the bias correction results with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the brain extraction results are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, using a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate the automatic processing of large-scale brain studies.

  7. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Jr., Edward J.; Parrish, Robert H.; Russell, James R.

    2008-06-24

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  8. Teleoperated robotic sorting system

    DOEpatents

    Roos, Charles E.; Sommer, Edward J.; Parrish, Robert H.; Russell, James R.

    2000-01-01

    A method and apparatus are disclosed for classifying materials utilizing a computerized touch sensitive screen or other computerized pointing device for operator identification and electronic marking of spatial coordinates of materials to be extracted. An operator positioned at a computerized touch sensitive screen views electronic images of the mixture of materials to be sorted as they are conveyed past a sensor array which transmits sequences of images of the mixture either directly or through a computer to the touch sensitive display screen. The operator manually "touches" objects displayed on the screen to be extracted from the mixture thereby registering the spatial coordinates of the objects within the computer. The computer then tracks the registered objects as they are conveyed and directs automated devices including mechanical means such as air jets, robotic arms, or other mechanical diverters to extract the registered objects.

  9. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  10. Automatic generation of investigator bibliographies for institutional research networking systems

    PubMed Central

    Johnson, Stephen B.; Bales, Michael E.; Dine, Daniel; Bakken, Suzanne; Albert, Paul J.; Weng, Chunhua

    2014-01-01

    Objective Publications are a key data source for investigator profiles and research networking systems. We developed ReCiter, an algorithm that automatically extracts bibliographies from PubMed using institutional information about the target investigators. Methods ReCiter executes a broad query against PubMed, groups the results into clusters that appear to constitute distinct author identities and selects the cluster that best matches the target investigator. Using information about investigators from one of our institutions, we compared ReCiter results to queries based on author name and institution and to citations extracted manually from the Scopus database. Five judges created a gold standard using citations of a random sample of 200 investigators. Results About half of the 10,471 potential investigators had no matching citations in PubMed, and about 45% had fewer than 70 citations. Interrater agreement (Fleiss’ kappa) for the gold standard was 0.81. Scopus achieved the best recall (sensitivity) of 0.81, while name-based queries had 0.78 and ReCiter had 0.69. ReCiter attained the best precision (positive predictive value) of 0.93 while Scopus had 0.85 and name-based queries had 0.31. Discussion ReCiter accesses the most current citation data, uses limited computational resources and minimizes manual entry by investigators. Generation of bibliographies using name-based queries will not yield high accuracy. Proprietary databases can perform well but require manual effort. Automated generation with higher recall is possible but requires additional knowledge about investigators. PMID:24694772

  11. Detecting peatland drains with Object Based Image Analysis and Geoeye-1 imagery.

    PubMed

    Connolly, J; Holden, N M

    2017-12-01

    Peatlands play an important role in the global carbon cycle. They provide important ecosystem services including carbon sequestration and storage. Drainage disturbs peatland ecosystem services. Mapping drains is difficult and expensive, and their spatial extent is, in many cases, unknown. An object based image analysis (OBIA) was performed on a very high resolution satellite image (Geoeye-1) to extract information about drain location and extent on a blanket peatland in Ireland. Two accuracy assessment methods, the error matrix and completeness, correctness and quality (CCQ), were used to assess the extracted data across the peatland and at several sub-sites. The cost of the OBIA method was compared with manual digitisation and field survey. The drain maps were also used to assess the costs relating to blocking drains vs. a business-as-usual scenario and to estimate the impact of each on carbon fluxes at the study site. The OBIA method performed well at almost all sites. Almost 500 km of drains were detected within the peatland. In the error matrix method, overall accuracy (OA) of detecting the drains was 94% and the kappa statistic was 0.66. The OA for all sub-areas, except one, was 95-97%. The CCQ values were 85%, 85% and 71%, respectively. The OBIA method was the most cost effective way to map peatland drains and was at least 55% cheaper than either field survey or manual digitisation. The extracted drain maps were used to constrain the study area CO2 flux, which was 19% smaller than the prescribed Peatland Code value for drained peatlands. The OBIA method used in this study showed that it is possible to accurately extract maps of fine scale peatland drains over large areas in a cost effective manner. The development of methods to map the spatial extent of drains is important as they play a critical role in peatland carbon dynamics. The objective of this study was to extract data on the spatial extent of drains on a blanket bog in the west of Ireland. The results show that information on drain extent and location can be extracted from high resolution imagery and mapped with a high degree of accuracy. Under Article 3.4 of the Kyoto Protocol, Annex 1 parties can account for greenhouse gas emission by sources and removals by sinks resulting from "wetlands drainage and rewetting". The ability to map the spatial extent, density and location of peatland drains means that Annex 1 parties can develop strategies for drain blocking to aid reduction of CO2 emissions, DOC runoff and water discoloration. This paper highlights some uncertainty around using one-size-fits-all emission factors for GHG in drained peatlands and re-wetting scenarios. However, the OBIA method is robust and accurate and could be used to assess the extent of drains in peatlands across the globe, aiding the refinement of peatland carbon dynamics.
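
    The completeness, correctness and quality (CCQ) measures can be sketched as a per-pixel comparison of the detected drain map against a reference map; published CCQ evaluations often use buffered, per-length variants, so treat this as an illustrative simplification:

```python
import numpy as np

def ccq(detected, reference):
    # Completeness = TP/(TP+FN), correctness = TP/(TP+FP), quality = TP/(TP+FP+FN).
    detected, reference = detected.astype(bool), reference.astype(bool)
    tp = np.logical_and(detected, reference).sum()
    fp = np.logical_and(detected, ~reference).sum()
    fn = np.logical_and(~detected, reference).sum()
    return tp / (tp + fn), tp / (tp + fp), tp / (tp + fp + fn)
```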

  13. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients.

    PubMed

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland-Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study.

  14. Comparative evaluation of in-house manual, and commercial semi-automated and automated DNA extraction platforms in the sample preparation of human stool specimens for a Salmonella enterica 5'-nuclease assay.

    PubMed

    Schuurman, Tim; de Boer, Richard; Patty, Rachèl; Kooistra-Smid, Mirjam; van Zwet, Anton

    2007-12-01

    In the present study, three methods (NucliSens miniMAG [bioMérieux], MagNA Pure DNA Isolation Kit III Bacteria/Fungi [Roche], and a silica-guanidiniumthiocyanate [Si-GuSCN-F] procedure) for extracting DNA from stool specimens were compared with regard to analytical performance (relative DNA recovery and downstream real-time PCR amplification of Salmonella enterica DNA), stability of the extracted DNA, hands-on time (HOT), total processing time (TPT), and costs. The Si-GuSCN-F procedure showed the highest analytical performance (relative recovery of 99%, S. enterica real-time PCR sensitivity of 91%) at the lowest cost per extraction (euro 4.28). However, this method did require the longest HOT (144 min) and subsequent TPT (176 min) when processing 24 extractions. MiniMAG and MagNA Pure extraction initially showed similar performance (relative recoveries of 57% and 52%, S. enterica real-time PCR sensitivity of 85%). However, when the differences in the observed Ct values after real-time PCR were taken into account, MagNA Pure resulted in a significant increase in Ct value compared to both miniMAG and Si-GuSCN-F (on average +1.26 and +1.43 cycles, respectively). With regard to inhibition, all methods showed relatively low inhibition rates (< 4%), with miniMAG providing the lowest rate (0.7%). Extracted DNA was stable for at least 1 year for all methods. HOT was lowest for MagNA Pure (60 min) and TPT was shortest for miniMAG (121 min). Costs, finally, were euro 4.28 for Si-GuSCN-F, euro 6.69 for MagNA Pure and euro 9.57 for miniMAG.

  15. Object-oriented classification of drumlins from digital elevation models

    NASA Astrophysics Data System (ADS)

    Saha, Kakoli

    Drumlins are common elements of glaciated landscapes that are easily identified by their distinct morphometric characteristics, including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper utilizes the unique morphometric characteristics of drumlins and investigates automated extraction of these landforms as objects from DEMs using Definiens Developer software (V.7), with the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA, was chosen as the study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data by automated recognition, using Definiens' Multiresolution Segmentation tool followed by rule-based classification. Subsequently, parameters such as length, width, length/width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed high agreement between the two methods, confirming that object-oriented classification can be used for mapping these landforms. The proposed method attempts to solve the problem by providing a generalized rule set for mass extraction of drumlins. To check its scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the larger area as it was for the smaller test areas.
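
    The automatically measured morphometric parameters (length, width, length/width ratio, perimeter, area) can be reproduced from a binary drumlin mask with scikit-image's regionprops; the 30 m pixel size matches the NED DEM, everything else is illustrative:

```python
from skimage.measure import label, regionprops

def drumlin_morphometrics(mask, pixel_size=30.0):
    # One record per extracted drumlin polygon; pixel_size in metres.
    records = []
    for r in regionprops(label(mask)):
        length = r.major_axis_length * pixel_size
        width = r.minor_axis_length * pixel_size
        records.append({"length_m": length,
                        "width_m": width,
                        "lw_ratio": length / width if width else float("nan"),
                        "orientation_rad": r.orientation,
                        "perimeter_m": r.perimeter * pixel_size,
                        "area_m2": r.area * pixel_size ** 2})
    return records
```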

  16. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound (IVUS) images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection using a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with a dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Annotated chemical patent corpus: a gold standard for text mining.

    PubMed

    Akhondi, Saber A; Klenner, Alexander G; Tyrchan, Christian; Manchala, Anil K; Boppana, Kiran; Lowe, Daniel; Zimmermann, Marc; Jagarlapudi, Sarma A R P; Sayle, Roger; Kors, Jan A; Muresan, Sorel

    2014-01-01

    Exploring the chemical and biological space covered by patent applications is crucial in early-stage medicinal chemistry activities. Patent analysis can provide understanding of compound prior art, novelty checking, validation of biological assays, and identification of new starting points for chemical exploration. Extracting chemical and biological entities from patents through manual extraction by expert curators can take a substantial amount of time and resources. Text mining methods can help to ease this process. To validate the performance of such methods, a manually annotated patent corpus is essential. In this study we have produced a large gold standard chemical patent corpus. We developed annotation guidelines and selected 200 full patents from the World Intellectual Property Organization, United States Patent and Trademark Office, and European Patent Office. The patents were pre-annotated automatically and made available to four independent annotator groups, each consisting of two to ten annotators. The annotators marked chemicals in different subclasses, diseases, targets, and modes of action. Spelling mistakes and spurious line breaks due to optical character recognition errors were also annotated. A subset of 47 patents was annotated by at least three annotator groups, from which harmonized annotations and inter-annotator agreement scores were derived. One group annotated the full set. The patent corpus includes 400,125 annotations for the full set and 36,537 annotations for the harmonized set. All patents and annotated entities are publicly available at www.biosemantics.org.

  18. Annotated Chemical Patent Corpus: A Gold Standard for Text Mining

    PubMed Central

    Akhondi, Saber A.; Klenner, Alexander G.; Tyrchan, Christian; Manchala, Anil K.; Boppana, Kiran; Lowe, Daniel; Zimmermann, Marc; Jagarlapudi, Sarma A. R. P.; Sayle, Roger; Kors, Jan A.; Muresan, Sorel

    2014-01-01

    Exploring the chemical and biological space covered by patent applications is crucial in early-stage medicinal chemistry activities. Patent analysis can provide understanding of compound prior art, novelty checking, validation of biological assays, and identification of new starting points for chemical exploration. Extracting chemical and biological entities from patents through manual extraction by expert curators can take a substantial amount of time and resources. Text mining methods can help to ease this process. To validate the performance of such methods, a manually annotated patent corpus is essential. In this study we have produced a large gold standard chemical patent corpus. We developed annotation guidelines and selected 200 full patents from the World Intellectual Property Organization, United States Patent and Trademark Office, and European Patent Office. The patents were pre-annotated automatically and made available to four independent annotator groups, each consisting of two to ten annotators. The annotators marked chemicals in different subclasses, diseases, targets, and modes of action. Spelling mistakes and spurious line breaks due to optical character recognition errors were also annotated. A subset of 47 patents was annotated by at least three annotator groups, from which harmonized annotations and inter-annotator agreement scores were derived. One group annotated the full set. The patent corpus includes 400,125 annotations for the full set and 36,537 annotations for the harmonized set. All patents and annotated entities are publicly available at www.biosemantics.org. PMID:25268232

  19. A computationally efficient method for incorporating spike waveform information into decoding algorithms.

    PubMed

    Ventura, Valérie; Todorova, Sonia

    2015-05-01

    Spike-based brain-computer interfaces (BCIs) have the potential to restore motor ability to people with paralysis and amputation, and have shown impressive performance in the lab. To transition BCI devices from the lab to the clinic, decoding must proceed automatically and in real time, which prohibits the use of algorithms that are computationally intensive or require manual tweaking. A common choice is to avoid spike sorting and treat the signal on each electrode as if it came from a single neuron, which is fast, easy, and therefore desirable for clinical use. But this approach ignores the kinematic information provided by the individual neurons recorded on the same electrode. The contribution of this letter is a linear decoding model that extracts kinematic information from individual neurons without spike-sorting the electrode signals. The method relies on modeling sample averages of waveform features as functions of kinematics, which is automatic and requires minimal data storage and computation. In offline reconstruction of arm trajectories of a nonhuman primate performing reaching tasks, the proposed method performs as well as decoders based on spikes sorted manually by experts or automatically.
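
    The core idea, regressing kinematics on per-bin sample averages of a waveform feature without spike sorting, can be sketched as follows; the single amplitude feature, the binning and the choice of ridge regression are our assumptions, not the letter's exact model:

```python
import numpy as np
from sklearn.linear_model import Ridge

def electrode_bin_features(spike_times, waveform_amps, t_edges):
    # Per time bin, for one electrode: spike count and mean waveform amplitude.
    counts, _ = np.histogram(spike_times, t_edges)
    sums, _ = np.histogram(spike_times, t_edges, weights=waveform_amps)
    mean_amp = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    return np.column_stack([counts, mean_amp])

# X: features from all electrodes stacked per bin, y: hand velocities per bin.
decoder = Ridge(alpha=1.0)
# decoder.fit(X_train, y_train); y_pred = decoder.predict(X_test)
```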

  20. An Internet-Based Method for Extracting Nursing Home Resident Sedative Medication Data From Pharmacy Packing Systems: Descriptive Evaluation

    PubMed Central

    Gee, Peter; Westbury, Juanita; Bindoff, Ivan; Peterson, Gregory

    2017-01-01

    Background Inappropriate use of sedating medication has been reported in nursing homes for several decades. The Reducing Use of Sedatives (RedUSe) project was designed to address this issue through a combination of audit, feedback, staff education, and medication review. The project significantly reduced sedative use in a controlled trial of 25 Tasmanian nursing homes. To expand the project to 150 nursing homes across Australia, an improved and scalable method of data collection was required. This paper describes and evaluates a method for remotely extracting, transforming, and validating electronic resident and medication data from community pharmacies supplying medications to nursing homes. Objective The aim of this study was to develop and evaluate an electronic method for extracting and enriching data on psychotropic medication use in nursing homes, on a national scale. Methods An application uploaded resident details and medication data from computerized medication packing systems in the pharmacies supplying participating nursing homes. The server converted medication codes used by the packing systems to Australian Medicines Terminology coding and subsequently to Anatomical Therapeutic Chemical (ATC) codes for grouping. Medications of interest, in this case antipsychotics and benzodiazepines, were automatically identified and quantified during the upload. These data were then validated on the Web by project staff and a “champion nurse” at the participating home. Results Of participating nursing homes, 94.6% (142/150) had resident and medication records uploaded. Facilitating an upload for one pharmacy took an average of 15 min. A total of 17,722 resident profiles were extracted, representing 95.6% (17,722/18,537) of the homes’ residents. For these, 546,535 medication records were extracted, of which 28,053 were identified as antipsychotics or benzodiazepines. Of these, 8.17% (2291/28,053) were modified during validation and verification stages, and 4.75% (1398/29,451) were added. The champion nurse required a mean of 33 min of website interaction to verify the data, compared with 60 min for manual data entry. Conclusions The results show that the electronic data collection process is accurate: 95.25% (28,053/29,451) of sedative medications being taken by residents were identified and, of those, 91.83% (25,762/28,053) were correct without any manual intervention. The process worked effectively for nearly all homes. Although the pharmacy packing systems contain some invalid patient records, and data is sometimes incorrectly recorded, validation steps can overcome these problems and provide sufficiently accurate data for the purposes of reporting medication use in individual nursing homes. PMID:28778844
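
    The grouping step (mapping each dispensed item to an ATC code and flagging antipsychotics and benzodiazepines) can be sketched as a simple prefix match. The ATC prefixes below (N05A for antipsychotics; N05BA and N05CD for benzodiazepines) are standard, but the record layout and helper function are hypothetical, not the project's actual schema.

        # Sketch: flag sedatives of interest by ATC prefix (record layout is hypothetical).
        SEDATIVE_PREFIXES = {
            "antipsychotic": ("N05A",),
            "benzodiazepine": ("N05BA", "N05CD"),
        }

        def classify_sedative(atc_code):
            for group, prefixes in SEDATIVE_PREFIXES.items():
                if atc_code.startswith(prefixes):
                    return group
            return None

        records = [
            {"resident": "A", "drug": "risperidone", "atc": "N05AX08"},
            {"resident": "A", "drug": "temazepam", "atc": "N05CD07"},
            {"resident": "B", "drug": "paracetamol", "atc": "N02BE01"},
        ]
        flagged = [(r["resident"], r["drug"], classify_sedative(r["atc"]))
                   for r in records if classify_sedative(r["atc"])]
        print(flagged)    # [('A', 'risperidone', 'antipsychotic'), ('A', 'temazepam', 'benzodiazepine')]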

  1. CAMERA: An integrated strategy for compound spectra extraction and annotation of LC/MS data sets

    PubMed Central

    Kuhl, Carsten; Tautenhahn, Ralf; Böttcher, Christoph; Larson, Tony R.; Neumann, Steffen

    2013-01-01

    Liquid chromatography coupled to mass spectrometry is routinely used for metabolomics experiments. In contrast to the fairly routine and automated data acquisition steps, subsequent compound annotation and identification require extensive manual analysis and thus form a major bottleneck in data interpretation. Here we present CAMERA, a Bioconductor package integrating algorithms to extract compound spectra, annotate isotope and adduct peaks, and propose the accurate compound mass even in highly complex data. To evaluate the algorithms, we compared the annotation of CAMERA against a manually defined annotation for a mixture of known compounds spiked into a complex matrix at different concentrations. CAMERA successfully extracted accurate masses for 89.7% and 90.3% of the annotatable compounds in positive and negative ion mode, respectively. Furthermore, we present a novel annotation approach that combines spectral information of data acquired in opposite ion modes to further improve the annotation rate. We demonstrate the utility of CAMERA in two different, easily adoptable plant metabolomics experiments, where the application of CAMERA drastically reduced the amount of manual analysis. PMID:22111785
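
    One building block of this kind of compound-spectrum annotation is detecting isotope peaks among co-eluting features by their characteristic spacing of about 1.0034 Da. The snippet below illustrates only that idea in Python with assumed peak tuples; it is not CAMERA's actual R/Bioconductor implementation.

        # Sketch: flag putative 13C isotope peaks among co-eluting features (illustrative only).
        C13_DELTA = 1.00336   # mass difference between 13C and 12C

        def annotate_isotopes(peaks, mz_tol=0.005, charge=1):
            """peaks: list of (mz, intensity) for features already grouped by retention time."""
            annotations = {}
            for mz, inten in sorted(peaks):
                for mz2, inten2 in peaks:
                    if abs(mz2 - (mz + C13_DELTA / charge)) <= mz_tol and inten2 < inten:
                        annotations[mz2] = f"[M+1] isotope of {mz:.4f}"
            return annotations

        group = [(181.0707, 1.0e6), (182.0741, 1.1e5), (195.0863, 4.0e5)]
        print(annotate_isotopes(group))   # {182.0741: '[M+1] isotope of 181.0707'}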

  2. Simultaneous determination of fumonisins B1 and B2 in different types of maize by matrix solid phase dispersion and HPLC-MS/MS.

    PubMed

    de Oliveira, Gabriel Barros; de Castro Gomes Vieira, Carolyne Menezes; Orlando, Ricardo Mathias; Faria, Adriana Ferreira

    2017-10-15

    This work involved the optimization and validation of a method, according to Directive 2002/657/EC and the Analytical Quality Assurance Manual of Ministério da Agricultura, Pecuária e Abastecimento, Brazil, for simultaneous extraction and determination of fumonisins B1 and B2 in maize. The extraction procedure was based on a matrix solid phase dispersion approach, the optimization of which employed a sequence of different factorial designs. A liquid chromatography-tandem mass spectrometry method was developed for determining these analytes using the selected reaction monitoring mode. The optimized method employed only 1 g of silica gel for dispersion and elution with 70% ammonium formate aqueous buffer (50 mmol L-1, pH 9), representing a simple, cheap and chemically friendly sample preparation method. Trueness (recoveries: 86-106%), precision (RSD ≤19%), decision limits, detection capabilities and measurement uncertainties were calculated for the validated method. The method scope was expanded to popcorn kernels, white maize kernels and yellow maize grits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Classification of Mls Point Clouds in Urban Scenes Using Detrended Geometric Features from Supervoxel-Based Local Contexts

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.

    2018-05-01

    In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To achieve the analysis of complex 3D urban scenes, acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed, and proves effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as the basic element, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is compared with that of other methods. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
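
    Local geometric features of this kind are commonly derived from the eigenvalues of the covariance matrix of the points in a neighbourhood (linearity, planarity, scattering). The following is a generic NumPy sketch of that idea, not the paper's detrended, supervoxel-based features.

        import numpy as np

        # Sketch: generic eigenvalue-based geometric features of a local point neighbourhood
        # (not the detrended features proposed in the paper).
        def geometric_features(points):
            """points: (N, 3) array of 3D points in one local context, e.g. a supervoxel."""
            centered = points - points.mean(axis=0)
            cov = centered.T @ centered / len(points)
            l3, l2, l1 = np.linalg.eigvalsh(cov)      # ascending eigenvalues, relabeled so l1 >= l2 >= l3
            l1 = max(l1, 1e-12)
            return {
                "linearity": (l1 - l2) / l1,
                "planarity": (l2 - l3) / l1,
                "scattering": l3 / l1,
            }

        rng = np.random.default_rng(1)
        plane = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])   # roughly planar cloud
        print(geometric_features(plane))                                 # planarity dominates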

  4. Assessment of commercial NLP engines for medication information extraction from dictated clinical notes.

    PubMed

    Jagannathan, V; Mullett, Charles J; Arbogast, James G; Halbritter, Kevin A; Yellapragada, Deepthi; Regulapati, Sushmitha; Bandaru, Pavani

    2009-04-01

    We assessed the current state of commercial natural language processing (NLP) engines for their ability to extract medication information from textual clinical documents. Two thousand de-identified discharge summaries and family practice notes were submitted to four commercial NLP engines with the request to extract all medication information. The four sets of returned results were combined to create a comparison standard which was validated against a manual, physician-derived gold standard created from a subset of 100 reports. Once validated, the individual vendor results for medication names, strengths, route, and frequency were compared against this automated standard with precision, recall, and F measures calculated. Compared with the manual, physician-derived gold standard, the automated standard was successful at accurately capturing medication names (F measure=93.2%), but performed less well with strength (85.3%) and route (80.3%), and relatively poorly with dosing frequency (48.3%). Moderate variability was seen in the strengths of the four vendors. The vendors performed better with the structured discharge summaries than with the clinic notes in an analysis comparing the two document types. Although automated extraction may serve as the foundation for a manual review process, it is not ready to automate medication lists without human intervention.
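
    The evaluation described, comparing precision, recall and F-measure of extracted medication fields against a physician-derived gold standard, reduces to counting exact matches between extracted and reference items. A minimal sketch with hypothetical medication tuples:

        # Sketch: precision / recall / F-measure of extracted items against a gold standard.
        def evaluate(extracted, gold):
            extracted, gold = set(extracted), set(gold)
            tp = len(extracted & gold)
            precision = tp / len(extracted) if extracted else 0.0
            recall = tp / len(gold) if gold else 0.0
            f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
            return precision, recall, f

        gold = {("metoprolol", "50 mg", "oral", "bid"), ("lisinopril", "10 mg", "oral", "daily")}
        nlp_output = {("metoprolol", "50 mg", "oral", "bid"), ("aspirin", "81 mg", "oral", "daily")}
        print(evaluate(nlp_output, gold))   # (0.5, 0.5, 0.5)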

  5. Bayesian convolutional neural network based MRI brain extraction on nonhuman primates.

    PubMed

    Zhao, Gengyan; Liu, Fang; Oler, Jonathan A; Meyerand, Mary E; Kalin, Ned H; Birn, Rasmus M

    2018-07-15

    Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining deep Bayesian convolutional neural network (CNN) and fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. Then, fully connected 3D CRF is used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. A better performance against all the compared methods was verified by statistical tests (all p-values < 10(-4), two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all the 100 subjects. The behavior of the uncertainty was also studied, which shows the uncertainty increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training set and the testing set increases. Copyright © 2018 Elsevier Inc. All rights reserved.
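
    The uncertainty estimate described, Monte Carlo sampling with dropout kept active at test time, can be sketched in PyTorch as follows. This is a generic illustration of the technique with a toy network and patch size, not the authors' Bayesian SegNet pipeline.

        import torch
        import torch.nn as nn

        # Sketch: Monte Carlo dropout at inference time to obtain a per-voxel uncertainty
        # map (toy network, not the authors' Bayesian SegNet).
        net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p=0.5),
            nn.Conv3d(8, 2, 1),                          # 2 classes: background / brain
        )

        def mc_dropout_predict(model, volume, n_samples=20):
            model.eval()
            for m in model.modules():                    # keep dropout stochastic during inference
                if isinstance(m, nn.Dropout3d):
                    m.train()
            with torch.no_grad():
                probs = torch.stack([torch.softmax(model(volume), dim=1)
                                     for _ in range(n_samples)])
            return probs.mean(dim=0), probs.var(dim=0)   # mean segmentation, uncertainty map

        volume = torch.randn(1, 1, 16, 16, 16)           # toy T1w patch: (batch, channel, D, H, W)
        mean_prob, uncertainty = mc_dropout_predict(net, volume)
        print(mean_prob.shape, uncertainty.shape)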

  6. Calibration of the Software Architecture Sizing and Estimation Tool (SASET).

    DTIC Science & Technology

    1995-09-01

    model is of more value than the uncalibrated one. Also, as will be discussed in Chapters 3 and 4, there are quite a few manual (and undocumented) steps...complexity, normalized effective size, and normalized effort. One other field ("development phases included") was extracted manually since it was not listed...Bowden, R.G., Cheadle, W.G., & Ratliff, R.W. SASET 3.0 Technical Reference Manual . Publication S-3730-93-2. Denver: Martin Marietta Astronautics

  7. Validation of a semi-automatic protocol for the assessment of the tear meniscus central area based on open-source software

    NASA Astrophysics Data System (ADS)

    Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.

    2017-08-01

    Purpose: Different lower tear meniscus parameters can be clinically assessed in dry eye diagnosis. The aim of this study was to propose and analyse the variability of a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was generated by a digital camera attached to a slit-lamp. A short light beam (3x5 mm) with moderate illumination in the central portion of the meniscus (6 o'clock) was used. Images were extracted from each video by a masked observer. Using open source software based on Java (NIH ImageJ), a further observer measured the TMCA in the short light beam illuminated area, in a masked and randomized order, by two methods: (1) a manual method, in which the TMCA was measured by hand; (2) a semi-automatic method, in which TMCA images were converted to 8-bit binary images, holes inside the resulting shape were filled, and the area of the isolated shape was obtained. Finally, the manual and semi-automatic measurements were compared. Results: A paired t-test showed no statistically significant difference between the results of the two techniques (p = 0.102). Pearson correlation between the techniques showed a significant, near-perfect positive correlation (r = 0.99; p < 0.001). Conclusions: This study presents a useful tool for objectively measuring the frontal central area of the meniscus in photographs with free open source software.
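
    The semi-automatic step described (binarise the meniscus image, fill holes, and measure the area of the isolated shape) maps onto standard image-processing primitives. A sketch with SciPy and scikit-image follows as an equivalent illustration; the original protocol used ImageJ, and the pixel size below is a placeholder.

        import numpy as np
        from scipy import ndimage
        from skimage import filters

        # Sketch: binarise, fill holes, keep the largest component, and measure its area
        # (equivalent illustration of the ImageJ-based protocol, not the study's macro).
        def meniscus_area(gray_image, pixel_size_mm=0.01):
            binary = gray_image > filters.threshold_otsu(gray_image)
            filled = ndimage.binary_fill_holes(binary)
            labels, n = ndimage.label(filled)
            if n == 0:
                return 0.0
            sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
            largest = labels == (np.argmax(sizes) + 1)
            return largest.sum() * pixel_size_mm ** 2      # area in mm^2

        image = np.zeros((100, 100))
        image[40:60, 20:80] = 1.0                          # toy bright meniscus band
        print(meniscus_area(image))                        # 1200 pixels -> 0.12 mm^2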

  8. A new background distribution-based active contour model for three-dimensional lesion segmentation in breast DCE-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Liu, Yiping; Qiu, Tianshuang

    2014-08-15

    Purpose: To develop and evaluate a computerized semiautomatic segmentation method for accurate extraction of three-dimensional lesions from dynamic contrast-enhanced magnetic resonance images (DCE-MRIs) of the breast. Methods: The authors propose a new background distribution-based active contour model using level set (BDACMLS) to segment lesions in breast DCE-MRIs. The method starts with manual selection of a region of interest (ROI) that contains the entire lesion in a single slice where the lesion is enhanced. Then the lesion volume is separated from the volume data of interest, which is captured automatically. The core idea of BDACMLS is a new signed pressure function which is based solely on the intensity distribution combined with a pathophysiological basis. To compare the algorithm results, two experienced radiologists delineated all lesions jointly to obtain the ground truth. In addition, results generated by other methods based on level set (LS) are also compared with the authors’ method. Finally, the performance of the proposed method is evaluated by several region-based metrics such as the overlap ratio. Results: Forty-two studies with 46 lesions, comprising 29 benign and 17 malignant lesions, are evaluated. The dataset includes various typical pathologies of the breast such as invasive ductal carcinoma, ductal carcinoma in situ, scar carcinoma, phyllodes tumor, breast cysts, fibroadenoma, etc. The overlap ratio for BDACMLS with respect to manual segmentation is 79.55% ± 12.60% (mean ± s.d.). Conclusions: A new active contour model method has been developed and shown to successfully segment breast DCE-MRI three-dimensional lesions. The results from this model correspond more closely to manual segmentation, solve the problem of contours leaking through weak edges, and improve the robustness in segmenting different lesions.

  9. Simple and efficient method for region of interest value extraction from picture archiving and communication system viewer with optical character recognition software and macro program.

    PubMed

    Lee, Young Han; Park, Eun Hae; Suh, Jin-Suck

    2015-01-01

    The objectives were: 1) to introduce a simple and efficient method for extracting region of interest (ROI) values from a Picture Archiving and Communication System (PACS) viewer using optical character recognition (OCR) software and a macro program, and 2) to evaluate the accuracy of this method on a PACS workstation. The module was designed to extract the ROI values shown on PACS images and was created as a development tool using open-source OCR software and an open-source macro program. The principal steps are as follows: (1) capture the region showing the ROI values as a graphic file for OCR, (2) recognize the text from the captured image with the OCR software, (3) perform error correction, (4) extract the values, including area, average, standard deviation, maximum, and minimum, from the text, (5) reformat the values into temporary strings with tabs, and (6) paste the temporary strings into the spreadsheet. This process was repeated for each ROI. The accuracy of the module was evaluated on 1040 recognitions from 280 randomly selected ROIs of magnetic resonance images. ROI input times were compared between the conventional manual method and the extraction module-assisted input method. The module for extracting ROI values operated successfully using the OCR and macro programs. The values of the area, average, standard deviation, maximum, and minimum could be recognized and error-corrected with the AutoHotkey-coded module. The average input times using the conventional method and the proposed module-assisted method were 34.97 seconds and 7.87 seconds, respectively. A simple and efficient method for ROI value extraction was developed with open-source OCR software and a macro program. Accurate values from various ROIs can be extracted with this module. The proposed module could be applied to the next generation of PACS or to existing PACS that have not yet been upgraded. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
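
    The principal pipeline (capture the ROI readout, run OCR, then reformat the numbers into a paste-ready spreadsheet row) can be sketched with the pytesseract wrapper and a regular expression. The file name, regular expression and field order are assumptions for illustration; the published module used its own OCR component and AutoHotkey macros.

        import re
        from PIL import Image
        import pytesseract

        # Sketch: OCR an ROI-statistics screenshot and reformat the numbers as a
        # tab-separated row (illustrative; field labels and layout are assumed).
        ROI_PATTERN = re.compile(
            r"Area[:\s]+([\d.]+).*?Mean[:\s]+([\d.]+).*?SD[:\s]+([\d.]+)"
            r".*?Max[:\s]+([\d.]+).*?Min[:\s]+([\d.]+)",
            re.IGNORECASE | re.DOTALL,
        )

        def roi_values_to_row(screenshot_path):
            text = pytesseract.image_to_string(Image.open(screenshot_path))
            match = ROI_PATTERN.search(text)
            if match is None:
                raise ValueError("ROI statistics not recognized; manual check needed")
            area, mean, sd, vmax, vmin = match.groups()
            return "\t".join([area, mean, sd, vmax, vmin])   # paste-ready spreadsheet row

        # Hypothetical usage: print(roi_values_to_row("roi_capture_001.png"))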

  10. Forensic print extraction using 3D technology and its processing

    NASA Astrophysics Data System (ADS)

    Rajeev, Srijith; Shreyas, Kamath K. M.; Panetta, Karen; Agaian, Sos S.

    2017-05-01

    Biometric evidence plays a crucial role in crime scene analysis. Forensic prints can be extracted from any solid surface, such as firearms, doorknobs, carpets and mugs. Prints such as fingerprints, palm prints, footprints and lip-prints can be classified into patent, latent, and three-dimensional plastic prints. Traditionally, law enforcement officers capture these forensic traits using an electronic device or extract them manually, and save the data electronically using special scanners. The reliability and accuracy of the method depend on the ability of the officer or the electronic device to extract and analyze the data. Furthermore, the 2-D acquisition and processing system is laborious and cumbersome. This can lead to an increase in false positive and false negative rates in print matching. In this paper, a method and system to extract forensic prints from any surface, irrespective of its shape, are presented. First, a suitable 3-D camera is used to capture images of the forensic print, and then the 3-D image is processed and unwrapped to obtain 2-D equivalent biometric prints. Computer simulations demonstrate the effectiveness of using 3-D technology for biometric matching of fingerprints, palm prints, and lip-prints. This system can be further extended to other biometric and non-biometric modalities.

  11. Quantifying the robustness of [18F]FDG-PET/CT radiomic features with respect to tumor delineation in head and neck and pancreatic cancer patients.

    PubMed

    Belli, Maria Luisa; Mori, Martina; Broggi, Sara; Cattaneo, Giovanni Mauro; Bettinardi, Valentino; Dell'Oca, Italo; Fallanca, Federico; Passoni, Paolo; Vanoli, Emilia Giovanna; Calandrino, Riccardo; Di Muzio, Nadia; Picchio, Maria; Fiorino, Claudio

    2018-05-01

    To investigate the robustness of PET radiomic features (RF) against tumour delineation uncertainty in two clinically relevant situations. Twenty-five head-and-neck (HN) and 25 pancreatic cancer patients previously treated with 18 F-Fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT)-based planning optimization were considered. Seven FDG-based contours were delineated for tumour (T) and positive lymph nodes (N, for HN patients only) following manual (2 observers), semi-automatic (based on the SUV maximum gradient: PET_Edge) and automatic (40%, 50%, 60%, 70% SUV_max thresholds) methods. Seventy-three RF (14 of first order and 59 of higher order) were extracted using the CGITA software (v.1.4). The impact of delineation on volume agreement and RF was assessed by DICE and Intra-class Correlation Coefficients (ICC). A large disagreement between the manual and SUV_max threshold methods was found for thresholds ≥50%. Inter-observer variability showed median DICE values between 0.81 (HN-T) and 0.73 (pancreas). Volumes defined by PET_Edge were more consistent with the manual ones than those defined by SUV40%. Regarding RF, 19%/19%/47% of the features showed ICC < 0.80 between observers for HN-N/HN-T/pancreas, mostly in the voxel-alignment matrix and intensity-size zone matrix families. RFs with ICC < 0.80 against manual delineation (taking the worst value) increased to 44%/36%/61% for PET_Edge and to 69%/53%/75% for SUV40%. About 80%/50% of 72 RF were consistent between observers for HN/pancreas patients. PET_Edge was sufficiently robust against manual delineation, while SUV40% performed worse. This result suggests the possibility of replacing manual with semi-automatic delineation of HN and pancreas tumours in studies including PET radiomic analyses. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
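
    The volume-agreement metric used here (DICE) is twice the overlap of two delineations divided by the sum of their volumes. A minimal sketch for binary masks stored as NumPy arrays, illustrative only and not the study's analysis code:

        import numpy as np

        # Sketch: Dice similarity coefficient between two binary delineation masks.
        def dice(mask_a, mask_b):
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

        observer_1 = np.zeros((32, 32, 32), dtype=bool); observer_1[8:20, 8:20, 8:20] = True
        observer_2 = np.zeros((32, 32, 32), dtype=bool); observer_2[10:22, 8:20, 8:20] = True
        print(round(dice(observer_1, observer_2), 3))   # partial overlap -> 0.833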

  12. An optimized work-flow to reduce time-to-detection of carbapenemase-producing Enterobacteriaceae (CPE) using direct testing from rectal swabs.

    PubMed

    O'Connor, C; Kiernan, M G; Finnegan, C; O'Hara, M; Power, L; O'Connell, N H; Dunne, C P

    2017-05-04

    Rapid detection of patients with carbapenemase-producing Enterobacteriaceae (CPE) is essential for the prevention of nosocomial cross-transmission, allocation of isolation facilities and to protect patient safety. Here, we aimed to design a new laboratory work-flow, utilizing existing laboratory resources, in order to reduce time-to-diagnosis of CPE. A review of the current CPE testing processes and of the literature was performed to identify a real-time commercial polymerase chain reaction (PCR) assay that could facilitate batch testing of CPE clinical specimens, with adequate CPE gene coverage. Stool specimens (n = 210) were collected: CPE-positive inpatients (n = 10) and anonymized community stool specimens (n = 200). Rectal swabs (eSwab™) were inoculated from collected stool specimens and a manual DNA extraction method (QIAamp® DNA Stool Mini Kit) was employed. Extracted DNA was then processed on the Check-Direct CPE® assay. The three-step process of making the eSwab™, extracting DNA manually and running the Check-Direct CPE® assay took <5 min, 1 h 30 min and 1 h 50 min, respectively. It was time efficient, with a result available in under 4 h, comparing favourably with the existing method of CPE screening, which has an average time-to-diagnosis of 48-72 h. Utilizing this CPE work-flow would allow a 'same-day' result. Antimicrobial susceptibility testing results, as is current practice, would remain a 'next-day' result. In conclusion, the Check-Direct CPE® assay was easily integrated into a local laboratory work-flow and could facilitate a large volume of CPE screening specimens in a single batch, making it cost-effective and convenient for daily CPE testing.

  13. Multi-Modal Glioblastoma Segmentation: Man versus Machine

    PubMed Central

    Pica, Alessia; Schucht, Philippe; Beck, Jürgen; Verma, Rajeev Kumar; Slotboom, Johannes; Reyes, Mauricio; Wiest, Roland

    2014-01-01

    Background and Purpose Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. Methods We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema) and the contrast-enhancing tumor volume CETV were identified. We quantified the overlap between manual and automated segmentation by calculation of diameter measurements as well as the Dice coefficients, the positive predictive values, sensitivity, relative volume error and absolute volume error. Results Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p<0.05) but no significant differences for CETV (p>0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. Conclusions In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extensions. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity. PMID:24804720

  14. Methicillin-resistant Staphylococcus aureus: an overview for manual therapists.

    PubMed

    Green, Bart N; Johnson, Claire D; Egan, Jonathon Todd; Rosenthal, Michael; Griffith, Erin A; Evans, Marion Willard

    2012-03-01

    Methicillin-resistant Staphylococcus aureus (MRSA) is associated with difficult-to-treat infections and high levels of morbidity. Manual practitioners work in environments where MRSA is a common acquired infection. The purpose of this review is to provide a practical overview of MRSA as it applies to the manual therapy professions (eg, physical and occupational therapy, athletic training, chiropractic, osteopathy, massage, sports medicine) and to discuss how to identify and prevent MRSA infections in manual therapy work environments. PubMed and CINAHL were searched from the beginning of their respective indexing years through June 2011 using the search terms MRSA, methicillin-resistant Staphylococcus aureus, and Staphylococcus aureus. Texts and authoritative Web sites were also reviewed. Pertinent articles from the authors' libraries were included if they were not already identified in the literature search. Articles were included if they were applicable to ambulatory health care environments in which manual therapists work or if the content of the article related to the clinical management of MRSA. Following information extraction, 95 citations were included in this review, comprising 76 peer-reviewed journal articles, 16 government Web sites, and 3 textbooks. Information was organized into 10 clinically relevant categories for presentation: microbiology, development of MRSA, risk factors for infection, clinical presentation, diagnostic tests, screening tests, reporting, treatment, prevention for patients and athletes, and prevention for health care workers. Methicillin-resistant S aureus is a health risk in the community and to patients and athletes treated by manual therapists. Manual practitioners can play an essential role in recognizing MRSA infections and helping to control its transmission in the health care environment and the community. Essential methods for protecting patients and health care workers include being aware of presenting signs, patient education, and using appropriate hand and clinic hygiene.

  15. A combined Fuzzy and Naive Bayesian strategy can be used to assign event codes to injury narratives.

    PubMed

    Marucci-Wellman, H; Lehto, M; Corns, H

    2011-12-01

    Bayesian methods show promise for classifying injury narratives from large administrative datasets into cause groups. This study examined a combined approach where two Bayesian models (Fuzzy and Naïve) were used to either classify a narrative or select it for manual review. Injury narratives were extracted from claims filed with a workers' compensation insurance provider between January 2002 and December 2004. Narratives were separated into a training set (n=11,000) and prediction set (n=3,000). Expert coders assigned two-digit Bureau of Labor Statistics Occupational Injury and Illness Classification event codes to each narrative. Fuzzy and Naïve Bayesian models were developed using manually classified cases in the training set. Two semi-automatic machine coding strategies were evaluated. The first strategy assigned cases for manual review if the Fuzzy and Naïve models disagreed on the classification. The second strategy selected additional cases for manual review from the Agree dataset using prediction strength, to reach a level of 50% computer coding and 50% manual coding. When agreement alone was used as the filtering strategy, the majority were coded by the computer (n=1,928, 64%), leaving 36% for manual review. The overall combined (human plus computer) sensitivity was 0.90 and positive predictive value (PPV) was >0.90 for 11 of 18 two-digit event categories. Implementing the second strategy improved results, with an overall sensitivity of 0.95 and PPV >0.90 for 17 of 18 categories. A combined Naïve-Fuzzy Bayesian approach can classify some narratives with high accuracy and identify others most beneficial for manual review, reducing the burden on human coders.
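
    The first filtering strategy, auto-coding a narrative only when two independently trained classifiers agree and otherwise routing it for manual review, is easy to sketch with scikit-learn. The two pipelines below (multinomial Naive Bayes on word counts versus on TF-IDF features) merely stand in for the study's Fuzzy and Naïve Bayesian models, and the tiny training set is hypothetical.

        from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Sketch: auto-code narratives only when two classifiers agree; disagreements go
        # to manual review. The two pipelines stand in for the Fuzzy and Naive models.
        train_narratives = ["slipped on wet floor and fell", "cut finger on box cutter",
                            "lifted heavy box and strained back", "fell from ladder"]
        train_codes = ["fall", "cut", "overexertion", "fall"]

        model_a = make_pipeline(CountVectorizer(), MultinomialNB()).fit(train_narratives, train_codes)
        model_b = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train_narratives, train_codes)

        for text in ["worker fell on icy walkway", "strained shoulder moving pallet"]:
            a, b = model_a.predict([text])[0], model_b.predict([text])[0]
            print(text, "->", a if a == b else "MANUAL REVIEW")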

  16. Systematic optimization of ethyl glucuronide extraction conditions from scalp hair by design of experiments and its potential effect on cut-off values appraisal.

    PubMed

    Alladio, Eugenio; Biosa, Giulia; Seganti, Fabrizio; Di Corcia, Daniele; Salomone, Alberto; Vincenti, Marco; Baumgartner, Markus R

    2018-05-11

    The quantitative determination of ethyl glucuronide (EtG) in hair samples is consistently used throughout the world to assess chronic excessive alcohol consumption. For administrative and legal purposes, the analytical results are compared with cut-off values recognized by regulatory authorities and scientific societies. However, it has been recently recognized that the analytical results depend on the hair sample pretreatment procedures, including the crumbling and extraction conditions. A systematic evaluation of the EtG extraction conditions from pulverized scalp hair was conducted by design of experiments (DoE), considering the extraction time, temperature, pH, and solvent composition as potential influencing factors. It was concluded that an overnight extraction at 60°C with pure water at neutral pH represents the most effective conditions to achieve high extraction yields. The absence of differential degradation of the internal standard (isotopically-labeled EtG) under such conditions was confirmed and the overall analytical method was validated according to SWGTOX and ISO 17025 criteria. Twenty real hair samples with different EtG content were analyzed with three commonly accepted procedures: (a) hair manually cut into snippets and extracted at room temperature; (b) pulverized hair extracted at room temperature; (c) hair treated with the optimized method. Average increments of EtG concentration of around 69% (from a to c) and 29% (from b to c) were recorded. In light of these results, the authors urge the scientific community to undertake an inter-laboratory study with the aim of defining in more detail the optimal hair EtG detection method and verifying the corresponding cut-off level for legal enforcement. This article is protected by copyright. All rights reserved.

  17. IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion.

    PubMed

    Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar

    2017-11-27

    The widespread use of wearable sensors, such as those in smart watches, has provided continuous access to valuable user-generated data such as human motion, which can be used to identify an individual based on motion patterns such as gait. Several methods have been suggested to extract various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual, hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning models may limit the generalization capabilities. In this paper, we propose a novel approach for human gait identification using time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. Then, we design a deep convolutional neural network (DCNN) to learn discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data from five inertial sensors placed at the chest, lower-back, right hand wrist, right knee, and right ankle of each human subject synchronously in order to investigate the impact of sensor location on the gait identification performance. We then present two methods for early (input level) and late (decision score level) multi-sensor fusion to improve the gait identification generalization performance. We specifically propose the minimum error score fusion (MESF) method that discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by minimizing the error rate on the training data in an iterative manner. Ten subjects participated in this study; hence, the problem is a 10-class identification task. Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
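
    The time-frequency expansion that feeds the DCNN can be illustrated with a plain spectrogram of one accelerometer channel. The sketch below uses scipy.signal.spectrogram on a synthetic signal; the gait cycle segmentation, sensor set-up and the network itself are not reproduced here, and the signal parameters are placeholders.

        import numpy as np
        from scipy.signal import spectrogram

        # Sketch: 2D time-frequency expansion of one IMU channel for a single gait cycle
        # (synthetic signal; the paper's cycle segmentation and DCNN are not reproduced).
        fs = 100.0                                    # sampling rate in Hz
        t = np.arange(0, 1.2, 1 / fs)                 # ~1.2 s gait cycle
        accel = (np.sin(2 * np.pi * 2 * t)            # step-frequency component
                 + 0.5 * np.sin(2 * np.pi * 8 * t)    # higher-frequency gait detail
                 + 0.05 * np.random.default_rng(0).normal(size=t.size))

        freqs, times, Sxx = spectrogram(accel, fs=fs, nperseg=32, noverlap=24)
        tf_image = np.log1p(Sxx)                      # 2D input "image" for the CNN
        print(tf_image.shape)                         # (frequency bins, time frames)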

  18. [Learning curve of vacuum extraction in residency: a preliminary study].

    PubMed

    Velemir, L; Vendittelli, F; Bonnefoy, C; Accoceberry, M; Savary, D; Gallot, D

    2009-09-01

    The aim of this study was to assess the learning curve of young residents for vacuum extraction. All vacuum extractions performed in our department by five residents (≤5th semester) during a study period of nine months were systematically supervised by a senior who completed an assessment questionnaire, from which a score reflecting the quality of the extraction was calculated. Fifty-four vacuum extractions were assessed, with a mean of 10.8 ± 2.9 (range, 10-13) procedures per resident. We compared the group including the six first procedures performed by each resident (group 1, n = 30) with the group including the following procedures (group 2, n = 24). We observed in group 2, compared with group 1, a significant improvement in the mean score (12.3 ± 5.4 vs 8.4 ± 6.2, p = 0.016) and a significant reduction in the need for manual assistance by the senior (12.5% vs 40%, p = 0.034). We report a method for the learning and assessment of vacuum extraction that is feasible at the patient's bedside. This approach showed significant progression of the residents in the technique of vacuum extraction over about a dozen procedures.

  19. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  20. An On-Line Solid Phase Extraction-Liquid Chromatography-Tandem Mass Spectrometry Method for the Determination of Perfluoroalkyl Acids in Drinking and Surface Waters

    PubMed Central

    Mazzoni, Michela; Rusconi, Marianna; Valsecchi, Sara; Martins, Claudia P. B.; Polesello, Stefano

    2015-01-01

    A UHPLC-MS/MS multiresidue method based on an on-line solid phase extraction (SPE) procedure was developed for the simultaneous determination of 9 perfluorinated carboxylates (from 4 to 12 carbon atoms) and 3 perfluorinated sulphonates (from 4 to 8 carbon atoms). This work proposes using an on-line solid phase extraction before chromatographic separation and analysis to replace traditional methods of off-line SPE before direct injection to LC-MS/MS. Manual sample preparation was reduced to sample centrifugation and acidification, thus eliminating several procedural errors and significantly reducing time consumption and costs. Ionization suppression between target perfluorinated analytes and their coeluting SIL-IS was detected for homologues with fewer than 9 carbon atoms, but the quantitation was not affected. The total matrix effect corrected by SIL-IS, inclusive of extraction efficacy and ionization efficiency, ranged between −34% and +39%. Recoveries, between 76% and 134%, calculated in different matrices (tap water and rivers impacted by different pollution sources), were generally satisfactory. LODs and LOQs of this on-line SPE method, which also incorporate recovery losses, ranged from 0.2 to 5.0 ng/L and from 1 to 20 ng/L, respectively. The validated on-line SPE-LC/MS/MS method has been applied in a wide survey for the determination of perfluoroalkyl acids in Italian surface and ground waters. PMID:25834752

  1. PREDOSE: A Semantic Web Platform for Drug Abuse Epidemiology using Social Media

    PubMed Central

    Cameron, Delroy; Smith, Gary A.; Daniulaityte, Raminta; Sheth, Amit P.; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z.; Falck, Russel

    2013-01-01

    Objectives The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel Semantic Web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO) (pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC). A combination of lexical, pattern-based and semantics-based techniques is used together with the domain knowledge to extract fine-grained semantic information from UGC. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Methods Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, routes of administration, etc. The DAO is also used to help recognize three types of data, namely: 1) entities, 2) relationships and 3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information from UGC, and querying, search, trend analysis and overall content analysis of social media related to prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. Results A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. 
Extracted semantic information is currently in use in an online discovery support system by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. Conclusion A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in the future. PMID:23892295

  2. Determination of ethyl glucuronide in hair: a rapid sample pretreatment involving simultaneous milling and extraction.

    PubMed

    Mönch, Bettina; Becker, Roland; Nehls, Irene

    2014-01-01

    A combination of simultaneous milling and extraction known as micropulverized extraction was developed for the quantification of the alcohol marker ethyl glucuronide (EtG) in hair samples using a homogeneous reference material and a mixer mill. The best extraction results from 50 mg of hair were obtained with 2-mL plastic tubes containing two steel balls (∅ = 5 mm) and 0.5 mL of water, with an oscillation frequency of 30 s(-1) over a period of 30 min. EtG was quantified employing a validated GC-MS procedure involving derivatization with pentafluoropropionic acid anhydride. This micropulverization procedure was compared with dry milling followed by separate aqueous extraction and with aqueous extraction after manual cutting into millimeter-size snippets. Micropulverization yielded 28.0 ± 1.70 pg/mg and was superior to manual cutting (23.0 ± 0.83 pg/mg) and equivalent to dry grinding (27.7 ± 1.71 pg/mg) with regard to completeness of EtG extraction. The option to process up to 20 samples simultaneously makes micropulverization especially valuable for the high throughput of urgent samples.

  3. Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery

    NASA Astrophysics Data System (ADS)

    Pozdin, Maksym A.; Skrinjar, Oskar

    2005-04-01

    This paper presents an automated algorithm for extraction of a Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids and, given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from post-implant MRI scans, i.e. finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.

  4. Automated solid-phase extraction and liquid chromatography for assay of cyclosporine in whole blood.

    PubMed

    Kabra, P M; Wall, J H; Dimson, P

    1987-12-01

    In this rapid, precise, accurate, cost-effective, automated liquid-chromatographic procedure for determining cyclosporine in whole blood, the cyclosporine is extracted from 0.5 mL of whole blood together with 300 micrograms of cyclosporin D per liter, added as internal standard, by using an Advanced Automated Sample Processing unit. The on-line solid-phase extraction is performed on an octasilane sorbent cartridge, which is interfaced with a RP-8 guard column and an octyl analytical column packed with 5-micron packing material. Both columns are eluted with a mobile phase containing acetonitrile/methanol/water (53/20/27 by vol) at a flow rate of 1.5 mL/min and a column temperature of 70 degrees C. Absolute recovery of cyclosporine exceeded 85% and the standard curve was linear to 5000 micrograms/L. Within-run and day-to-day CVs were less than 8%. Correlation between the automated and manual Bond-Elut extraction methods was excellent (r = 0.987). None of 18 drugs and four steroids tested interfered.

  5. Automated detection of lung nodules with three-dimensional convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Pérez, Gustavo; Arbeláez, Pablo

    2017-11-01

    Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computer tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of pre-processing of a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective at producing precise candidates, with a recall of 99.6%. In addition, the false positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.

  6. High-throughput and direct measurement of androgen levels using turbulent flow chromatography liquid chromatography-triple quadrupole mass spectrometry (TFC-LC-TQMS) to discover chemicals that modulate dihydrotestosterone production in human prostate cancer cells.

    PubMed

    Kang, Kyungsu; Peng, Lei; Jung, Yu-Jin; Kim, Joo Yeon; Lee, Eun Ha; Lee, Hee Ju; Kim, Sang Min; Sung, Sang Hyun; Pan, Cheol-Ho; Choi, Yongsoo

    2018-02-01

    To develop a high-throughput screening system to measure the conversion of testosterone to dihydrotestosterone (DHT) in cultured human prostate cancer cells using turbulent flow chromatography liquid chromatography-triple quadrupole mass spectrometry (TFC-LC-TQMS). After optimizing the cell reaction system, this method demonstrated a screening capability of 103 samples, including 78 single compounds and 25 extracts, in less than 12 h without manual sample preparation. Consequently, fucoxanthin, phenethyl caffeate, and Curcuma longa L. extract were validated as bioactive chemicals that inhibited DHT production in cultured DU145 cells. In addition, naringenin boosted DHT production in DU145 cells. The method can facilitate the discovery of bioactive chemicals that modulate the DHT production, and four phytochemicals are potential candidates of nutraceuticals to adjust DHT levels in male hormonal dysfunction.

  7. Detection of Pelger-Huet anomaly based on augmented fast marching method and speeded up robust features.

    PubMed

    Sun, Minglei; Yang, Shaobao; Jiang, Jinling; Wang, Qiwei

    2015-01-01

    Pelger-Huet anomaly (PHA) and Pseudo Pelger-Huet anomaly (PPHA) are neutrophils with abnormal morphology. They have a bilobed or unilobed nucleus and excessively clumped chromatin. Currently, detection of this kind of cell depends mainly on manual microscopic examination by a clinician; thus, the quality of detection is limited by the clinician's efficiency and subjectivity. In this paper, a detection method for PHA and PPHA is proposed based on karyomorphism and chromatin distribution features. Firstly, the skeleton of the nucleus is extracted using an augmented Fast Marching Method (AFMM) and the width distribution is obtained through a distance transform. Then, caryoplastin in the nucleus is extracted based on Speeded Up Robust Features (SURF) and a K-nearest-neighbor (KNN) classifier is constructed to analyze the features. Experiments show that the sensitivity and specificity of this method reached 87.5% and 83.33%, respectively, which means that the detection accuracy of PHA is acceptable. The detection method should also be helpful for the automatic morphological classification of blood cells.
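
    The karyomorphism features described, a nucleus skeleton from which a width distribution is read off via a distance transform, followed by a k-nearest-neighbour classifier, can be sketched with SciPy, scikit-image and scikit-learn. This is a generic illustration of those building blocks on toy masks; it is not the paper's AFMM/SURF pipeline.

        import numpy as np
        from scipy import ndimage
        from skimage.morphology import skeletonize
        from sklearn.neighbors import KNeighborsClassifier

        # Sketch: width-distribution features from a nucleus mask (skeleton + distance
        # transform) fed to a KNN classifier; toy masks stand in for segmented nuclei.
        def width_features(nucleus_mask):
            skeleton = skeletonize(nucleus_mask)
            dist = ndimage.distance_transform_edt(nucleus_mask)
            widths = 2 * dist[skeleton]                 # local nucleus width along the skeleton
            if widths.size == 0:
                return [0.0, 0.0, 0.0]
            return [widths.mean(), widths.std(), widths.max()]

        masks, labels = [], []
        yy, xx = np.ogrid[:40, :40]
        for i in range(5):
            round_lobe = (yy - 20) ** 2 + (xx - 20) ** 2 <= (10 + i) ** 2   # round nucleus
            narrow_band = np.zeros((40, 40), dtype=bool)
            narrow_band[18:22 + i, 5:35] = True                             # elongated, narrow nucleus
            masks += [round_lobe, narrow_band]
            labels += [0, 1]                                                # 0 = normal-like, 1 = PHA-like

        knn = KNeighborsClassifier(n_neighbors=3).fit([width_features(m) for m in masks], labels)
        test = np.zeros((40, 40), dtype=bool); test[17:23, 4:36] = True
        print(knn.predict([width_features(test)]))      # expected: [1]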

  8. Comparison of manual methods of extracting genomic DNA from dried blood spots collected on different cards: implications for clinical practice.

    PubMed

    Molteni, C G; Terranova, L; Zampiero, A; Galeone, C; Principi, N; Esposito, S

    2013-01-01

    Isolating genomic DNA from blood samples is essential when studying the associations between genetic variants and susceptibility to a given clinical condition, or its severity. This study of three extraction techniques and two types of commercially available cards involved 219 children attending our outpatient pediatric clinic for follow-up laboratory tests after they had been hospitalised. An aliquot of venous blood was drawn into plastic tubes without additives and, after several inversions, 80 microL were put on circles of common paper cards and Whatman FTA-treated cards. Three extraction methods were compared: the Qiagen Investigator, Gensolve, and Masterpure. The best method in terms of final DNA yield was Masterpure, which led to a significantly higher yield regardless of the type of card (p < 0.001), followed by Qiagen Investigator and Gensolve. Masterpure was also the best in terms of price, seemed to be simple and reliable, and required less hands-on time than the other techniques. These conclusions support the use of Masterpure in studies that evaluate the associations between genetic variants and the severity or prevalence of infectious diseases.

  9. Enhancing Comparative Effectiveness Research With Automated Pediatric Pneumonia Detection in a Multi-Institutional Clinical Repository: A PHIS+ Pilot Study.

    PubMed

    Meystre, Stephane; Gouripeddi, Ramkiran; Tieder, Joel; Simmons, Jeffrey; Srivastava, Rajendu; Shah, Samir

    2017-05-15

    Community-acquired pneumonia is a leading cause of pediatric morbidity. Administrative data are often used to conduct comparative effectiveness research (CER) with sufficient sample sizes to enhance detection of important outcomes. However, such studies are prone to misclassification errors because of the variable accuracy of discharge diagnosis codes. The aim of this study was to develop an automated, scalable, and accurate method to determine the presence or absence of pneumonia in children using chest imaging reports. The multi-institutional PHIS+ clinical repository was developed to support pediatric CER by expanding an administrative database of children's hospitals with detailed clinical data. To develop a scalable approach to find patients with bacterial pneumonia more accurately, we developed a Natural Language Processing (NLP) application to extract relevant information from chest diagnostic imaging reports. Domain experts established a reference standard by manually annotating 282 reports to train and then test the NLP application. Findings of pleural effusion, pulmonary infiltrate, and pneumonia were automatically extracted from the reports and then used to automatically classify whether a report was consistent with bacterial pneumonia. Compared with the annotated diagnostic imaging reports reference standard, the most accurate implementation of machine learning algorithms in our NLP application extracted relevant findings with a sensitivity of .939 and a positive predictive value of .925. It classified reports with a sensitivity of .71, a positive predictive value of .86, and a specificity of .962. When compared with each of the domain experts manually annotating these reports, the NLP application achieved significantly higher sensitivity (.71 vs .527) and similar positive predictive value and specificity. NLP-based pneumonia information extraction of pediatric diagnostic imaging reports performed better than domain experts in this pilot study. NLP is an efficient method to extract information from a large collection of imaging reports to facilitate CER. ©Stephane Meystre, Ramkiran Gouripeddi, Joel Tieder, Jeffrey Simmons, Rajendu Srivastava, Samir Shah. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.05.2017.

  10. Shock Test Program. Shock Isolation Design Manual for SAFEGUARD TSE Systems and Equipment.

    DTIC Science & Technology

    1973-08-01

    under the direction of the Huntsville Division, Corps of Engineers. Much of the information in Part I and Part II is extracted from " Development of...49 3.2.2 Configuration Development 1-50 3.2.3 Suspension Arrangements 1-51 TABLE OF CONTENTS (Continued) Page 3.2.4 Platform Design 1-56 3.2.5 Elastic...shock isolation systems were considered in the design of the early NIKE-X facilities. As more descriptive prediction methods were developed and employed

  11. Research on Method of Interactive Segmentation Based on Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, H.; Han, Y.; Yu, F.

    2017-09-01

    In this paper, we aim to solve the object extraction problem in remote sensing images using interactive segmentation tools. First, an overview of interactive segmentation algorithms is given. Then, our detailed implementation of intelligent scissors and GrabCut for remote sensing images is described. Finally, several experiments on typical features (water areas, vegetation) in remote sensing images are performed. Compared with manual results, our tools maintain good feature boundaries and show good performance.
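
    The GrabCut variant described is also available in OpenCV, so the interactive step can be sketched as follows: the user supplies a rectangle around the object of interest and the algorithm iteratively refines the foreground/background labels. This is a generic OpenCV illustration, not the paper's implementation; the image path and rectangle are placeholders.

        import numpy as np
        import cv2

        # Sketch: rectangle-initialised GrabCut for object extraction from a remote
        # sensing image tile (the image path and rectangle are placeholders).
        image = cv2.imread("scene_tile.png")                 # BGR image tile
        mask = np.zeros(image.shape[:2], dtype=np.uint8)
        bgd_model = np.zeros((1, 65), dtype=np.float64)      # internal GMM state
        fgd_model = np.zeros((1, 65), dtype=np.float64)
        rect = (50, 50, 200, 150)                            # user-drawn box: x, y, width, height

        cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

        # Pixels labeled definite or probable foreground form the extracted object.
        foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
        cv2.imwrite("extracted_object.png", foreground)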

  12. Automatic signal extraction, prioritizing and filtering approaches in detecting post-marketing cardiovascular events associated with targeted cancer drugs from the FDA Adverse Event Reporting System (FAERS).

    PubMed

    Xu, Rong; Wang, Quanqiu

    2014-02-01

    Targeted drugs dramatically improve the treatment outcomes in cancer patients; however, these innovative drugs are often associated with unexpectedly high cardiovascular toxicity. Currently, cardiovascular safety represents both a challenging issue for drug developers, regulators, researchers, and clinicians and a concern for patients. While FDA drug labels have captured many of these events, spontaneous reporting systems are a main source for post-marketing drug safety surveillance in 'real-world' (outside of clinical trials) cancer patients. In this study, we present approaches to extracting, prioritizing, filtering, and confirming cardiovascular events associated with targeted cancer drugs from the FDA Adverse Event Reporting System (FAERS). The dataset includes records of 4,285,097 patients from FAERS. We first extracted drug-cardiovascular event (drug-CV) pairs from FAERS through named entity recognition and mapping processes. We then compared six ranking algorithms in prioritizing true positive signals among extracted pairs using known drug-CV pairs derived from FDA drug labels. We also developed three filtering algorithms to further improve precision. Finally, we manually validated extracted drug-CV pairs using 21 million published MEDLINE records. We extracted a total of 11,173 drug-CV pairs from FAERS. We showed that ranking by frequency is significantly more effective than by the five standard signal detection methods (246% improvement in precision for top-ranked pairs). The filtering algorithm we developed further improved overall precision by 91.3%. By manual curation using literature evidence, we show that about 51.9% of the 617 drug-CV pairs that appeared in both FAERS and MEDLINE sentences are true positives. In addition, 80.6% of these positive pairs have not been captured by FDA drug labeling. The unique drug-CV association dataset that we created based on FAERS could facilitate our understanding and prediction of cardiotoxic events associated with targeted cancer drugs. Copyright © 2013 Elsevier Inc. All rights reserved.
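
    A minimal sketch of the frequency-ranking step described above: count how often each extracted drug–cardiovascular event pair occurs across reports and rank candidate signals by that count. The input format and names are illustrative assumptions, not the authors' code.

      # Hypothetical illustration: rank extracted drug-CV pairs by report frequency.
      from collections import Counter

      def rank_pairs_by_frequency(pairs):
          """pairs: iterable of (drug, cardiovascular_event) tuples, one per FAERS report
          after named entity recognition and mapping."""
          counts = Counter(pairs)
          # A pair reported more often ranks higher as a candidate safety signal.
          return counts.most_common()

      demo = [("drugA", "QT prolongation"), ("drugA", "QT prolongation"),
              ("drugB", "hypertension"), ("drugA", "heart failure")]
      for (drug, event), n in rank_pairs_by_frequency(demo):
          print(drug, event, n)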

  13. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies

    PubMed Central

    Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A

    2017-01-01

    Background Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Objective Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods A clinical information extraction system IDEAL-X has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports—each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions IDEAL-X adopts a unique online machine learning–based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. PMID:28487265

  14. The Influence of Endmember Selection Method in Extracting Impervious Surface from Airborne Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wang, J.; Feng, B.

    2016-12-01

    Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection. The high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection; the second semi-automatic method is rather new and has so far been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangle shape of the HI scatter plot in the n-dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. Accuracy reports for the classification results were generated. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performs best in this study, demonstrating the value of V-I-S EM selection not only for moderate-spatial-resolution satellite images but also for increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and provide ISA maps for hydrology analysis.
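
    The classification step above relies on the spectral angle mapper, which assigns each pixel to the endmember whose spectrum subtends the smallest angle with the pixel spectrum. The sketch below illustrates that computation with placeholder arrays; it is not the study's implementation.

      import numpy as np

      def spectral_angle(pixel, endmember):
          """Angle (radians) between a pixel spectrum and an endmember spectrum."""
          cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      def sam_classify(cube, library):
          """cube: (rows, cols, bands) image; library: dict of name -> (bands,) spectrum.
          Returns the name of the minimum-angle endmember for every pixel."""
          names = list(library)
          spectra = [library[n] for n in names]
          flat = cube.reshape(-1, cube.shape[-1])
          angles = np.array([[spectral_angle(p, s) for s in spectra] for p in flat])
          return np.array(names)[angles.argmin(axis=1)].reshape(cube.shape[:2])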

  15. Maximized Inter-Class Weighted Mean for Fast and Accurate Mitosis Cells Detection in Breast Cancer Histopathology Images.

    PubMed

    Nateghi, Ramin; Danyali, Habibollah; Helfroush, Mohammad Sadegh

    2017-08-14

    Based on the Nottingham criteria, the number of mitosis cells in histopathological slides is an important factor in the diagnosis and grading of breast cancer. For manual grading of mitosis cells, histopathology slides of the tissue are examined by pathologists at 40× magnification for each patient. This task is very difficult and time-consuming even for experts. In this paper, a fully automated method is presented for accurate detection of mitosis cells in histopathology slide images. First, a maximum-likelihood-based method is employed for segmentation and extraction of mitosis cell candidates. Then a novel Maximized Inter-class Weighted Mean (MIWM) method is proposed to reduce the number of extracted non-mitosis candidates and thereby the false-positive mitosis detection rate. Finally, segmented candidates are classified into mitosis and non-mitosis classes by using a support vector machine (SVM) classifier. Experimental results demonstrate a significant improvement in the accuracy of mitosis cell detection in different grades of breast cancer histopathological images.

  16. Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.

    PubMed

    Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke

    2015-04-01

    Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring can adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames acquired with a steady-state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian-matrix-filtered images. Accuracy of the automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In this pilot study, LDDL based on the blurring kernel estimated from the sparse component outperformed the other kernel estimation approaches. LDDL consistently improved image contrast and fine-feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
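
    To illustrate the decomposition idea only: the sketch below splits a stack of dMRI frames into a low-rank part (slowly varying anatomy shared across frames) and a residual treated as the sparse part, using a truncated SVD as a crude stand-in. The paper's LDDL pipeline is more elaborate and additionally learns the blur kernel by dictionary learning, which is not reproduced here.

      import numpy as np

      def lowrank_sparse_split(frames, rank=3):
          """frames: (n_frames, h, w) dMRI sequence. Returns (low_rank, sparse)."""
          n, h, w = frames.shape
          X = frames.reshape(n, h * w)                  # one frame per row
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s_trunc = np.where(np.arange(s.size) < rank, s, 0.0)
          L = (U * s_trunc) @ Vt                        # low-rank component
          S = X - L                                     # residual, treated as sparse
          return L.reshape(n, h, w), S.reshape(n, h, w)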

  17. Searching Remotely Sensed Images for Meaningful Nested Gestalten

    NASA Astrophysics Data System (ADS)

    Michaelsen, E.; Muench, D.; Arens, M.

    2016-06-01

    Even non-expert human observers sometimes still outperform automatic extraction of man-made objects from remotely sensed data. We conjecture that some of this remarkable capability can be explained by Gestalt mechanisms. Gestalt algebra gives a mathematical structure capturing such part-aggregate relations and the laws to form an aggregate called Gestalt. Primitive Gestalten are obtained from an input image and the space of all possible Gestalt algebra terms is searched for well-assessed instances. This can be a very challenging combinatorial effort. The contribution at hand gives some tools and structures unfolding a finite and comparably small subset of the possible combinations. Yet, the intended Gestalten still are contained and found with high probability and moderate efforts. Experiments are made with images obtained from a virtual globe system, and use the SIFT method for extraction of the primitive Gestalten. Comparison is made with manually extracted ground-truth Gestalten salient to human observers.

  18. Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov.

    PubMed

    Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V; Xu, Hua

    2016-07-01

    Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis

    PubMed Central

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. Objective The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. Methods The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described – simple or complex; presentation order – which type of picture was described first; and age). In this study, the descriptions produced by 144 of the subjects studied in Toledo18, a study that included 200 healthy Brazilians of both genders, were used. Results and Conclusion A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods. PMID:29213908
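
    For readers unfamiliar with the classifier named above, the sketch below shows an RBF-kernel SVM evaluated with 10-fold cross-validation on placeholder feature vectors; the feature matrix and labels are random stand-ins, not the study's linguistic features.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(144, 12))     # stand-in for Coh-Metrix-Port/AIC features
      y = rng.integers(0, 2, size=144)   # stand-in for education-level classes

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      print(cross_val_score(clf, X, y, cv=10).mean())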

  20. Extracting three-dimensional orientation and tractography of myofibers using optical coherence tomography

    PubMed Central

    Gan, Yu; Fleming, Christine P.

    2013-01-01

    Abnormal changes in the orientation of myofibers are associated with various cardiac diseases such as arrhythmia, irregular contraction, and cardiomyopathy. To extract fiber information, we present a method for quantifying fiber orientation and reconstructing three-dimensional tractography of myofibers using optical coherence tomography (OCT). A gradient-based algorithm was developed to quantify fiber orientation in three dimensions, and a particle filtering technique was employed to track myofibers. Prior to image processing, three-dimensional image data sets were acquired from all cardiac chambers and the ventricular septum of swine hearts using an OCT system without optical clearing. The algorithm was validated through a rotation test and comparison with manual measurements. The experimental results demonstrate that we are able to visualize three-dimensional fiber tractography in myocardial tissue. PMID:24156071

  1. Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image

    NASA Astrophysics Data System (ADS)

    Demir, N.; Kaynarca, M.; Oy, S.

    2016-06-01

    Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically, so automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 × 10 m spatial resolution, and covers a 57 km² area in the southeast of Puerto Rico. Radiometric calibration is applied to reduce atmospheric and orbit errors, and a speckle filter is used to reduce the noise. The image is then terrain-corrected using the SRTM digital surface model. Classification of SAR images is a challenging task since SAR and optical sensors have very different properties; even between different bands of SAR sensors, the images look very different, so classification of SAR images is difficult with traditional unsupervised methods. In this study, a fuzzy approach has been applied to distinguish coastal pixels from land-surface pixels. The mean, median, and standard deviation values are calculated for use as parameters in the fuzzy approach. The mean-standard-deviation (MS) Large membership function is used because large numbers of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a: 0.58 and b: 0.05 to maximize the land-surface membership. The result is evaluated in two ways: against airborne LIDAR data, for the areas where the LIDAR dataset is available, and against a manually digitized coastline. Laser points below 0.5 m are classified as ocean points, and the 3D alpha-shapes algorithm is used to detect the coastline points from the LIDAR data. Minimum distances are calculated between the LIDAR coastline points and the extracted coastline; the mean is 5.82 m, the standard deviation is 5.83 m, and the median is 4.08 m. Secondly, the extracted coastline is evaluated against lines manually created on the SAR image. Both lines are converted to dense points with 1 m intervals, and the closest distances are calculated between the points from the extracted coastline and the manually created coastline; the mean is 5.23 m, the standard deviation is 4.52 m, and the median is 4.13 m. For both quality assessment approaches, the evaluation values are within the accuracy of the SAR data used.
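
    One common formulation of the mean–standard-deviation (MS) Large membership function mentioned above (for example, the MSLarge operator found in GIS fuzzy toolboxes) is sketched below with the parameter values quoted in the abstract; the study's exact implementation may differ, so treat the formula as an assumption.

      import numpy as np

      def ms_large(x, mean=23.0, std=12.0, a=0.58, b=0.05):
          """Membership approaches 1 for values well above a*mean (bright land pixels)."""
          x = np.asarray(x, dtype=float)
          mu = 1.0 - (b * std) / (x - a * mean + b * std)
          return np.where(x > a * mean, np.clip(mu, 0.0, 1.0), 0.0)

      # Low backscatter (water) maps to ~0, brighter land-surface values toward 1.
      print(ms_large([5, 15, 30, 60]))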

  2. [A semimicroquality evaluation method on Panax notoginseng and its application in analysis of continuous cropping obstacles research samples].

    PubMed

    Cao, Yi; Wang, Chao-Qun; Xu, Feng; Jia, Xiu-Hong; Liu, Guang-Xue; Yang, Sheng-Chao; Long, Guang-Qiang; Chen, Zhong-Jian; Wei, Fu-Zhou; Yang, Shao-Zhou; Fukuda, Kozo; Wang, Xuan; Cai, Shao-Qing

    2016-10-01

    Panax notoginseng is a commonly used traditional Chinese medicine with a blood-activating effect that suffers from continuous cropping obstacles during cultivation. In the present study, a semimicro extraction method using water-saturated n-butanol on 0.1 g notoginseng samples was established with good repeatability (RSD < 2.5%) and 9.6%-20.6% higher extraction efficiency for seven saponins than the conventional method. A total of 16 characteristic peaks were identified by LC-MS-IT-TOF, including eight 20(S)-protopanaxatriol (PPT) type saponins and eight 20(S)-protopanaxadiol (PPD) type saponins. The established method was used to evaluate the quality of notoginseng samples cultivated with manual intervention methods intended to overcome continuous cropping obstacles. As a result, HPLC fingerprint similarity, the content of notoginsenoside Fa, and the ratio of notoginsenoside K to notoginsenoside Fa (N-K/Fa) were found to be valuable markers of sample quality in continuous cropping obstacle research, of which N-K/Fa could also be applied to the analysis of notoginseng samples of different growth years. Notoginseng samples with continuous cropping obstacles had HPLC fingerprint similarity lower than 0.87, inconsistent with normal samples, and had significantly lower content of notoginsenoside Fa and significantly higher N-K/Fa (2.35-4.74) than the normal group (0.45-1.33). All samples in the first group with manual intervention showed high similarity with the normal group (>0.87) and similar content of common peaks and N-K/Fa (0.42-2.06). The content of notoginsenoside K in the second group with manual intervention was higher than in the normal group; all samples except two displayed similarity higher than 0.87 and had contents of the 16 saponins close to the normal group. The results showed that notoginseng samples with continuous cropping obstacles had lower quality than normal samples, and that manual intervention methods could improve their quality to different degrees. The method established in this study is simple, fast, and accurate, and the markers may provide new guidance for quality control in continuous cropping obstacle research on notoginseng. Copyright© by the Chinese Pharmaceutical Association.

  3. Pelvic artery calcification detection on CT scans using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Lu, Le; Yao, Jianhua; Bagheri, Mohammadhadi; Summers, Ronald M.

    2017-03-01

    Artery calcification is observed commonly in elderly patients, especially in patients with chronic kidney disease, and may affect coronary, carotid, and peripheral arteries. Vascular calcification has been associated with many clinical outcomes. Manual identification of calcification in CT scans requires substantial expert interaction, which makes it time-consuming and infeasible for large-scale studies. Many works have been proposed for coronary artery calcification detection in cardiac CT scans; in these works, coronary artery extraction is commonly required for calcification detection. However, there are few works on abdominal or pelvic artery calcification detection. In this work, we present a method for automatic pelvic artery calcification detection on CT scans. This method uses the recently introduced faster region-based convolutional neural network (Faster R-CNN) to directly identify artery calcification without the need for artery extraction, since pelvic artery extraction itself is challenging. Our method first generates category-independent region proposals for each slice of the input CT scan using region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and a bounding-box regressor. We applied the detection method to 500 images from 20 CT scans of patients for evaluation. The detection system achieved a 77.4% average precision and an 85% sensitivity at 1 false positive per image.

  4. Filtering large-scale event collections using a combination of supervised and unsupervised learning for event trigger classification.

    PubMed

    Mehryary, Farrokh; Kaewphan, Suwisa; Hakala, Kai; Ginter, Filip

    2016-01-01

    Biomedical event extraction is one of the key tasks in biomedical text mining, supporting various applications such as database curation and hypothesis generation. Several systems, some of which have been applied at a large scale, have been introduced to solve this task. Past studies have shown that the identification of the phrases describing biological processes, also known as trigger detection, is a crucial part of event extraction, and notable overall performance gains can be obtained by solely focusing on this sub-task. In this paper we propose a novel approach for filtering falsely identified triggers from large-scale event databases, thus improving the quality of knowledge extraction. Our method relies on state-of-the-art word embeddings, event statistics gathered from the whole biomedical literature, and both supervised and unsupervised machine learning techniques. We focus on EVEX, an event database covering the whole PubMed and PubMed Central Open Access literature and containing more than 40 million extracted events. The most frequent EVEX trigger words are hierarchically clustered, and the resulting cluster tree is pruned to identify words that can never act as triggers regardless of their context. For rarely occurring trigger words we introduce a supervised approach trained on the combination of trigger word classification produced by the unsupervised clustering method and manual annotation. The method is evaluated on the official test set of the BioNLP Shared Task on Event Extraction. The evaluation shows that the method can be used to improve the performance of state-of-the-art event extraction systems. This successful effort also translates into removing 1,338,075 potentially incorrect events from EVEX, thus greatly improving the quality of the data. The method is not bound solely to the EVEX resource and can thus be used to improve the quality of any event extraction system or database. The data and source code for this work are available at: http://bionlp-www.utu.fi/trigger-clustering/.

  5. Automatic concept extraction from spoken medical reports.

    PubMed

    Happe, André; Pouliquen, Bruno; Burgun, Anita; Cuggia, Marc; Le Beux, Pierre

    2003-07-01

    The objective of this project is to investigate methods whereby a combination of speech recognition and automated indexing methods substitute for current transcription and indexing practices. We based our study on existing speech recognition software programs and on NOMINDEX, a tool that extracts MeSH concepts from medical text in natural language and that is mainly based on a French medical lexicon and on the UMLS. For each document, the process consists of three steps: (1) dictation and digital audio recording, (2) speech recognition, (3) automatic indexing. The evaluation consisted of a comparison between the set of concepts extracted by NOMINDEX after the speech recognition phase and the set of keywords manually extracted from the initial document. The method was evaluated on a set of 28 patient discharge summaries extracted from the MENELAS corpus in French, corresponding to in-patients admitted for coronarography. The overall precision was 73% and the overall recall was 90%. Indexing errors were mainly due to word sense ambiguity and abbreviations. A specific issue was the fact that the standard French translation of MeSH terms lacks diacritics. A preliminary evaluation of speech recognition tools showed that the rate of accurate recognition was higher than 98%. Only 3% of the indexing errors were generated by inadequate speech recognition. We discuss several areas to focus on to improve this prototype. However, the very low rate of indexing errors due to speech recognition errors highlights the potential benefits of combining speech recognition techniques and automatic indexing.

  6. An automatic system to detect and extract texts in medical images for de-identification

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Singh, P. D.; Siddiqui, Khan; Gillam, Michael

    2010-03-01

    Recently, there has been an increasing need to share medical images for research purposes. In order to respect and preserve patient privacy, most medical images are de-identified by removing protected health information (PHI) before research sharing. Since manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary and helpful for removing text from medical images. Many papers have been written about text detection and extraction algorithms; however, little of this work has been applied to the de-identification of medical images. Since the de-identification system is designed for end-users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes, while keeping the anatomic structures intact. First, since the text has a marked contrast with the background, a region-variance-based algorithm is used to detect the text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype application of the text detection and extraction system has been implemented, which shows that our method can detect most of the text in the images. Experimental results validate that our method can detect and extract text in medical images with a 99% recall rate. Future research on this system includes algorithm improvement, performance evaluation, and computation optimization.
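
    A minimal sketch of the region-variance idea underlying the detection step: burned-in text over a relatively smooth background produces windows of high local intensity variance. Window size and threshold below are illustrative; the paper follows this with geometric constraints and a level-set extraction stage.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def high_variance_mask(img, win=15, thresh=200.0):
          """img: 2D array. Returns a boolean mask of high-variance windows."""
          img = img.astype(float)
          mean = uniform_filter(img, win)
          mean_sq = uniform_filter(img * img, win)
          local_var = mean_sq - mean * mean     # E[x^2] - (E[x])^2 per window
          return local_var > thresh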

  7. A multiscale decomposition approach to detect abnormal vasculature in the optic disc.

    PubMed

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-07-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated using 300 images achieving an AUC of 0.93 with maximum accuracy of 88%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP)—a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
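
    The PLSR mapping between feature spaces can be sketched with scikit-learn as below; the matrices are random placeholders standing in for the source- and target-language term features, so this only illustrates the mechanics, not the paper's pipeline.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      X_src = rng.normal(size=(200, 50))   # projected features of source-language terms
      Y_tgt = rng.normal(size=(200, 50))   # aligned features of their translations

      pls = PLSRegression(n_components=10)
      pls.fit(X_src, Y_tgt)

      # Map an unseen source-term vector into the target feature space; translation
      # candidates would then be ranked by similarity to this mapped vector.
      mapped = pls.predict(X_src[:1])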

  9. Ultrasound-guided fine-needle aspiration biopsy of clinically suspicious thyroid nodules with an automatic aspirator: a novel technique.

    PubMed

    Nagarajah, James; Sheu-Grabellus, Sien-Yi; Farahati, Jamshid; Kamruddin, Kamer A; Bockisch, Andreas; Schmid, Kurt Werner; Görges, Rainer

    2012-07-01

    Fine-needle aspiration biopsy (FNAB) is a simple technique for the investigation of suspicious thyroid nodules. However, low success rates are reported in the literature. The aim of this prospective study was to compare the clinical performance and impact of an automatic aspirator, referred to here as Aspirator 3, to those of the manual technique for the FNAB of clinically suspicious thyroid nodules. One hundred nine consecutive patients with 121 clinically suspicious thyroid nodules underwent biopsy twice at the same site, once with the clinically approved Aspirator 3 and once with the manual technique. The number of follicular cell formations and the total number of follicular cells in the aspirate were counted using the ThinPrep® method. With the Aspirator 3, the total number and the mean number of extracted cell formations were significantly higher than the values achieved with the manual technique (total: 3222 vs. 1951, p=0.02; mean: 27 vs. 16). The total number of cells that were biopsied was also higher when the Aspirator 3 was utilized (47,480 vs. 23,080, p=0.005). Overall, the Aspirator 3 was superior in 65 biopsies, and the manual technique was superior in 39 biopsies. In terms of cell formations and the total number of cells aspirated, the Aspirator 3 was superior to the manual technique. Further, the Aspirator 3 was more convenient to use and had greater precision in needle guidance.

  10. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. © 2016. Published by The Company of Biologists Ltd.

  11. Assessment of quantitative cortical biomarkers in the developing brain of preterm infants

    NASA Astrophysics Data System (ADS)

    Moeskops, Pim; Benders, Manon J. N. L.; Pearlman, Paul C.; Kersbergen, Karina J.; Leemans, Alexander; Viergever, Max A.; Išgum, Ivana

    2013-02-01

    The cerebral cortex rapidly develops its folding during the second and third trimester of pregnancy. In preterm birth, this growth might be disrupted and influence neurodevelopment. The aim of this work is to extract quantitative biomarkers describing the cortex and evaluate them on a set of preterm infants without brain pathology. For this study, a set of 19 preterm - but otherwise healthy - infants scanned coronally with 3T MRI at the postmenstrual age of 30 weeks were selected. In ten patients (test set), the gray and white matter were manually annotated by an expert on the T2-weighted scans. Manual segmentations were used to extract cortical volume, surface area, thickness, and curvature using voxel-based methods. To compute these biomarkers per region in every patient, a template brain image has been generated by iterative registration and averaging of the scans of the remaining nine patients. This template has been manually divided in eight regions, and is transformed to every test image using elastic registration. In the results, gray and white matter volumes and cortical surface area appear symmetric between hemispheres, but small regional differences are visible. Cortical thickness seems slightly higher in the right parietal lobe than in other regions. The parietal lobes exhibit a higher global curvature, indicating more complex folding compared to other regions. The proposed approach can potentially - together with an automatic segmentation algorithm - be applied as a tool to assist in early diagnosis of abnormalities and prediction of the development of the cognitive abilities of these children.

  12. Automated processing of whole blood samples for the determination of immunosuppressants by liquid chromatography tandem-mass spectrometry.

    PubMed

    Vogeser, Michael; Spöhrer, Ute

    2006-01-01

    Liquid chromatography tandem-mass spectrometry (LC-MS/MS) is an efficient technology for routine determination of immunosuppressants in whole blood; however, time-consuming manual sample preparation remains a significant limitation of this technique. Using a commercially available robotic pipetting system (Tecan Freedom EVO), we developed an automated sample-preparation protocol for quantification of tacrolimus in whole blood by LC-MS/MS. Barcode reading, sample resuspension, transfer of whole blood aliquots into a deep-well plate, addition of internal standard solution, mixing, and protein precipitation by addition of an organic solvent is performed by the robotic system. After centrifugation of the plate, the deproteinized supernatants are submitted to on-line solid phase extraction, using column switching prior to LC-MS/MS analysis. The only manual actions within the entire process are decapping of the tubes, and transfer of the deep-well plate from the robotic system to a centrifuge and finally to the HPLC autosampler. Whole blood pools were used to assess the reproducibility of the entire analytical system for measuring tacrolimus concentrations. A total coefficient of variation of 1.7% was found for the entire automated analytical process (n=40; mean tacrolimus concentration, 5.3 microg/L). Close agreement between tacrolimus results obtained after manual and automated sample preparation was observed. The analytical system described here, comprising automated protein precipitation, on-line solid phase extraction and LC-MS/MS analysis, is convenient and precise, and minimizes hands-on time and the risk of mistakes in the quantification of whole blood immunosuppressant concentrations compared to conventional methods.

  13. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regards to DNA yields for the tested robotic workstations proved to be excellent and not significantly different than that offered by the manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent or greater to those observed with phenol/chloroform extraction as well as our original validated automated magnetic bead percolation-based extraction process. Our data further supports the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hewitt, A.D.; Jenkins, T.F.

    An on-site method has been developed for estimating concentrations of TNT, RDX, 2,4-DNT, and the two most commonly encountered environmental transformation products of TNT, 2-amino-4,6-dinitrotoluene and 4-amino-2,6-dinitrotoluene, in soil and groundwater using gas chromatography and the nitrogen-phosphorus detector (NPD). Soil samples (20 g) are extracted by shaking with 20 mL of acetone, and extracts are filtered through a Millex SR (0.5-micrometer) filter. Groundwater samples (1 L) were passed through SDB-RPS extraction disks that were subsequently extracted with 5 mL of acetone. A 1-microliter volume of a soil or water extract is manually injected into a field-transportable gas chromatograph equipped with an NPD and a heated injection port. Separations are conducted on a Restek Crossbond 100% dimethyl polysiloxane column, 6 m x 0.53-mm i.d., 1.5 mm, using nitrogen carrier gas at 9.5 mL/min. Retention times range from 3.0 min for 2,4-dinitrotoluene (2,4-DNT) to 5.6 min for 2-amino-4,6-dinitrotoluene. Method detection limits were less than 0.16 mg/kg for soil and less than 1.0 microgram/L for groundwater. One of the major advantages of this method over currently available colorimetric and enzyme immunoassay on-site methods is the ability to quantify individual target analytes that often coexist in soils and groundwater contaminated with explosive residues. This method will be particularly useful at military antitank firing ranges where it is necessary to quantify residual concentrations of RDX in the presence of high concentrations of HMX, and when the transformation products of TNT need to be identified.

  15. Myocardial scar segmentation from magnetic resonance images using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zabihollahy, Fatemeh; White, James A.; Ukwatta, Eranga

    2018-02-01

    Accurate segmentation of myocardial fibrosis or scar may provide important advancements for the prediction and management of malignant ventricular arrhythmias in patients with cardiovascular disease. In this paper, we propose a semi-automated method for segmentation of myocardial scar from late gadolinium enhancement magnetic resonance images (LGE-MRI) using a convolutional neural network (CNN). In contrast to image intensity-based methods, CNN-based algorithms have the potential to improve the accuracy of scar segmentation through the creation of high-level features from a combination of convolutional, detection, and pooling layers. Our developed algorithm was trained using 2,336,703 image patches extracted from 420 slices of five 3D LGE-MR datasets, then validated on 2,204,178 patches from a testing dataset of seven 3D LGE-MR images comprising 624 slices, all obtained from patients with chronic myocardial infarction. For evaluation of the algorithm, we compared the algorithm-generated segmentations to manual delineations by experts. Our CNN-based method achieved an average Dice similarity coefficient (DSC), precision, and recall of 94.50 +/- 3.62%, 96.08 +/- 3.10%, and 93.96 +/- 3.75%, respectively. Compared with several intensity-threshold-based methods for scar segmentation, the results of our developed method have greater agreement with manual expert segmentation.
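
    The Dice similarity coefficient used for evaluation measures the overlap between the automatic and manual scar masks; a minimal sketch:

      import numpy as np

      def dice(pred, truth):
          """pred, truth: boolean masks of the same shape."""
          pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
          inter = np.logical_and(pred, truth).sum()
          denom = pred.sum() + truth.sum()
          return 2.0 * inter / denom if denom else 1.0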

  16. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    The use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain the information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and the segmentation performance suffers because the variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on the relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, with a specificity of 0.73, a sensitivity of 0.69, and an accuracy of 0.79, significantly better than the alternative methods.
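
    A coarse sketch of the relative-intensity idea: express each voxel relative to an estimate of normal-tissue intensity, re-estimate the normal region, and iterate. The simple threshold used to re-estimate normal tissue here is purely a placeholder; the paper couples the normalization with a supervised segmenter.

      import numpy as np

      def iterative_relative_normalize(img, n_iter=5, cutoff=1.5):
          """img: array of intensities for one MR channel of one patient."""
          img = np.asarray(img, dtype=float)
          normal_mask = np.ones(img.shape, bool)      # start: treat everything as normal
          for _ in range(n_iter):
              ref = np.median(img[normal_mask])       # reference (normal tissue) intensity
              rel = img / ref                         # relative intensity
              new_mask = rel < cutoff                 # crude re-estimate of normal tissue
              if not new_mask.any():
                  break
              normal_mask = new_mask
          return rel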

  17. Rapid detection of Salmonella in pet food: design and evaluation of integrated methods based on real-time PCR detection.

    PubMed

    Balachandran, Priya; Friberg, Maria; Vanlandingham, V; Kozak, K; Manolis, Amanda; Brevnov, Maxim; Crowley, Erin; Bird, Patrick; Goins, David; Furtado, Manohar R; Petrauskene, Olga V; Tebbs, Robert S; Charbonneau, Duane

    2012-02-01

    Reducing the risk of Salmonella contamination in pet food is critical for both companion animals and humans, and its importance is reflected by the substantial increase in the demand for pathogen testing. Accurate and rapid detection of foodborne pathogens improves food safety, protects the public health, and benefits food producers by assuring product quality while facilitating product release in a timely manner. Traditional culture-based methods for Salmonella screening are laborious and can take 5 to 7 days to obtain definitive results. In this study, we developed two methods for the detection of low levels of Salmonella in pet food using real-time PCR: (i) detection of Salmonella in 25 g of dried pet food in less than 14 h with an automated magnetic bead-based nucleic acid extraction method and (ii) detection of Salmonella in 375 g of composite dry pet food matrix in less than 24 h with a manual centrifugation-based nucleic acid preparation method. Both methods included a preclarification step using a novel protocol that removes food matrix-associated debris and PCR inhibitors and improves the sensitivity of detection. Validation studies revealed no significant differences between the two real-time PCR methods and the standard U.S. Food and Drug Administration Bacteriological Analytical Manual (chapter 5) culture confirmation method.

  18. Natural Language Processing Techniques for Extracting and Categorizing Finding Measurements in Narrative Radiology Reports.

    PubMed

    Sevenster, M; Buurman, J; Liu, P; Peters, J F; Chang, P J

    2015-01-01

    Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports and to categorize extracted measurements by their "temporality". The measurement extraction engine is developed as a set of regular expressions. The engine was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem. A ground truth was manually developed based on a corpus of radiology reports. A maximum entropy model was created using features that characterize the measurement itself and its narrative context. The model was evaluated in a ten-fold cross-validation protocol. The measurement extraction engine has precision 0.994 and recall 0.991. Accuracy of the measurement classification engine is 0.960. The work contributes to machine understanding of radiology reports and may find application in software that processes medical data.
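
    An illustrative regular expression for the measurement-extraction step, matching sizes such as "3.2 x 1.5 cm" or "9 mm" in narrative text; the engine described in the paper is a larger set of such patterns, so this is only a toy example.

      import re

      MEASUREMENT = re.compile(
          r"(?P<value>\d+(?:\.\d+)?(?:\s*[x×]\s*\d+(?:\.\d+)?)*)\s*(?P<unit>mm|cm)\b",
          re.IGNORECASE,
      )

      text = "There is a 3.2 x 1.5 cm nodule in the right lobe, previously 9 mm."
      for m in MEASUREMENT.finditer(text):
          print(m.group("value"), m.group("unit"))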

  19. NEFI: Network Extraction From Images

    PubMed Central

    Dirnberger, M.; Kehl, T.; Neumann, A.

    2015-01-01

    Networks are amongst the central building blocks of many systems. Given a graph of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to study various types of networks. In some applications, graph acquisition is relatively simple. However, for many networks data collection relies on images where graph extraction requires domain-specific solutions. Here we introduce NEFI, a tool that extracts graphs from images of networks originating in various domains. Regarding previous work on graph extraction, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners to easily extract graphs from images by combining basic tools from image processing, computer vision and graph theory. Thus, NEFI constitutes an alternative to tedious manual graph extraction and special purpose tools. We anticipate NEFI to enable time-efficient collection of large datasets. The analysis of these novel datasets may open up the possibility to gain new insights into the structure and function of various networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de. PMID:26521675

  20. Multichannel Convolutional Neural Network for Biological Relation Extraction.

    PubMed

    Quan, Chanqin; Hua, Lei; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical records demands researchers' attention. Previous theoretical and practical work has largely been restricted to traditional machine learning techniques. However, these methods are susceptible to the issues of "vocabulary gap" and data sparseness, and feature extraction cannot be fully automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering can be obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2% compared to the standard linear SVM-based system (67.0%) on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the Aimed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score.

  1. Comparison of manual and semi-automatic DNA extraction protocols for the barcoding characterization of hematophagous louse flies (Diptera: Hippoboscidae).

    PubMed

    Gutiérrez-López, Rafael; Martínez-de la Puente, Josué; Gangoso, Laura; Soriguer, Ramón C; Figuerola, Jordi

    2015-06-01

    The barcoding of life initiative provides a universal molecular tool to distinguish animal species based on the amplification and sequencing of a fragment of the subunit 1 of the cytochrome oxidase (COI) gene. Obtaining good quality DNA for barcoding purposes is a limiting factor, especially in studies conducted on small-sized samples or those requiring the maintenance of the organism as a voucher. In this study, we compared the number of positive amplifications and the quality of the sequences obtained using DNA extraction methods that also differ in their economic costs and time requirements and we applied them for the genetic characterization of louse flies. Four DNA extraction methods were studied: chloroform/isoamyl alcohol, HotShot procedure, Qiagen DNeasy(®) Tissue and Blood Kit and DNA Kit Maxwell(®) 16LEV. All the louse flies were morphologically identified as Ornithophila gestroi and a single COI-based haplotype was identified. The number of positive amplifications did not differ significantly among DNA extraction procedures. However, the quality of the sequences was significantly lower for the case of the chloroform/isoamyl alcohol procedure with respect to the rest of methods tested here. These results may be useful for the genetic characterization of louse flies, leaving most of the remaining insect as a voucher. © 2015 The Society for Vector Ecology.

  2. Integrated Shoreline Extraction Approach with Use of Rasat MS and SENTINEL-1A SAR Images

    NASA Astrophysics Data System (ADS)

    Demir, N.; Oy, S.; Erdem, F.; Şeker, D. Z.; Bayram, B.

    2017-09-01

    Shorelines are complex ecosystems and highly important socio-economic environments. They may change rapidly due to both natural and human-induced effects. Determination of movements along the shoreline and monitoring of the changes are essential for coastline management, modeling of sediment transportation, and decision support systems. Remote sensing provides an opportunity to obtain rapid, up-to-date, and reliable information for monitoring of the shoreline. In this study, approximately 120 km of the Antalya-Kemer shoreline, which is under threat from erosion, deposition, population growth, urbanization, and tourist hotels, has been selected as the study area. RASAT pansharpened and SENTINEL-1A SAR images have been used to implement the proposed shoreline extraction methods. The main motivation of this study is to combine the land/water-body segmentation results of both RASAT MS and SENTINEL-1A SAR images to improve the quality of the results. The initial land/water-body segmentation has been obtained from the RASAT image by means of the Random Forest classification method. This result has been used as a training data set to define fuzzy parameters for shoreline extraction from the SENTINEL-1A SAR image. The obtained results have been compared with the manually digitized shoreline. The accuracy assessment has been performed by calculating perpendicular distances between the reference data and the shoreline extracted by the proposed method. As a result, the mean difference has been calculated as approximately 1 pixel.
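
    The accuracy assessment described above can be sketched as a nearest-distance summary between densified shoreline points and the reference line; coordinates below are placeholders.

      import numpy as np
      from scipy.spatial import cKDTree

      def shoreline_distance_stats(extracted_pts, reference_pts):
          """Both inputs: (n, 2) arrays of map coordinates (e.g., points at 1 m spacing)."""
          tree = cKDTree(reference_pts)
          d, _ = tree.query(extracted_pts)      # distance to the nearest reference point
          return {"mean": d.mean(), "std": d.std(), "median": float(np.median(d))}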

  3. Adaptable, high recall, event extraction system with minimal configuration.

    PubMed

    Miwa, Makoto; Ananiadou, Sophia

    2015-01-01

    Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporate task-specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of the BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task-specific configuration and tuning, EventMine achieved 1st place in the PC task and 2nd place in the CG task, achieving the highest recall in both. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both the covariate shift and weighting methods are useful in facilitating the production of high-recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration.

  4. Dereplication of plant phenolics using a mass-spectrometry database independent method.

    PubMed

    Borges, Ricardo M; Taujale, Rahil; de Souza, Juliana Santana; de Andrade Bezerra, Thaís; Silva, Eder Lana E; Herzog, Ronny; Ponce, Francesca V; Wolfender, Jean-Luc; Edison, Arthur S

    2018-05-29

    Dereplication, an approach to sidestep the efforts involved in the isolation of known compounds, is generally accepted as the first stage of novel discoveries in natural product research. It is based on metabolite profiling analysis of complex natural extracts. The aim of this work was to present the application of LipidXplorer for automatic targeted dereplication of phenolics in plant crude extracts based on direct infusion high-resolution tandem mass spectrometry data. LipidXplorer uses a user-defined molecular fragmentation query language (MFQL) to search for specific characteristic fragmentation patterns in large data sets and highlight the corresponding metabolites. To this end, MFQL files were written to dereplicate common phenolics occurring in plant extracts. Complementary MFQL files were used for validation purposes. New MFQL files with molecular formula restrictions for common classes of phenolic natural products were generated for the metabolite profiling of different representative crude plant extracts. The method was evaluated against an open-source software package for mass-spectrometry data processing (MZMine®) and against manual annotation based on published data. The targeted LipidXplorer method, implemented using common phenolic fragmentation patterns, was found to annotate more phenolics than MZMine®, which is based on automated queries of the available databases. Additionally, screening for ascarosides, natural products structurally unrelated to plant phenolics and collected from the nematode Caenorhabditis elegans, demonstrated the specificity of this method by cross-testing both groups of chemicals in both plants and nematodes. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Parida, G.; Rajan, K. S.

    2017-05-01

    Current methods of object segmentation, extraction and classification from aerial LiDAR data are manual and tedious. This work proposes a technique for object segmentation from LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it depends on no data other than LiDAR. It is an unsupervised method of building segmentation, and thus requires no model training as in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.
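
    The localized surface-normal comparison mentioned above can be illustrated with a short sketch. This is not the authors' algorithm; it only shows, under assumed parameter values, how per-point normals can be estimated from local neighbourhoods and how near-vertical surfaces (wall candidates) can be kept, assuming numpy and scipy are available.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=20):
        """Estimate a unit normal for each 3D point from the PCA of its k nearest neighbours."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points, dtype=float)
        for i, nbrs in enumerate(idx):
            centred = points[nbrs] - points[nbrs].mean(axis=0)
            # The right-singular vector with the smallest singular value is the surface normal.
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            normals[i] = vt[-1]
        return normals

    def wall_candidates(points, k=20, max_vertical=0.2):
        """Keep points whose normals are nearly horizontal, i.e. likely wall (vertical surface) points."""
        normals = estimate_normals(points, k=k)
        return points[np.abs(normals[:, 2]) < max_vertical]
    ```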

  6. PATCH image processor user's manual

    NASA Technical Reports Server (NTRS)

    Nieves, M. J. (Principal Investigator)

    1980-01-01

    The patch image processor extracts patches of various sizes (32 x 32, 64 x 64, 128 x 128, and 256 x 256 pixels) from full-frame LANDSAT imagery data. From the extracted patches, a patch image mosaic is created in the format of the image processing system, IMDACS.
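
    As a simple illustration of the patch-extraction step (not the original PATCH processor, whose LANDSAT/IMDACS specifics are not reproduced here), the following sketch tiles a 2-D array into fixed-size patches using numpy.

    ```python
    import numpy as np

    def extract_patches(image, patch_size):
        """Split a 2D array into non-overlapping patch_size x patch_size tiles."""
        rows, cols = image.shape
        patches = []
        for r in range(0, rows - patch_size + 1, patch_size):
            for c in range(0, cols - patch_size + 1, patch_size):
                patches.append(image[r:r + patch_size, c:c + patch_size])
        return np.array(patches)

    # Example: 32 x 32 patches from a synthetic 256 x 256 frame.
    frame = np.arange(256 * 256).reshape(256, 256)
    print(extract_patches(frame, 32).shape)  # (64, 32, 32)
    ```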

  7. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis.

    PubMed

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to classify descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two natural language processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually, besides description time; type of scene described - simple or complex; presentation order - which type of picture was described first; and age). In this study, the descriptions by 144 of the subjects studied in Toledo [18] were used; that study included 200 healthy Brazilians of both genders. A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.
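
    The RBF-kernel SVM setup described above can be sketched in a few lines. This is a minimal scikit-learn example under the assumption that a feature matrix X (linguistic features) and labels y (education groups) are already available; it is not the study's actual pipeline.

    ```python
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def evaluate_rbf_svm(X, y, folds=5):
        """Cross-validate an RBF-kernel SVM on a feature matrix X and labels y."""
        # Standardize the linguistic features, then classify with an RBF-kernel SVM.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        scores = cross_val_score(model, X, y, cv=folds)
        return scores.mean(), scores.std()
    ```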

  8. Automated image analysis for quantitative fluorescence in situ hybridization with environmental samples.

    PubMed

    Zhou, Zhi; Pons, Marie Noëlle; Raskin, Lutgarde; Zilles, Julie L

    2007-05-01

    When fluorescence in situ hybridization (FISH) analyses are performed with complex environmental samples, difficulties related to the presence of microbial cell aggregates and nonuniform background fluorescence are often encountered. The objective of this study was to develop a robust and automated quantitative FISH method for complex environmental samples, such as manure and soil. The method and duration of sample dispersion were optimized to reduce the interference of cell aggregates. An automated image analysis program that detects cells from 4',6'-diamidino-2-phenylindole (DAPI) micrographs and extracts the maximum and mean fluorescence intensities for each cell from corresponding FISH images was developed with the software Visilog. Intensity thresholds were not consistent even for duplicate analyses, so alternative ways of classifying signals were investigated. In the resulting method, the intensity data were divided into clusters using fuzzy c-means clustering, and the resulting clusters were classified as target (positive) or nontarget (negative). A manual quality control confirmed this classification. With this method, 50.4, 72.1, and 64.9% of the cells in two swine manure samples and one soil sample, respectively, were positive as determined with a 16S rRNA-targeted bacterial probe (S-D-Bact-0338-a-A-18). Manual counting resulted in corresponding values of 52.3, 70.6, and 61.5%, respectively. In two swine manure samples and one soil sample 21.6, 12.3, and 2.5% of the cells were positive with an archaeal probe (S-D-Arch-0915-a-A-20), respectively. Manual counting resulted in corresponding values of 22.4, 14.0, and 2.9%, respectively. This automated method should facilitate quantitative analysis of FISH images for a variety of complex environmental samples.
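
    The fuzzy c-means step described above can be sketched as follows. This is a toy numpy implementation of standard fuzzy c-means, not the published Visilog-based pipeline; the cluster count, fuzzifier and convergence settings are illustrative assumptions.

    ```python
    import numpy as np

    def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
        """Cluster rows of X; return cluster centers and the fuzzy membership matrix."""
        X = np.asarray(X, dtype=float)
        rng = np.random.default_rng(seed)
        u = rng.random((len(X), n_clusters))
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
        for _ in range(max_iter):
            um = u ** m
            centers = (um.T @ X) / um.sum(axis=0)[:, None]
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            new_u = dist ** (-2.0 / (m - 1.0))     # closer centers get higher membership
            new_u /= new_u.sum(axis=1, keepdims=True)
            if np.abs(new_u - u).max() < tol:
                return centers, new_u
            u = new_u
        return centers, u
    ```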

  9. Extraction of pure components from overlapped signals in gas chromatography-mass spectrometry (GC-MS)

    PubMed Central

    Likić, Vladimir A

    2009-01-01

    Gas chromatography-mass spectrometry (GC-MS) is a widely used analytical technique for the identification and quantification of trace chemicals in complex mixtures. When complex samples are analyzed by GC-MS it is common to observe co-elution of two or more components, resulting in an overlap of signal peaks observed in the total ion chromatogram. In such situations manual signal analysis is often the most reliable means for the extraction of pure component signals; however, a systematic manual analysis over a number of samples is both tedious and prone to error. In the past 30 years a number of computational approaches were proposed to assist in the process of the extraction of pure signals from co-eluting GC-MS components. This includes empirical methods, comparison with library spectra, eigenvalue analysis, regression and others. However, to date no approach has been recognized as best, nor accepted as standard. This situation hampers general GC-MS capabilities, and in particular has implications for the development of robust, high-throughput GC-MS analytical protocols required in metabolic profiling and biomarker discovery. Here we first discuss the nature of GC-MS data, and then review some of the approaches proposed for the extraction of pure signals from co-eluting components. We summarize and classify different approaches to this problem, and examine why so many approaches proposed in the past have failed to live up to their full promise. Finally, we give some thoughts on the future developments in this field, and suggest that the progress in general computing capabilities attained in the past two decades has opened new horizons for tackling this important problem. PMID:19818154

  10. Control technology appendices for pollution control technical manuals. Final report, June 1982-February 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-04-01

    The document is one of six technical handbooks prepared by EPA to help government officials granting permits to build synfuels facilities, synfuels process developers, and other interested parties. They provide technical data on waste streams from synfuels facilities and technologies capable of controlling them. Process technologies covered in the manuals include coal gasification, coal liquefaction by direct and indirect processing, and the extraction of oil from shale. The manuals offer no regulatory guidance, allowing the industry flexibility in deciding how best to comply with environmental regulations.

  11. Automatic Quantification of Radiographic Wrist Joint Space Width of Patients With Rheumatoid Arthritis.

    PubMed

    Huo, Yinghe; Vincken, Koen L; van der Heijde, Desiree; de Hair, Maria J H; Lafeber, Floris P; Viergever, Max A

    2017-11-01

    Objective: Wrist joint space narrowing is a main radiographic outcome of rheumatoid arthritis (RA). Yet, automatic radiographic wrist joint space width (JSW) quantification for RA patients has not been widely investigated. The aim of this paper is to present an automatic method to quantify the JSW of three wrist joints that are least affected by bone overlapping and are frequently involved in RA. These joints are located around the scaphoid bone, viz. the multangular-navicular, capitate-navicular-lunate, and radiocarpal joints. Methods: The joint space around the scaphoid bone is detected by using consecutive searches of separate path segments, where each segment location aids in constraining the subsequent one. For joint margin delineation, first the boundary not affected by X-ray projection is extracted, followed by a backtrace process to obtain the actual joint margin. The accuracy of the quantified JSW is evaluated by comparison with the manually obtained ground truth. Results: Two of the 50 radiographs used for evaluation of the method did not yield a correct path through all three wrist joints. The delineated joint margins of the remaining 48 radiographs were used for JSW quantification. It was found that 90% of the joints had a JSW deviating less than 20% from the mean JSW of manual indications, with the mean JSW error less than 10%. Conclusion: The proposed method is able to automatically quantify the JSW of radiographic wrist joints reliably. The proposed method may aid clinical researchers to study the progression of wrist joint damage in RA studies.

  12. NMRNet: A deep learning approach to automated peak picking of protein NMR spectra.

    PubMed

    Klukowski, Piotr; Augoff, Michal; Zieba, Maciej; Drwal, Maciej; Gonczarek, Adam; Walczak, Michal J

    2018-03-14

    Automated selection of signals in protein NMR spectra, known as peak picking, has been studied for over 20 years; nevertheless, existing peak picking methods are still largely deficient. Accurate and precise automated peak picking would accelerate structure calculation and the analysis of dynamics and interactions of macromolecules. Recent advances in handling big data, together with an outburst of machine learning techniques, offer an opportunity to tackle the peak picking problem substantially faster than manual picking and on par with human accuracy. In particular, deep learning has proven to systematically achieve human-level performance in various recognition tasks, and thus emerges as an ideal tool to address automated identification of NMR signals. We have applied a convolutional neural network for visual analysis of multidimensional NMR spectra. A comprehensive test on 31 manually annotated spectra demonstrated top-tier average precision (AP) of 0.9596, 0.9058 and 0.8271 for backbone, side-chain and NOESY spectra, respectively. Furthermore, a combination of the extracted peak lists with the automated assignment routine FLYA outperformed other methods, including the manual one, and led to correct resonance assignment at levels of 90.40%, 89.90% and 90.20% for three benchmark proteins. The proposed model is part of the Dumpling software (a platform for protein NMR data analysis), and is available at https://dumpling.bio/. Contact: michaljerzywalczak@gmail.com, piotr.klukowski@pwr.edu.pl. Supplementary data are available at Bioinformatics online.

  13. Optimization of the reference region method for dual pharmacokinetic modeling using Gd-DTPA/MRI and (18)F-FDG/PET.

    PubMed

    Poulin, Éric; Lebel, Réjean; Croteau, Étienne; Blanchette, Marie; Tremblay, Luc; Lecomte, Roger; Bentourkia, M'hamed; Lepage, Martin

    2015-02-01

    The combination of MRI and positron emission tomography (PET) offers new possibilities for the development of novel methodologies. In pharmacokinetic image analysis, the blood concentration of the imaging compound as a function of time [i.e., the arterial input function (AIF)] is required for MRI and PET. In this study, we tested whether an AIF extracted from a reference region (RR) in MRI can be used as a surrogate for the manually sampled (18)F-FDG AIF for pharmacokinetic modeling. An MRI contrast agent, gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA), and a radiotracer, (18)F-fluorodeoxyglucose ((18)F-FDG), were simultaneously injected in a F98 glioblastoma rat model. A correction to the RR AIF for Gd-DTPA is proposed to adequately represent the manually sampled AIF. A previously published conversion method was applied to convert this AIF into an (18)F-FDG AIF. The tumor metabolic rate of glucose (TMRGlc) calculated with the manually sampled (18)F-FDG AIF, the (18)F-FDG AIF converted from the RR AIF, and the (18)F-FDG AIF converted from the corrected RR AIF were found not to be statistically different (P > 0.05). An AIF derived from an RR in MRI can be accurately converted into an (18)F-FDG AIF and used in PET pharmacokinetic modeling. © 2014 Wiley Periodicals, Inc.

  14. Extracting microRNA-gene relations from biomedical literature using distant supervision

    PubMed Central

    Clarke, Luka A.; Couto, Francisco M.

    2017-01-01

    Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because many databases and ontologies have been made available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which there was no training set developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can be successfully used to extract relations from literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel. PMID:28263989

  15. Extracting microRNA-gene relations from biomedical literature using distant supervision.

    PubMed

    Lamurias, Andre; Clarke, Luka A; Couto, Francisco M

    2017-01-01

    Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because many databases and ontologies have been made available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which there was no training set developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can be successfully used to extract relations from literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel.

  16. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features

    PubMed Central

    Bakas, Spyridon; Akbari, Hamed; Sotiras, Aristeidis; Bilello, Michel; Rozycki, Martin; Kirby, Justin S.; Freymann, John B.; Farahani, Keyvan; Davatzikos, Christos

    2017-01-01

    Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method. PMID:28872634

  17. Leaf epidermis images for robust identification of plants

    PubMed Central

    da Silva, Núbia Rosa; Oliveira, Marcos William da Silva; Filho, Humberto Antunes de Almeida; Pinheiro, Luiz Felipe Souza; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2016-01-01

    This paper proposes a methodology for plant analysis and identification based on extracting texture features from microscopic images of leaf epidermis. All the experiments were carried out using 32 plant species with 309 epidermal samples captured by an optical microscope coupled to a digital camera. The results of the computational methods using texture features were compared to the conventional approach, where quantitative measurements of stomatal traits (density, length and width) were manually obtained. Epidermis image classification using texture has achieved a success rate of over 96%, while success rate was around 60% for quantitative measurements taken manually. Furthermore, we verified the robustness of our method accounting for natural phenotypic plasticity of stomata, analysing samples from the same species grown in different environments. Texture methods were robust even when considering phenotypic plasticity of stomatal traits with a decrease of 20% in the success rate, as quantitative measurements proved to be fully sensitive with a decrease of 77%. Results from the comparison between the computational approach and the conventional quantitative measurements lead us to discover how computational systems are advantageous and promising in terms of solving problems related to Botany, such as species identification. PMID:27217018

  18. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    PubMed

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.

  19. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    PubMed

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was only average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
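
    To make the rank aggregation step concrete, the following toy sketch shows a plain Borda-count aggregation of several feature rankings; the feature names are invented for illustration and are not from the study.

    ```python
    def borda_aggregate(rankings):
        """Aggregate several feature rankings (each ordered best-to-worst) into one consensus ranking."""
        scores = {}
        for ranking in rankings:
            n = len(ranking)
            for position, feature in enumerate(ranking):
                # A feature earns more points the higher it is ranked by this method.
                scores[feature] = scores.get(feature, 0) + (n - position)
        return sorted(scores, key=scores.get, reverse=True)

    # Example: three hypothetical rankers ordering five invented EEG features.
    rankers = [
        ["spectral_entropy", "delta_power", "kurtosis", "zero_crossings", "hjorth_mobility"],
        ["delta_power", "spectral_entropy", "hjorth_mobility", "kurtosis", "zero_crossings"],
        ["spectral_entropy", "kurtosis", "delta_power", "hjorth_mobility", "zero_crossings"],
    ]
    print(borda_aggregate(rankers))
    ```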

  20. Automatic Detection and Recognition of Craters Based on the Spectral Features of Lunar Rocks and Minerals

    NASA Astrophysics Data System (ADS)

    Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.

    2017-07-01

    Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting. Many scholars use illumination gradient information to fit standard circles with the least-squares method. Although this approach has achieved good results, it is difficult to identify craters with poor "visibility" or complex structure and composition, and recognition accuracy is hard to improve because of multiple solutions and noise interference. To address this problem, we propose a method for the automatic extraction of impact craters based on the spectral characteristics of lunar rocks and minerals: 1) Under sunlit conditions, impact craters are extracted from MI data by condition matching, and the positions and diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron; therefore, incorrectly extracted impact craters can be removed by checking whether a crater contains "non-iron" material. 3) Correctly extracted craters are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained, the titanium distribution of the complex craters is matched against a normal distribution curve, the goodness of fit is calculated, and a threshold is set. The complex craters can thus be divided into two types: those with a normal titanium distribution curve and those without. We validated the proposed method with MI data acquired by SELENE. Experimental results demonstrate that the proposed method performs well in the test area.

  1. Automatic multiscale enhancement and segmentation of pulmonary vessels in CT pulmonary angiography images for CAD applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Chuan; Chan, H.-P.; Sahiner, Berkman

    2007-12-15

    The authors are developing a computerized pulmonary vessel segmentation method for a computer-aided pulmonary embolism (PE) detection system on computed tomographic pulmonary angiography (CTPA) images. Because PE only occurs inside pulmonary arteries, automatic and accurate segmentation of the pulmonary vessels in 3D CTPA images is an essential step for the PE CAD system. To segment the pulmonary vessels within the lung, the lung regions are first extracted using expectation-maximization (EM) analysis and morphological operations. The authors developed a 3D multiscale filtering technique to enhance the pulmonary vascular structures based on the analysis of eigenvalues of the Hessian matrix at multiple scales. A new response function of the filter was designed to enhance all vascular structures, including vessel bifurcations, and to suppress nonvessel structures such as the lymphoid tissues surrounding the vessels. An EM estimation is then used to segment the vascular structures by extracting the high-response voxels at each scale. The vessel tree is finally reconstructed by integrating the segmented vessels at all scales based on a "connected component" analysis. Two CTPA cases containing PEs were used to evaluate the performance of the system. One of these two cases also contained pleural effusion. Two experienced thoracic radiologists provided the gold standard of pulmonary vessels, including both arteries and veins, by manually tracking the arterial tree and marking the centers of the vessels using a computer graphical user interface. The accuracy of vessel tree segmentation was evaluated by the percentage of the gold standard vessel center points overlapping with the segmented vessels. The results show that 96.2% (2398/2494) and 96.3% (1910/1984) of the manually marked center points in the arteries overlapped with segmented vessels for the cases without and with other lung disease, respectively. For the manually marked center points in all vessels, including arteries and veins, the segmentation accuracies are 97.0% (4546/4689) and 93.8% (4439/4732) for the cases without and with other lung diseases, respectively. Because of the lack of ground truth for the vessels, in addition to the quantitative evaluation of the vessel segmentation performance, visual inspection was conducted to evaluate the segmentation. The results demonstrate that vessel segmentation using this method can extract the pulmonary vessels accurately and is not degraded by PE occlusion of the vessels in these test cases.
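
    The paper designs its own multiscale Hessian-based response function, which is not reproduced here. As a rough illustration of the underlying idea only, the sketch below applies the standard Frangi vesselness filter from scikit-image (assumed to be available and to accept a 3D array) at several scales.

    ```python
    from skimage.filters import frangi

    def enhance_vessels(volume, sigmas=(1, 2, 3, 4)):
        """Enhance bright tubular structures in a 3D CT volume over several scales."""
        # black_ridges=False enhances bright vessels on a darker background.
        return frangi(volume.astype(float), sigmas=sigmas, black_ridges=False)
    ```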

  2. Novel non-contact control system of electric bed for medical healthcare.

    PubMed

    Lo, Chi-Chun; Tsai, Shang-Ho; Lin, Bor-Shyh

    2017-03-01

    A novel non-contact controller for electric beds used in medical healthcare was proposed in this study. Electric beds are widely used in hospitals and home care, and the conventional way of controlling them involves manual operation. However, it is difficult for disabled and bedridden patients, who may depend totally on others, to operate conventional electric beds by themselves. Unlike current control methods, the proposed system introduces the concept of controlling the electric bed via visual stimuli, without manual operation. Disabled patients can operate the electric bed by focusing on the control icons of a visual stimulus tablet in the proposed system. In addition, a wearable and wireless EEG acquisition module was implemented to monitor the EEG signals of patients. The experimental results showed that the proposed system successfully measured and extracted the EEG features related to visual stimuli, and that disabled patients could operate the adjustable functions of the electric bed by themselves, effectively reducing the long-term care burden.

  3. Manual tracing versus smartphone application (app) tracing: a comparative study.

    PubMed

    Sayar, Gülşilay; Kilinc, Delal Dara

    2017-11-01

    This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone application (app) cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were obtained based on 21 landmarks. The durations of the two methods were also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that of the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, cephalometric tracing proceeded faster with the app method than with the manual method.

  4. Diffuse optical tomography using semiautomated coregistered ultrasound measurements

    NASA Astrophysics Data System (ADS)

    Mostafa, Atahar; Vavadi, Hamed; Uddin, K. M. Shihab; Zhu, Quing

    2017-12-01

    Diffuse optical tomography (DOT) has demonstrated huge potential in breast cancer diagnosis and treatment monitoring. DOT image reconstruction guided by ultrasound (US) improves the diffused light localization and lesion reconstruction accuracy. However, DOT reconstruction depends on tumor geometry provided by coregistered US. Experienced operators can manually measure these lesion parameters; however, training and measurement time are needed. The wide clinical use of this technique depends on its robustness and faster imaging reconstruction capability. This article introduces a semiautomated procedure that automatically extracts lesion information from US images and incorporates it into the optical reconstruction. An adaptive threshold-based image segmentation is used to obtain tumor boundaries. For some US images, posterior shadow can extend to the chest wall and make the detection of deeper lesion boundary difficult. This problem can be solved using a Hough transform. The proposed procedure was validated from data of 20 patients. Optical reconstruction results using the proposed procedure were compared with those reconstructed using extracted tumor information from an experienced user. Mean optical absorption obtained from manual measurement was 0.21±0.06 cm-1 for malignant and 0.12±0.06 cm-1 for benign cases, whereas for the proposed method it was 0.24±0.08 cm-1 and 0.12±0.05 cm-1, respectively.

  5. Automated processing of electronic medical records is a reliable method of determining aspirin use in populations at risk for cardiovascular events.

    PubMed

    Pakhomov, Serguei Vs; Shah, Nilay D; Hanson, Penny; Balasubramaniam, Saranya C; Smith, Steven A

    2010-01-01

    Low-dose aspirin reduces cardiovascular risk; however, monitoring over-the-counter medication use relies on the time-consuming and costly manual review of medical records. Our objective is to validate natural language processing (NLP) of the electronic medical record (EMR) for extracting medication exposure and contraindication information. The text of EMRs for 499 patients with type 2 diabetes was searched using NLP for evidence of aspirin use and its contraindications. The results were compared to a standardised manual records review. Of the 499 patients, 351 (70%) were using aspirin and 148 (30%) were not, according to manual review. NLP correctly identified 346 of the 351 aspirin-positive and 134 of the 148 aspirin-negative patients, indicating a sensitivity of 99% (95% CI 97-100) and specificity of 91% (95% CI 88-97). Of the 148 aspirin-negative patients, 66 (45%) had contraindications and 82 (55%) did not, according to manual review. NLP search for contraindications correctly identified 61 of the 66 patients with contraindications and 58 of the 82 patients without, yielding a sensitivity of 92% (95% CI 84-97) and a specificity of 71% (95% CI 60-80). NLP of the EMR is accurate in ascertaining documented aspirin use and could potentially be used for epidemiological research as a source of cardiovascular risk factor information.
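
    As a highly simplified illustration of the idea (far simpler than the validated NLP system described above), the sketch below scans note text for aspirin mentions while skipping sentences that contain simple negation cues; the term and negation lists are invented examples, not the study's lexicon.

    ```python
    import re

    ASPIRIN_TERMS = r"\b(aspirin|acetylsalicylic acid|asa)\b"
    NEGATION_CUES = r"\b(no|not|denies|without|discontinued|allergic to)\b"

    def mentions_aspirin_use(note_text):
        """Return True if any sentence mentions aspirin without a simple negation cue."""
        for sentence in re.split(r"[.\n]", note_text.lower()):
            if re.search(ASPIRIN_TERMS, sentence) and not re.search(NEGATION_CUES, sentence):
                return True
        return False

    print(mentions_aspirin_use("Patient continues aspirin 81 mg daily."))  # True
    print(mentions_aspirin_use("Patient denies aspirin use."))             # False
    ```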

  6. TES: A Text Extraction System.

    ERIC Educational Resources Information Center

    Goh, A.; Hui, S. C.

    1996-01-01

    Describes how TES, a text extraction system, is able to electronically retrieve a set of sentences from a document to form an indicative abstract. Discusses various text abstraction techniques and related work in the area, provides an overview of the TES system, and compares system results against manually produced abstracts. (LAM)

  7. [Comparison of techniques for coliform bacteria extraction from sediment of Xochimilco Lake, Mexico].

    PubMed

    Fernández-Rendón, Carlos L; Barrera-Escorcia, Guadalupe

    2013-01-01

    The need to separate bacteria from sediment in order to count them appropriately has led to testing the efficacy of different techniques. In this research, traditional techniques such as manual shaking, homogenization, ultrasonication, and surfactant treatment are compared. Moreover, the possibility of using a set of enzymes (pancreatin) and an antibiotic (ampicillin) for sediment coliform extraction is proposed. Samples were obtained from Xochimilco Lake in Mexico City. The most probable number of coliform bacteria was determined after applying the appropriate separation procedure. Most of the techniques tested led to numbers similar to those of the control (manual shaking). Only with the use of ampicillin was a greater total coliform concentration observed (Mann-Whitney, z = 2.09; p = 0.03). It is possible to propose the use of ampicillin as a technique for total coliform extraction; however, it is necessary to consider the sensitivity of bacteria to the antibiotic.

  8. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    PubMed Central

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940

  9. Use of 2D U-Net Convolutional Neural Networks for Automated Cartilage and Meniscus Segmentation of Knee MR Imaging Data to Determine Relaxometry and Morphometry.

    PubMed

    Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila

    2018-03-27

    Purpose To analyze how automatic segmentation translates in accuracy and precision to morphology and relaxometry compared with manual segmentation and increases the speed and accuracy of the work flow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as by the automatic segmentations' ability to quantify, in a longitudinally repeatable way, relaxometry and morphology. Results The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments, and 0.809 and 0.753 for the lateral meniscus and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterization and values that can be used in the monitoring and diagnosis of OA. © RSNA, 2018 Online supplemental material is available for this article.
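
    The Dice overlap used for evaluation above is straightforward to compute; a minimal numpy sketch, assuming two binary masks of identical shape, is shown below.

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice = 2|A intersect B| / (|A| + |B|) for two binary segmentation masks."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom
    ```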

  10. Analysis of 2D Phase Contrast MRI in Renal Arteries by Self Organizing Maps

    NASA Astrophysics Data System (ADS)

    Zöllner, Frank G.; Schad, Lothar R.

    We present an approach based on self-organizing maps to segment renal arteries from 2D PC Cine MR images in order to measure blood velocity and flow. Such information is important in grading renal artery stenosis and supports decisions on surgical interventions such as percutaneous transluminal angioplasty. Results show that the renal arteries could be extracted automatically. The corresponding velocity profiles show high correlation (r = 0.99) with those from manually delineated vessels. Furthermore, the method could detect possible blood flow patterns within the vessel.

  11. Sortal anaphora resolution to enhance relation extraction from biomedical literature.

    PubMed

    Kilicoglu, Halil; Rosemblat, Graciela; Fiszman, Marcelo; Rindflesch, Thomas C

    2016-04-14

    Entity coreference is common in biomedical literature and it can affect text understanding systems that rely on accurate identification of named entities, such as relation extraction and automatic summarization. Coreference resolution is a foundational yet challenging natural language processing task which, if performed successfully, is likely to enhance such systems significantly. In this paper, we propose a semantically oriented, rule-based method to resolve sortal anaphora, a specific type of coreference that forms the majority of coreference instances in biomedical literature. The method addresses all entity types and relies on linguistic components of SemRep, a broad-coverage biomedical relation extraction system. It has been incorporated into SemRep, extending its core semantic interpretation capability from sentence level to discourse level. We evaluated our sortal anaphora resolution method in several ways. The first evaluation specifically focused on sortal anaphora relations. Our methodology achieved an F1 score of 59.6 on the test portion of a manually annotated corpus of 320 Medline abstracts, a 4-fold improvement over the baseline method. Investigating the impact of sortal anaphora resolution on relation extraction, we found that the overall effect was positive, with 50% of the changes involving uninformative relations being replaced by more specific and informative ones, while 35% of the changes had no effect, and only 15% were negative. We estimate that anaphora resolution results in changes to about 1.5% of the approximately 82 million semantic relations extracted from the entire PubMed. Our results demonstrate that a heavily semantic approach to sortal anaphora resolution is largely effective for biomedical literature. Our evaluation and error analysis highlight some areas for further improvement, such as coordination processing and intra-sentential antecedent selection.

  12. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary to develop a tool to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  13. Automatic extraction of three-dimensional thoracic aorta geometric model from phase contrast MRI for morphometric and hemodynamic characterization.

    PubMed

    Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna

    2016-02-01

    To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on Level Set, was configured and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; a comparison of models obtained from the three different datasets was also carried out in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method was found effective on PCMRI data in providing a 3D geometric model of the TA to support morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.

  14. Extraction of Martian valley networks from digital topography

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Collier, M. L.

    2004-01-01

    We have developed a novel method for delineating valley networks on Mars. The valleys are inferred from digital topography by an autonomous computer algorithm as drainage networks, instead of being manually mapped from images. Individual drainage basins are precisely defined and reconstructed to restore flow continuity disrupted by craters. Drainage networks are extracted from their underlying basins using the contributing area threshold method. We demonstrate that such drainage networks coincide with mapped valley networks verifying that valley networks are indeed drainage systems. Our procedure is capable of delineating and analyzing valley networks with unparalleled speed and consistency. We have applied this method to 28 Noachian locations on Mars exhibiting prominent valley networks. All extracted networks have a planar morphology similar to that of terrestrial river networks. They are characterized by a drainage density of approximately 0.1/km, low in comparison to the drainage density of terrestrial river networks. Slopes of "streams" in Martian valley networks decrease downstream at a slower rate than slopes of streams in terrestrial river networks. This analysis, based on a sizable data set of valley networks, reveals that although valley networks have some features pointing to their origin by precipitation-fed runoff erosion, their quantitative characteristics suggest that precipitation intensity and/or longevity of past pluvial climate were inadequate to develop mature drainage basins on Mars.
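
    The contributing area threshold method mentioned above can be illustrated with a toy sketch (not the authors' pipeline): a D8 flow-accumulation pass over a small DEM, where cells whose upstream contributing area exceeds a threshold are retained as the drainage network. Pit filling and the crater-reconstruction step are omitted, and numpy is assumed.

    ```python
    import numpy as np

    def d8_flow_accumulation(dem):
        """Route each cell to its steepest-descent neighbour and accumulate contributing area."""
        rows, cols = dem.shape
        acc = np.ones(dem.shape, dtype=float)      # each cell contributes its own area
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        # Visit cells from highest to lowest so upstream area is ready when a cell is routed.
        flat_order = np.argsort(dem, axis=None)[::-1]
        for flat in flat_order:
            r, c = np.unravel_index(flat, dem.shape)
            best_drop, target = 0.0, None
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
            if target is not None:
                acc[target] += acc[r, c]
        return acc

    def extract_network(dem, area_threshold):
        """Cells whose contributing area exceeds the threshold form the drainage network."""
        return d8_flow_accumulation(dem) >= area_threshold
    ```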

  15. Thoracic cavity segmentation algorithm using multiorgan extraction and surface fitting in volumetric CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bae, JangPyo; Kim, Namkug, E-mail: namkugkim@gmail.com; Lee, Sang Min

    2014-04-15

    Purpose: To develop and validate a semiautomatic segmentation method for thoracic cavity volumetry and mediastinum fat quantification of patients with chronic obstructive pulmonary disease. Methods: The thoracic cavity region was separated by segmenting multiorgans, namely, the rib, lung, heart, and diaphragm. To encompass various lung disease-induced variations, the inner thoracic wall and diaphragm were modeled by using a three-dimensional surface-fitting method. To improve the accuracy of the diaphragm surface model, the heart and its surrounding tissue were segmented by a two-stage level set method using a shape prior. To assess the accuracy of the proposed algorithm, the algorithm results of 50 patients were compared to the manual segmentation results of two experts with more than 5 years of experience (these manual results were confirmed by an expert thoracic radiologist). The proposed method was also compared to three state-of-the-art segmentation methods. The metrics used to evaluate segmentation accuracy were volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), false negative ratio on VOR (FNRV), average symmetric absolute surface distance (ASASD), average symmetric squared surface distance (ASSSD), and maximum symmetric surface distance (MSSD). Results: In terms of thoracic cavity volumetry, the mean ± SD VOR, FPRV, and FNRV of the proposed method were (98.17 ± 0.84)%, (0.49 ± 0.23)%, and (1.34 ± 0.83)%, respectively. The ASASD, ASSSD, and MSSD for the thoracic wall were 0.28 ± 0.12, 1.28 ± 0.53, and 23.91 ± 7.64 mm, respectively. The ASASD, ASSSD, and MSSD for the diaphragm surface were 1.73 ± 0.91, 3.92 ± 1.68, and 27.80 ± 10.63 mm, respectively. The proposed method performed significantly better than the other three methods in terms of VOR, ASASD, and ASSSD. Conclusions: The proposed semiautomatic thoracic cavity segmentation method, which extracts multiple organs (namely, the rib, thoracic wall, diaphragm, and heart), performed with high accuracy and may be useful for clinical purposes.

  16. Comparison of QIAsymphony automated and QIAamp manual DNA extraction systems for measuring Epstein-Barr virus DNA load in whole blood using real-time PCR.

    PubMed

    Laus, Stella; Kingsley, Lawrence A; Green, Michael; Wadowsky, Robert M

    2011-11-01

    Automated and manual extraction systems have been used with real-time PCR for quantification of Epstein-Barr virus [human herpesvirus 4 (HHV-4)] DNA in whole blood, but few studies have evaluated relative performances. In the present study, the automated QIAsymphony and manual QIAamp extraction systems (Qiagen, Valencia, CA) were assessed using paired aliquots derived from clinical whole-blood specimens and an in-house, real-time PCR assay. The detection limits using the QIAsymphony and QIAamp systems were similar (270 and 560 copies/mL, respectively). For samples estimated as having ≥10,000 copies/mL, the intrarun and interrun variations were significantly lower using QIAsymphony (10.0% and 6.8%, respectively), compared with QIAamp (18.6% and 15.2%, respectively); for samples having ≤1000 copies/mL, the two variations ranged from 27.9% to 43.9% and were not significantly different between the two systems. Among 68 paired clinical samples, 48 pairs yielded viral loads ≥1000 copies/mL under both extraction systems. Although the logarithmic linear correlation from these positive samples was high (r(2) = 0.957), the values obtained using QIAsymphony were on average 0.2 log copies/mL higher than those obtained using QIAamp. Thus, the QIAsymphony and QIAamp systems provide similar EBV DNA load values in whole blood. Copyright © 2011 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  17. Multi-atlas propagation based left atrium segmentation coupled with super-voxel based pulmonary veins delineation in late gadolinium-enhanced cardiac MRI

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Zhuang, Xiahai; Khan, Habib; Haldar, Shouvik; Nyktari, Eva; Li, Lei; Ye, Xujiong; Slabaugh, Greg; Wong, Tom; Mohiaddin, Raad; Keegan, Jennifer; Firmin, David

    2017-02-01

    Late Gadolinium-Enhanced Cardiac MRI (LGE CMRI) is a non-invasive technique, which has shown promise in detecting native and post-ablation atrial scarring. To visualize the scarring, a precise segmentation of the left atrium (LA) and pulmonary veins (PVs) anatomy is performed as a first step—usually from an ECG gated CMRI roadmap acquisition—and the enhanced scar regions from the LGE CMRI images are superimposed. The anatomy of the LA and PVs in particular is highly variable and manual segmentation is labor intensive and highly subjective. In this paper, we developed a multi-atlas propagation based whole heart segmentation (WHS) to delineate the LA and PVs from ECG gated CMRI roadmap scans. While this captures the anatomy of the atrium well, the PVs anatomy is less easily visualized. The process is therefore augmented by semi-automated manual strokes for PVs identification in the registered LGE CMRI data. This allows us to extract more accurate anatomy than the fully automated WHS. Both qualitative visualization and quantitative assessment with respect to manual segmented ground truth showed that our method is efficient and effective with an overall mean Dice score of 0.91.

  18. The new PARIOTM device for determining continuous particle-size distributions of soils and sediments.

    NASA Astrophysics Data System (ADS)

    Miller, Alina; Pertassek, Thomas; Steins, Andreas; Durner, Wolfgang; Göttlein, Axel; Petrik, Wolfgang; von Unold, Georg

    2017-04-01

    The particle-size distribution (PSD) is a key property of soils. The reference method for determining the PSD is based on gravitational sedimentation of particles in an initially homogeneous suspension. Traditional methods measure manually (i) the uplift of a floating body in the suspension at different times (Hydrometer method) or (ii) the mass of solids in extracted suspension aliquots at predefined sampling depths and times (Pipette method). Both methods lead to a disturbance of the sedimentation process and provide only discrete data of the PSD. Durner et al. (2017) recently developed a new automated method to determine particle-size distributions of soils and sediments from gravitational sedimentation (Durner, W., S.C. Iden, and G. von Unold: The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830, 2017). The so-called integral suspension pressure (ISP) method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after the start, and thus no specialized training of the lab personnel, and avoids any disturbance of the sedimentation process. The required technology to perform these experiments was developed by the UMS company, Munich, and is now available as an instrument called PARIO, traded by the METER Group. In this poster, the basic functioning of PARIO is shown and key components and parameters of the technology are explained.
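
    The physical principle shared by the hydrometer, pipette, and ISP measurements is Stokes' law, which links settling velocity to particle diameter. The sketch below is not the ISP algorithm itself; it only computes, for an assumed measurement depth and a few sampling times, the largest particle diameter still suspended above that depth. All parameter values are illustrative assumptions.

```python
import numpy as np

# Stokes' law: v = (rho_s - rho_f) * g * d^2 / (18 * eta)
# A particle that has just settled past depth h at time t obeys v = h / t, so
#   d_max = sqrt(18 * eta * h / ((rho_s - rho_f) * g * t))
g = 9.81            # gravitational acceleration, m/s^2
rho_s = 2650.0      # particle density, kg/m^3 (quartz, assumed)
rho_f = 1000.0      # fluid density, kg/m^3 (water, assumed)
eta = 1.0e-3        # dynamic viscosity of water at 20 C, Pa*s
depth = 0.20        # measurement depth in the cylinder, m (assumed)

times_s = np.array([60, 600, 3600, 86400], dtype=float)
d_max_m = np.sqrt(18.0 * eta * depth / ((rho_s - rho_f) * g * times_s))
for t, d in zip(times_s, d_max_m):
    print(f"t = {t:8.0f} s -> largest suspended diameter ~ {d * 1e6:7.1f} um")
```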

  19. Drug drug interaction extraction from the literature using a recursive neural network

    PubMed Central

    Lim, Sangrak; Lee, Kyubum

    2018-01-01

    Detecting drug-drug interactions (DDI) is important because information on DDIs can help prevent adverse effects from drug combinations. Since there are many new DDI-related papers published in the biomedical domain, manually extracting DDI information from the literature is a laborious task. However, text mining can be used to find DDIs in the biomedical literature. Among the recently developed neural networks, we use a Recursive Neural Network to improve the performance of DDI extraction. Our recursive neural network model uses a position feature, a subtree containment feature, and an ensemble method to improve the performance of DDI extraction. Compared with the state-of-the-art models, the DDI detection and type classifiers of our model performed 4.4% and 2.8% better, respectively, on the DDIExtraction Challenge’13 test data. We also validated our model on the PK DDI corpus, which consists of two types of DDI data: in vivo DDIs and in vitro DDIs. Compared with the existing model, our detection classifier performed 2.3% and 6.7% better on in vivo and in vitro data, respectively. The results of our validation demonstrate that our model can automatically extract DDIs better than existing models. PMID:29373599

  20. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    PubMed

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance, epileptic seizure detection and prediction. This is because the diversity and the evolution of epileptic seizures make it very difficult to detect and identify the underlying disease. Fortunately, the determinism and nonlinearity in a time series can characterize the state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has been done on a quantitative DVV approach. Hence, the outcomes of quantitative DVV should be evaluated to detect epileptic seizures. The objective was to develop a new epileptic seizure detection method based on quantitative DVV. This new epileptic seizure detection method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. Then a multi-kernel strategy was proposed for the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity feature proved more sensitive than energy and entropy. An overall recognition accuracy of 87.5% and an overall forecasting accuracy of 75.0% were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection. Hence, the newly proposed method has importance for practical applications.

  1. SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Yang, K

    2016-06-15

    Purpose: RadShield, a semi-automated diagnostic shielding software package, shields computed tomography (CT) exam rooms more quickly and accurately than manual calculations. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also vendor-provided isodose curves overlaid onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
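
    The power-law fit described above, K(r) = a*r^b, can be reproduced with a standard least-squares routine; the sketch below uses scipy.optimize.curve_fit on synthetic radial data points and is not the RadShield code. Distances and kerma values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def kerma_model(r, a, b):
    """Generalized power law K(r) = a * r**b (b is typically negative)."""
    return a * np.power(r, b)

# Synthetic data points extracted along one radial direction of an isodose map.
r = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])                 # metres (assumed)
k = 120.0 * r**-2.1 * (1 + 0.03 * np.random.randn(r.size))   # kerma, toy values

(a_fit, b_fit), _ = curve_fit(kerma_model, r, k, p0=(100.0, -2.0))
print(f"K(r) ~ {a_fit:.1f} * r^{b_fit:.2f}")

# The fitted curve can then be evaluated at any barrier location, e.g. r = 2.5 m.
print("Predicted kerma at 2.5 m:", kerma_model(2.5, a_fit, b_fit))
```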

  2. SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Wu, D

    Purpose: RadShield, a semi-automated diagnostic shielding software package, shields computed tomography (CT) exam rooms more quickly and accurately than manual calculations. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms, calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also vendor-provided isodose curves overlaid onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board-certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.

  3. An observational analysis of provider adherence to AUA guidelines on the management of benign prostatic hyperplasia.

    PubMed

    Auffenberg, Gregory B; Gonzalez, Chris M; Wolf, J Stuart; Clemens, J Quentin; Meeks, William; McVary, Kevin T

    2014-11-01

    We retrospectively evaluated urologist adherence to the AUA guidelines on the management of new patients with benign prostatic hyperplasia related lower urinary tract symptoms in a large university urology group. All first time benign prostatic hyperplasia/lower urinary tract symptom visits to the urology clinic at the Northwestern Medical Faculty Foundation between January 1, 2008 and December 31, 2012 were evaluated using an institutionally managed electronic medical record data repository. Clinical documentation and orders from each encounter were assessed to determine the rate of performance of guideline measures. Approximately 1% of all results were manually reviewed in a validation process designed to determine the reliability of the electronic medical record based system. A total of 3,494 eligible encounters were evaluated in the final analysis. Provider adherence rates with the 9 measures recommended in the guidelines varied by measure from 53.0% to 92.8%. The rate of performance of 5 not routinely recommended measures was 10.2% or less. Post-void residual and urinary flow measurement were optional measures, and were performed on 68.1% and 4.6% of new encounters respectively. Manual validation revealed the electronic medical record data extraction was concordant with manual review in 96.7% of cases (95% CI 94.8-98.5). Using electronic medical record based data extraction techniques, we reliably document a baseline adherence rate with AUA guidelines on the management of benign prostatic hyperplasia. Establishing this benchmark will be important for future investigation into patient outcomes related to guideline adherence and into methods for improving provider adherence. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  4. The freetext matching algorithm: a computer program to extract diagnoses and causes of death from unstructured text in electronic health records

    PubMed Central

    2012-01-01

    Background Electronic health records are invaluable for medical research, but much information is stored as free text rather than in a coded form. For example, in the UK General Practice Research Database (GPRD), causes of death and test results are sometimes recorded only in free text. Free text can be difficult to use for research if it requires time-consuming manual review. Our aim was to develop an automated method for extracting coded information from free text in electronic patient records. Methods We reviewed the electronic patient records in GPRD of a random sample of 3310 patients who died in 2001, to identify the cause of death. We developed a computer program called the Freetext Matching Algorithm (FMA) to map diagnoses in text to the Read Clinical Terminology. The program uses lookup tables of synonyms and phrase patterns to identify diagnoses, dates and selected test results. We tested it on two random samples of free text from GPRD (1000 texts associated with death in 2001, and 1000 general texts from cases and controls in a coronary artery disease study), comparing the output to the U.S. National Library of Medicine’s MetaMap program and the gold standard of manual review. Results Among 3310 patients registered in the GPRD who died in 2001, the cause of death was recorded in coded form in 38.1% of patients, and in the free text alone in 19.4%. On the 1000 texts associated with death, FMA coded 683 of the 735 positive diagnoses, with precision (positive predictive value) 98.4% (95% confidence interval (CI) 97.2, 99.2) and recall (sensitivity) 92.9% (95% CI 90.8, 94.7). On the general sample, FMA detected 346 of the 447 positive diagnoses, with precision 91.5% (95% CI 88.3, 94.1) and recall 77.4% (95% CI 73.2, 81.2), which was similar to MetaMap. Conclusions We have developed an algorithm to extract coded information from free text in GP records with good precision. It may facilitate research using free text in electronic patient records, particularly for extracting the cause of death. PMID:22870911
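
    The reported recall can be checked directly from the counts in the abstract; the false-positive count needed for precision is not given, so precision is only noted as a formula. This is a quick arithmetic check, not part of the original study.

```python
# Recall (sensitivity) from the counts given in the abstract.
true_positives_found = 683
positives_in_gold_standard = 735
recall = true_positives_found / positives_in_gold_standard
print(f"recall = {recall:.3f}")   # ~0.929, matching the reported 92.9%

# Precision would be TP / (TP + FP); the FP count is not stated in the abstract,
# so it cannot be recomputed here.
```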

  5. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    PubMed

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high-level performance, specific physics-based models that describe defect generation and enable precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns through an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.

  6. Assessment of myocardial metabolic rate of glucose by means of Bayesian ICA and Markov Chain Monte Carlo methods in small animal PET imaging

    NASA Astrophysics Data System (ADS)

    Berradja, Khadidja; Boughanmi, Nabil

    2016-09-01

    In dynamic cardiac PET FDG studies the assessment of myocardial metabolic rate of glucose (MMRG) requires knowledge of the blood input function (IF). IF can be obtained by manual or automatic blood sampling and cross-calibrated with PET. These procedures are cumbersome, invasive and generate uncertainties. The IF is contaminated by spillover of radioactivity from the adjacent myocardium, and this can cause significant error in the estimated MMRG. In this study, we show that the IF can be extracted from the images in a rat heart study with 18F-fluorodeoxyglucose (18F-FDG) by means of Independent Component Analysis (ICA) based on Bayesian theory and a Markov Chain Monte Carlo (MCMC) sampling method (BICA). Images of the heart from rats were acquired with the Sherbrooke small animal PET scanner. A region of interest (ROI) was drawn around the rat image and decomposed into blood and tissue using BICA. The statistical study showed that there is a significant difference (p < 0.05) between MMRG obtained with IF extracted by BICA with respect to IF extracted from measured images corrupted with spillover.

  7. Classification of polycystic ovary based on ultrasound images using competitive neural network

    NASA Astrophysics Data System (ADS)

    Dewi, R. M.; Adiwijaya; Wisesty, U. N.; Jondri

    2018-03-01

    Infertility in the female reproductive system can result from inhibition of the follicle maturation process, which leads to an excess number of follicles in the ovaries, a condition called polycystic ovaries (PCO). PCO detection is still performed manually by a gynecologist, who counts the number and size of follicles in the ovaries, so it takes a long time and requires high accuracy. In general, PCO can be detected by stereological calculation or by feature extraction and classification. In this paper, we designed a system to classify PCO using feature extraction (the Gabor wavelet method) and a Competitive Neural Network (CNN). CNN was selected because it combines the Hamming Net and the MaxNet, so that data classification can be performed based on the specific characteristics of the ultrasound data. Based on the results of system testing, the Competitive Neural Network obtained its highest accuracy of 80.84% with a processing time of 60.64 seconds (when using 32 feature vectors and weight and bias values of 0.03 and 0.002, respectively).
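
    A minimal sketch of Gabor-wavelet feature extraction using OpenCV follows; it illustrates the kind of features such a pipeline produces, with hypothetical filter parameters and a random placeholder image. It does not reproduce the original system's filter bank or its Competitive Neural Network classifier.

```python
import cv2
import numpy as np

def gabor_features(image_gray: np.ndarray, ksize=21, sigma=4.0,
                   lambd=10.0, gamma=0.5, n_orientations=8):
    """Mean and std of Gabor filter responses over several orientations."""
    feats = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, 0, ktype=cv2.CV_32F)
        response = cv2.filter2D(image_gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.array(feats)

# Toy ultrasound-like image (random noise used as a placeholder).
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(gabor_features(img).shape)   # 16 features for 8 orientations
```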

  8. Automated solid-phase extraction-liquid chromatography-tandem mass spectrometry analysis of 6-acetylmorphine in human urine specimens: application for a high-throughput urine analysis laboratory.

    PubMed

    Robandt, P V; Bui, H M; Scancella, J M; Klette, K L

    2010-10-01

    An automated solid-phase extraction-liquid chromatography- tandem mass spectrometry (SPE-LC-MS-MS) method using the Spark Holland Symbiosis Pharma SPE-LC coupled to a Waters Quattro Micro MS-MS was developed for the analysis of 6-acetylmorphine (6-AM) in human urine specimens. The method was linear (R² = 0.9983) to 100 ng/mL, with no carryover at 200 ng/mL. Limits of quantification and detection were found to be 2 ng/mL. Interrun precision calculated as percent coefficient of variation (%CV) and evaluated by analyzing five specimens at 10 ng/mL over nine batches (n = 45) was 3.6%. Intrarun precision evaluated from 0 to 100 ng/mL ranged from 1.0 to 4.4%CV. Other opioids (codeine, morphine, oxycodone, oxymorphone, hydromorphone, hydrocodone, and norcodeine) did not interfere in the detection, quantification, or chromatography of 6-AM or the deuterated internal standard. The quantified values for 41 authentic human urine specimens previously found to contain 6-AM by a validated gas chromatography (GC)-MS method were compared to those obtained by the SPE-LC-MS-MS method. The SPE-LC-MS-MS procedure eliminates the human factors of specimen handling, extraction, and derivatization, thereby reducing labor costs and rework resulting from human error or technique issues. The time required for extraction and analysis was reduced by approximately 50% when compared to a validated 6-AM procedure using manual SPE and GC-MS analysis.

  9. Method Optimization for Extracting High-Quality RNA From the Human Pancreas Tissue.

    PubMed

    Jun, Eunsung; Oh, Juyun; Lee, Song; Jun, Hye-Ryeong; Seo, Eun Hye; Jang, Jin-Young; Kim, Song Cheol

    2018-06-01

    Nucleic acid sequencing is frequently used to determine the molecular basis of diseases. Therefore, proper storage of biological specimens is essential to inhibit nucleic acid degradation. RNA isolated from the human pancreas is generally of poor quality because of its high concentration of endogenous RNase. In this study, we optimized the method for extracting high-quality RNA from paired tumor and normal pancreatic tissues obtained from eight pancreatic cancer patients post-surgery. The RNA integrity number (RIN) was checked to evaluate the integrity of the RNA; we aimed to extract RNA with an RIN value of 8 or higher, which allows for the latest genetic analyses. The effect of several parameters, including the method used for tissue lysis, RNAlater treatment, tissue weight at storage, and the time to storage after surgical resection, on the quantity and quality of RNA extracted was examined. Data showed that the highest quantity of RNA was isolated using a combination of manual and mechanical methods of tissue lysis. Additionally, sectioning the tissues into small pieces (<100 mg) and treating them with RNAlater solution prior to storage increased RNA stability. Following these guidelines, high quality RNA was obtained from 100% (8/8) of tumor tissues and 75% (6/8) of normal tissues. High-quality RNA was still stable under repeated freezing and thawing. The application of these results during sample handling and storage in clinical settings will facilitate the genetic diagnosis of diseases and their subsequent treatment. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    NASA Astrophysics Data System (ADS)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for source localization methods used for the study of epilepsy in neonates using electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most commonly used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonates’ skull are ossified at birth but can move slightly relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. Thus it is important to enter the exact geometry of fontanels and flat bones into a source reconstruction because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull between the flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images by applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  11. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    PubMed

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error and two conformance and completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement therapy (CRRT), data were missing for one patient because the CRRT device was not connected to the CIS. Automatic generation of the minimum dataset and ICU quality indicators using ICU-DaMa is feasible. The discrepancies were identified and can be corrected by improving CIS ergonomics, training healthcare professionals in the culture of the quality of information, and using tools for detecting and correcting data errors. Copyright © 2018 Elsevier B.V. All rights reserved.
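
    The statistical comparison described (Wilcoxon signed-rank test for paired continuous variables, Fisher's exact test for categorical ones) maps directly onto SciPy; the sketch below uses made-up paired values and a made-up 2x2 contingency table, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon, fisher_exact

# Paired continuous variable (e.g. length of stay) extracted automatically vs manually.
automatic = np.array([3.1, 5.5, 2.2, 7.0, 4.4, 6.1, 1.5, 8.0])
manual    = np.array([3.0, 5.0, 2.0, 7.5, 4.5, 6.0, 1.6, 8.5])
stat, p_wilcoxon = wilcoxon(automatic, manual)
print(f"Wilcoxon signed-rank p-value: {p_wilcoxon:.3f}")

# Categorical variable (e.g. isolation yes/no) as a 2x2 contingency table:
# rows = extraction method, columns = category counts (hypothetical numbers).
table = np.array([[12, 137],
                  [13, 136]])
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher's exact p-value: {p_fisher:.3f}")
```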

  12. Intelligence Surveillance And Reconnaissance Full Motion Video Automatic Anomaly Detection Of Crowd Movements: System Requirements For Airborne Application

    DTIC Science & Technology

    The collection of Intelligence, Surveillance, and Reconnaissance (ISR) Full Motion Video (FMV) is growing at an exponential rate, and the manual... intelligence for the warfighter. This paper will address the question of how automatic pattern extraction, based on computer vision, can extract anomalies in

  13. Nautical Education for Offshore Extractive Industries. Support Operations & Seamanship.

    ERIC Educational Resources Information Center

    Hoffmann, G. L.

    This training manual is intended for persons who will be employed on supply vessels or towboats which support ocean-based oil extraction operations. The text deals with the basic skills of marine towing procedures, boat handling, deck maintenance, cargo operations, and rope and wire handling. Additional sections treat the proper attitude of a…

  14. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness, and computation speed adequate for use in a clinical environment.
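
    The core idea of separating physiologically distinct time-intensity components with ICA can be sketched with scikit-learn's FastICA; this is a simplified stand-in for illustration, not the paper's registration pipeline, and all array dimensions and data are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical perfusion series: 100 frames, each 64x64 pixels, reshaped so that
# every pixel becomes one observation of a 100-sample time course.
n_frames, ny, nx = 100, 64, 64
series = np.random.rand(n_frames, ny, nx)            # placeholder data
X = series.reshape(n_frames, ny * nx).T              # (pixels, frames)

# Extract a few independent components: the sources act as spatial maps and the
# mixing matrix columns give per-component time-intensity curves.
ica = FastICA(n_components=3, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(X)                  # (pixels, components)
time_curves = ica.mixing_                            # (frames, components)
print(spatial_maps.shape, time_curves.shape)

# A time-varying reference image could then be synthesised as a weighted sum of
# the spatial maps, with weights following the component time curves.
```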

  15. Time efficient 124I-PET volumetry in benign thyroid disorders by automatic isocontour procedures: mathematic adjustment using manual contoured measurements in low-dose CT.

    PubMed

    Freesmeyer, Martin; Kühnel, Christian; Westphal, Julian G

    2015-01-01

    Benign thyroid diseases are common in Western societies. However, the volumetry of the thyroid gland, especially when enlarged or abnormally formed, proves to be a challenge in clinical routine. The aim of this study was to develop a simple and rapid threshold-based isocontour extraction method for thyroid volumetry from (124)I-PET/CT data in patients scheduled for radioactive iodine therapy. PET/CT data from 45 patients were analysed 30 h after 1 MBq (124)I administration. The anatomical reference volume was calculated using manually contoured data from low-dose CT images of the neck (MC). In addition, we applied an automatic isocontour extraction method (IC0.2/1.0), with two different threshold values (0.2 and 1.0 kBq/ml), for volumetry of the PET dataset. IC0.2/1.0 shape data that showed significant variation from MC data were excluded. Subsequently, a mathematical correlation between IC0.2/1.0 and MC was established using a linear regression model with multiple variables and step-wise elimination (mIC0.2/1.0). Data from 41 patients (IC0.2) and 32 patients (IC1.0) were analysed. The mathematically calculated volume, mIC, showed a median deviation from the reference (MC) of ±9% (1-54%) for mIC0.2 and of ±8.2% (1-50%) for mIC1.0. Contour extraction with both mIC1.0 and mIC0.2 gave rapid and reliable results. However, mIC0.2 can be applied to significantly more patients (>90%) and is, therefore, deemed to be more suitable for clinical routine, keeping in mind the potential advantages of using (124)I-PET/CT for the preparation of patients scheduled for radioactive iodine therapy.

  16. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs, and to assess the accuracy of the extracted volumetric parameters of single trees against manual measurements using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one that subtracts a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by a simple subtraction is full of empty pixels (called "pits") that strongly affect subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted and tree crown shapes become more complete in the CHM data after the pit-free process.
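
    The CHM step described (CHM = DSM minus DEM) and a simple fixed-window variant of local-maximum tree-top detection can be sketched with NumPy and SciPy; this is not the MMAC or VWF implementation from the paper, and the window size and height threshold are assumed parameters on synthetic rasters.

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Hypothetical rasters on the same grid: digital surface and terrain models, in metres.
dsm = np.random.rand(200, 200) * 30.0
dem = np.random.rand(200, 200) * 2.0

# Canopy height model: surface minus terrain, clipped at zero.
chm = np.clip(dsm - dem, 0.0, None)

# Tree tops as local maxima of the CHM above a minimum height, using a fixed
# window here (the VWF method instead varies the window with the local height).
window = 5                      # pixels, assumed
min_tree_height = 2.0           # metres, assumed
local_max = maximum_filter(chm, size=window) == chm
tree_tops = np.argwhere(local_max & (chm > min_tree_height))
print(f"Detected {len(tree_tops)} candidate tree tops")
```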

  17. Early pest detection in soy plantations from hyperspectral measurements: a case study for caterpillar detection

    NASA Astrophysics Data System (ADS)

    Tailanián, Matías; Castiglioni, Enrique; Musé, Pablo; Fernández Flores, Germán.; Lema, Gabriel; Mastrángelo, Pedro; Almansa, Mónica; Fernández Liñares, Ignacio; Fernández Liñares, Germán.

    2015-10-01

    Soybean producers suffer from caterpillar damage in many areas of the world. Estimated average economic losses are 500 million USD annually in Brazil, Argentina, Paraguay, and Uruguay. Designing efficient pest control management using selective and targeted pesticide applications is extremely important from both economic and environmental perspectives. With that in mind, we conducted a research program during the 2013-2014 and 2014-2015 planting seasons on a 4,000 ha soybean farm, seeking to achieve early pest detection. Nowadays pest presence is evaluated using manual, labor-intensive counting methods based on sampling strategies which are time-consuming and imprecise. The experiment was conducted as follows. Using manual counting methods as ground-truth, a spectrometer capturing reflectance from 400 to 1100 nm was used to measure the reflectance of soy plants. A first conclusion, resulting from measuring the spectral response at the leaf level, was that stress is a property of whole plants, since different leaves with different levels of damage yielded the same spectral response. Then, to assess the applicability of classifying plants as healthy, biotic-stressed, or abiotic-stressed, feature extraction and selection from leaf spectral signatures, combined with a Support Vector Machine (SVM) classifier, was designed. Optimization of SVM parameters using grid search with cross-validation, along with classification evaluation by ten-fold cross-validation, showed a correct classification rate of 95%, consistently across both seasons. Controlled experiments using cages with different numbers of caterpillars (including caterpillar-free plants) were also conducted to evaluate consistency in trends of the spectral response as well as the extracted features.
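
    The training scheme described (grid search over SVM parameters with internal cross-validation, then ten-fold evaluation) maps directly onto scikit-learn; a minimal sketch with placeholder spectral features follows. The feature count, parameter grid, and class labels are all assumptions, not the study's settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder dataset: 300 leaf spectra with 50 extracted features each,
# labelled 0 = healthy, 1 = biotic stress, 2 = abiotic stress.
rng = np.random.default_rng(0)
X = rng.random((300, 50))
y = rng.integers(0, 3, size=300)

# Grid search over RBF-SVM hyper-parameters with internal cross-validation.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)

# Ten-fold cross-validation of the tuned model, as in the study design.
scores = cross_val_score(search.best_estimator_, X, y, cv=10)
print("Best parameters:", search.best_params_)
print(f"10-fold accuracy: {scores.mean():.2%}")
```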

  18. Automated renal histopathology: digital extraction and quantification of renal pathology

    NASA Astrophysics Data System (ADS)

    Sarder, Pinaki; Ginley, Brandon; Tomaszewski, John E.

    2016-03-01

    The branch of pathology concerned with excess blood serum proteins being excreted in the urine pays particular attention to the glomerulus, a small intertwined bunch of capillaries located at the beginning of the nephron. Normal glomeruli allow a moderate amount of blood proteins to be filtered; proteinuric glomeruli allow a large amount of blood proteins to be filtered. Diagnosis of proteinuric diseases requires time-intensive manual examination of the structural compartments of the glomerulus from renal biopsies. Pathological examination includes cellularity of individual compartments, Bowman's and luminal space segmentation, cellular morphology, glomerular volume, capillary morphology, and more. Long examination times may lead to increased diagnosis time and/or lead to reduced precision of the diagnostic process. Automatic quantification holds strong potential to reduce renal diagnostic time. We have developed a computational pipeline capable of automatically segmenting relevant features from renal biopsies. Our method first segments glomerular compartments from renal biopsies by isolating regions with high nuclear density. Gabor texture segmentation is used to accurately define glomerular boundaries. Bowman's and luminal spaces are segmented using morphological operators. Nuclei structures are segmented using color deconvolution, morphological processing, and bottleneck detection. The average computation time of feature extraction for a typical biopsy, comprising ~12 glomeruli, is ~69 s using an Intel(R) Core(TM) i7-4790 CPU, which is ~65X faster than manual processing. Using images from rat renal tissue samples, automatic glomerular structural feature estimation was reproducibly demonstrated for 15 biopsy images, which contained 148 individual glomeruli images. The proposed method holds immense potential to enhance information available while making clinical diagnoses.

  19. Automated feature extraction for retinal vascular biometry in zebrafish using OCT angiography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; Rao, Gopikrishna M.; Desai, Vineet; Tao, Yuankai K.

    2017-02-01

    Zebrafish have been identified as an ideal model for angiogenesis because of anatomical and functional similarities with other vertebrates. The scale and complexity of zebrafish assays are limited by the need to manually treat and serially screen animals, and recent technological advances have focused on automation and improving throughput. Here, we use optical coherence tomography (OCT) and OCT angiography (OCT-A) to perform noninvasive, in vivo imaging of retinal vasculature in zebrafish. OCT-A summed voxel projections were low pass filtered and skeletonized to create an en face vascular map prior to connectivity analysis. Vascular segmentation was referenced to the optic nerve head (ONH), which was identified by automatically segmenting the retinal pigment epithelium boundary on the OCT structural volume. The first vessel branch generation was identified as skeleton segments with branch points closest to the ONH, and subsequent generations were found iteratively by expanding the search space outwards from the ONH. Biometric parameters, including length, curvature, and branch angle of each vessel segment were calculated and grouped by branch generation. Despite manual handling and alignment of each animal over multiple time points, we observe distinct qualitative patterns that enable unique identification of each eye from individual animals. We believe this OCT-based retinal biometry method can be applied for automated animal identification and handling in high-throughput organism-level pharmacological assays and genetic screens. In addition, these extracted features may enable high-resolution quantification of longitudinal vascular changes as a method for studying zebrafish models of retinal neovascularization and vascular remodeling.

  20. Gas chromatography with pulsed flame photometric detection multiresidue method for organophosphate pesticide and metabolite residues at the parts-per-billion level in representative commodities of fruit and vegetable crop groups.

    PubMed

    Podhorniak, L V; Negron, J F; Griffith, F D

    2001-01-01

    A gas chromatographic method with a pulsed flame photometric detector (P-FPD) is presented for the analysis of 28 parent organophosphate (OP) pesticides and their OP metabolites. A total of 57 organophosphates were analyzed in 10 representative fruit and vegetable crop groups. The method is based on a judicious selection of known procedures from FDA sources such as the Pesticide Analytical Manual and Laboratory Information Bulletins, combined in a manner to recover the OPs and their metabolite(s) at the part-per-billion (ppb) level. The method uses an acetone extraction with either miniaturized Hydromatrix column partitioning or alternately a miniaturized methylene dichloride liquid-liquid partitioning, followed by solid-phase extraction (SPE) cleanup with graphitized carbon black (GCB) and PSA cartridges. Determination of residues is by programmed temperature capillary column gas chromatography fitted with a P-FPD set in the phosphorus mode. The method is designed so that a set of samples can be prepared in 1 working day for overnight instrumental analysis. The recovery data indicates that a daily column-cutting procedure used in combination with the SPE extract cleanup effectively reduces matrix enhancement at the ppb level for many organophosphates. The OPs most susceptible to elevated recoveries around or greater than 150%, based on peak area calculations, were trichlorfon, phosmet, and the metabolites of dimethoate, fenamiphos, fenthion, and phorate.

  1. User-customized brain computer interfaces using Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali

    2016-04-01

    Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
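
    A minimal sketch of Bayesian hyper-parameter optimization in this spirit uses scikit-optimize's gp_minimize over a cross-validated objective. The search space, feature selection stand-in, classifier, and data below are placeholders, not the authors' BCI pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer, Real

rng = np.random.default_rng(0)
X_raw = rng.random((120, 64))        # placeholder trial-by-feature matrix
y = rng.integers(0, 2, size=120)     # two motor-imagery classes (toy labels)

def objective(params):
    """Cross-validated error for one hyper-parameter setting."""
    n_features, shrinkage = params
    X = X_raw[:, :n_features]        # crude stand-in for band/channel/interval selection
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=shrinkage)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    return 1.0 - acc                 # gp_minimize minimises, so return error

space = [Integer(5, 64, name="n_features"),
         Real(1e-3, 1.0, prior="log-uniform", name="shrinkage")]
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("Best hyper-parameters:", result.x, "error:", round(result.fun, 3))
```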

  2. DEXTER: Disease-Expression Relation Extraction from Text.

    PubMed

    Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K

    2018-01-01

    Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, and diagnostics and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies, such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that not only have been captured from large-scale studies but have also been observed in thousands of small-scale studies. Expression information obtained from literature through manual curation can extend expression databases. While many of the existing databases include information from literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags behind significantly compared to expression information obtained from large-scale studies and can benefit from our text-mined results. We have conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81% for the two evaluations, respectively. Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract differential expression information for 2024 genes in lung cancer, 115 glycosyltransferases in 62 cancers and 826 microRNAs in 171 cancers. All extractions using DEXTER are integrated in the literature-based portion of BioXpress. Database URL: http://biotm.cis.udel.edu/DEXTER.

  3. Proteome and phosphoproteome analysis of honeybee (Apis mellifera) venom collected from electrical stimulation and manual extraction of the venom gland

    PubMed Central

    2013-01-01

    Background Honeybee venom is a complicated defensive toxin that has a wide range of pharmacologically active compounds. Some of these compounds are useful for human therapeutics. There are two major forms of honeybee venom used in pharmacological applications: manually (or reservoir disrupting) extracted glandular venom (GV), and venom extracted through the use of electrical stimulation (ESV). A proteome comparison of these two venom forms and an understanding of the phosphorylation status of ESV, are still very limited. Here, the proteomes of GV and ESV were compared using both gel-based and gel-free proteomics approaches and the phosphoproteome of ESV was determined through the use of TiO2 enrichment. Results Of the 43 proteins identified in GV, < 40% were venom toxins, and > 60% of the proteins were non-toxic proteins resulting from contamination by gland tissue damage during extraction and bee death. Of the 17 proteins identified in ESV, 14 proteins (>80%) were toxic venom proteins and most of them were found in higher abundance than in GV. Moreover, two novel proteins (dehydrogenase/reductase SDR family member 11-like and histone H2B.3-like) and three novel phosphorylation sites (icarapin (S43), phospholipase A-2 (T145), and apamin (T23)) were identified. Conclusions Our data demonstrate that venom extracted manually is different from venom extracted by electrical stimulation, and these differences may be important in their use as pharmacological agents. ESV may be more efficient than GV as a potential pharmacological source because of its higher venom protein content and production efficiency, and because it does not require killing honeybees. The three newly identified phosphorylated venom proteins in ESV may elicit a different immune response through the specific recognition of antigenic determinants. The two novel venom proteins extend our proteome coverage of honeybee venom. PMID:24199871

  4. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    PubMed Central

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manual method the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed by using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-test. Results Comparison between manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased and, the mandible and lower lip were found in a less posterior position than that of the actual profiles. Comparison between computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Conclusions Cephalometric simulation of post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods implied. However, both manual and computerized prediction methods remain a useful tool for patient communication. PMID:19212468

  5. ALE: automated label extraction from GEO metadata.

    PubMed

    Giles, Cory B; Brown, Chase A; Ripperger, Michael; Dennis, Zane; Roopnarinesingh, Xiavan; Porter, Hunter; Perz, Aleksandra; Wren, Jonathan D

    2017-12-28

    NCBI's Gene Expression Omnibus (GEO) is a rich community resource containing millions of gene expression experiments from human, mouse, rat, and other model organisms. However, information about each experiment (metadata) is in the format of an open-ended, non-standardized textual description provided by the depositor. Thus, classification of experiments for meta-analysis by factors such as gender, age of the sample donor, and tissue of origin is not feasible without assigning labels to the experiments. Automated approaches are preferable for this, primarily because of the size and volume of the data to be processed, but also because it ensures standardization and consistency. While some of these labels can be extracted directly from the textual metadata, many of the data available do not contain explicit text informing the researcher about the age and gender of the subjects within the study. To bridge this gap, machine-learning methods can be trained to use the gene expression patterns associated with the text-derived labels to refine label-prediction confidence. Our analysis shows only 26% of metadata text contains information about gender and 21% about age. In order to ameliorate the lack of available labels for these data sets, we first extract labels from the textual metadata for each GEO RNA dataset and evaluate the performance against a gold standard of manually curated labels. We then use machine-learning methods to predict labels, based upon gene expression of the samples, and compare this to the text-based method. Here we present an automated method to extract labels for age, gender, and tissue from textual metadata and GEO data using both a heuristic approach as well as machine learning. We show the two methods together improve accuracy of label assignment to GEO samples.
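
    The heuristic, text-derived labelling step can be illustrated with simple regular expressions over metadata strings; the patterns below are illustrative only and are not the ALE rule set.

```python
import re

def extract_labels(metadata_text: str) -> dict:
    """Very small heuristic extractor for age and gender from free-text metadata."""
    labels = {"age": None, "gender": None}

    # Gender: look for explicit keywords.
    m = re.search(r"\b(male|female)\b", metadata_text, flags=re.IGNORECASE)
    if m:
        labels["gender"] = m.group(1).lower()

    # Age: patterns like "age: 34" or "34 years old" (ages in years only).
    m = re.search(r"\bage[:\s]+(\d{1,3})\b|\b(\d{1,3})\s*(?:years? old|yrs?)\b",
                  metadata_text, flags=re.IGNORECASE)
    if m:
        labels["age"] = int(m.group(1) or m.group(2))
    return labels

print(extract_labels("whole blood, female donor, age: 34, RNA-seq"))
print(extract_labels("mouse liver tissue, 12 weeks old"))   # age in weeks is missed here
```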

  6. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. For this, we use either a histogram-based or a mean shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of suitability of a term for color extraction based on KL Divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
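
    The histogram-based variant of representative-color extraction can be sketched in a few lines: quantize the colors of the collected images into bins and pick the most populated bin. This is a simplification with an assumed bin count, not the paper's algorithm.

```python
import numpy as np

def representative_color(images, bins_per_channel=8):
    """Pick the most frequent quantized RGB color over a set of images."""
    hist = np.zeros((bins_per_channel,) * 3, dtype=np.int64)
    for img in images:                                  # img: (H, W, 3) uint8 array
        q = (img.astype(np.int64) * bins_per_channel) // 256
        idx, counts = np.unique(q.reshape(-1, 3), axis=0, return_counts=True)
        hist[idx[:, 0], idx[:, 1], idx[:, 2]] += counts
    r, g, b = np.unravel_index(hist.argmax(), hist.shape)
    scale = 256 // bins_per_channel
    # Return the bin centre as the representative color.
    return (r * scale + scale // 2, g * scale + scale // 2, b * scale + scale // 2)

# Toy "image collection" of random images retrieved for one query term.
imgs = [np.random.randint(0, 256, (50, 50, 3), dtype=np.uint8) for _ in range(5)]
print(representative_color(imgs))
```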

  7. Comparison of a modified phenol/chloroform and commercial-kit methods for extracting DNA from horse fecal material.

    PubMed

    Janabi, Ali H D; Kerkhof, Lee J; McGuinness, Lora R; Biddle, Amy S; McKeever, Kenneth H

    2016-10-01

    There are many choices of methods for extracting bacterial DNA for Next Generation Sequencing (NGS) from fecal samples. Here, we compare our modification of a phenol/chloroform extraction method plus an inhibitor removal solution (C3) (Ph/Chl+C3) to the PowerFecal® DNA Isolation Kit (MoBio-K). DNA quality and quantity coupled to NGS results were used to assess differences in relative abundance, Shannon diversity index, unique species, and principal coordinate analysis (PCoA) between biological replicates. Six replicate samples, taken from a single ball of horse feces manually collected from the rectum, were subjected to each extraction method. The Ph/Chl+C3 method produced 100× higher DNA yields with less shearing than the MoBio-K method. To assess the methods, samples from both were sent for sequencing of the bacterial V3-V4 region of the 16S rRNA gene using the Illumina MiSeq platform. The relative abundance of Bacteroidetes was greater and more unique species were assigned to this group in MoBio-K than in Ph/Chl+C3 (P<0.05). In contrast, Firmicutes had greater relative abundance and more unique species in Ph/Chl+C3 extracts than in MoBio-K (P<0.05). The other major bacterial phyla were equally abundant in samples from both extraction methods. Alpha diversity and Shannon-Weaver indices showed greater evenness of bacterial distribution in Ph/Chl+C3 compared with MoBio-K (P<0.05), but there was no difference in OTU richness. Principal coordinate analysis (PCoA) indicated a distinct separation between the two methods (P<0.05) and tighter clustering (less variability) in Ph/Chl+C3 than in MoBio-K. These results suggest that Ph/Chl+C3 may be preferred for research aiming to identify specific Firmicutes taxa such as Clostridium and Bacillus, whereas MoBio-K may be a better choice for projects focusing on Bacteroidetes abundance. The Ph/Chl+C3 method required less time, but it carries safety concerns associated with exposure to and disposal of phenol and chloroform, so MoBio-K may be the better choice for researchers with limited access to safety equipment such as a fume hood. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant terrain line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, shoulder line extraction is an essential prerequisite. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the natural breaks classification method; (iii) the common boundary between the two slope classes is extracted as a shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired over the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and a power function relation exists between filter grid size and point density. The optimal grid size was determined from this relation, and shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
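    The first step of the workflow, selecting ground points with a grid filter, can be sketched as keeping the lowest point in each XY cell. The minimal Python example below assumes a simple (N, 3) point array and an illustrative cell size; it is not the authors' implementation.

```python
import numpy as np

def grid_ground_filter(points, cell_size):
    """Keep the lowest point in each XY grid cell as a ground-point candidate.

    points: (N, 3) array of x, y, z coordinates; cell_size in the same units.
    This is a minimal stand-in for the grid filtering step described above.
    """
    keys = np.floor(points[:, :2] / cell_size).astype(np.int64)
    ground = {}
    for key, pt in zip(map(tuple, keys), points):
        if key not in ground or pt[2] < ground[key][2]:
            ground[key] = pt
    return np.array(list(ground.values()))

# Example with random points and an arbitrary 5 m cell size.
pts = np.random.rand(1000, 3) * [100.0, 100.0, 5.0]
print(grid_ground_filter(pts, cell_size=5.0).shape)
```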

  9. Use of an online extraction liquid chromatography quadrupole time-of-flight tandem mass spectrometry method for the characterization of polyphenols in Citrus paradisi cv. Changshanhuyu peel.

    PubMed

    Tong, Chaoying; Peng, Mijun; Tong, Runna; Ma, Ruyi; Guo, Keke; Shi, Shuyun

    2018-01-19

    Chemical profiling of natural products by high performance liquid chromatography (HPLC) is critical for understanding their clinical bioactivities, and sample pretreatment steps have been considered a bottleneck for analysis. Concerted efforts are therefore being made to develop sample pretreatment methods with high efficiency and low solvent and time consumption. Here, a simple and efficient online extraction (OLE) strategy coupled with HPLC-diode array detector-quadrupole time-of-flight tandem mass spectrometry (HPLC-DAD-QTOF-MS/MS) was developed for rapid chemical profiling. In the OLE strategy, a guard column packed with ground sample (2 mg), instead of a sample loop, was connected to the manual injection valve, so that components were extracted and transferred directly to the HPLC-DAD-QTOF-MS/MS system by the mobile phase alone, without any extra time, solvent, instrumentation or handling. Compared with offline heat-reflux extraction of Citrus paradisi cv. Changshanhuyu (Changshanhuyu) peel, the OLE strategy presented higher extraction efficiency, perhaps because of the high pressure and gradient elution mode. A total of twenty-two secondary metabolites were detected according to their retention times, UV spectra, exact masses, and fragmentation ions in MS/MS spectra, and nine of them were, to our knowledge, discovered in Changshanhuyu peel for the first time. It is concluded that the developed OLE-HPLC-DAD-QTOF-MS/MS system offers new perspectives for rapid chemical profiling of natural products. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format

    PubMed Central

    2011-01-01

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments. PMID:22136293

  11. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format.

    PubMed

    Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E

    2011-12-02

    The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated protocols for the extraction of total RNA from 9-day-old Arabidopsis seedlings in a 96 well plate format using silica membrane-based methodology. Consistent and reproducible yields of high quality RNA are isolated averaging 8.9 μg total RNA per sample (~20 mg plant tissue). The purified RNA is suitable for subsequent qPCR analysis of the expression of over 500 genes in triplicate from each sample. Using the automated procedure, 192 samples (2 × 96 well plates) can easily be fully processed (samples homogenised, RNA purified and quantified) in less than half a day. Additionally we demonstrate that plant samples can be stored in RNAlater at -20°C (but not 4°C) for 10 months prior to extraction with no significant effect on RNA yield or quality. Additionally, disrupted samples can be stored in the lysis buffer at -20°C for at least 6 months prior to completion of the extraction procedure providing a flexible sampling and storage scheme to facilitate complex time series experiments.

  12. Tracking fuzzy borders using geodesic curves with application to liver segmentation on planning CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Yading, E-mail: yading.yuan@mssm.edu; Chao, Ming; Sheu, Ren-Dih

    Purpose: This work aims to develop a robust and efficient method to track the fuzzy borders between liver and the abutted organs where automatic liver segmentation usually suffers, and to investigate its applications in automatic liver segmentation on noncontrast-enhanced planning computed tomography (CT) images. Methods: In order to track the fuzzy liver–chestwall and liver–heart borders where oversegmentation is often found, a starting point and an ending point were first identified on the coronal view images; the fuzzy border was then determined as a geodesic curve constructed by minimizing the gradient-weighted path length between these two points near the fuzzy border. The minimization of path length was numerically solved by fast-marching method. The resultant fuzzy borders were incorporated into the authors’ automatic segmentation scheme, in which the liver was initially estimated by a patient-specific adaptive thresholding and then refined by a geodesic active contour model. By using planning CT images of 15 liver patients treated with stereotactic body radiation therapy, the liver contours extracted by the proposed computerized scheme were compared with those manually delineated by a radiation oncologist. Results: The proposed automatic liver segmentation method yielded an average Dice similarity coefficient of 0.930 ± 0.015, whereas it was 0.912 ± 0.020 if the fuzzy border tracking was not used. The application of fuzzy border tracking was found to significantly improve the segmentation performance. The mean liver volume obtained by the proposed method was 1727 cm³, whereas it was 1719 cm³ for manual-outlined volumes. The computer-generated liver volumes achieved excellent agreement with manual-outlined volumes with correlation coefficient of 0.98. Conclusions: The proposed method was shown to provide accurate segmentation for liver in the planning CT images where contrast agent is not applied. The authors’ results also clearly demonstrated that the application of tracking the fuzzy borders could significantly reduce contour leakage during active contour evolution.
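    The core idea of tracing a minimum-cost curve between two user-identified points can be sketched with off-the-shelf tools. The snippet below builds a per-pixel cost from the gradient magnitude and extracts the cheapest path with scikit-image's route_through_array; the exact weighting (here favoring low-gradient pixels) and the smoothing sigma are assumptions, not the authors' formulation.

```python
import numpy as np
from scipy import ndimage
from skimage.graph import route_through_array

def gradient_weighted_path(image, start, end):
    """Trace a minimum-cost path between two (row, col) points, where the cost
    of crossing a pixel depends on the local gradient magnitude.

    This is a rough analogue of the gradient-weighted geodesic curve described
    above; the weighting below (penalizing high-gradient pixels) is illustrative.
    """
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=2.0)
    cost = 1.0 + grad / (grad.max() + 1e-9)
    path, total_cost = route_through_array(cost, start, end, fully_connected=True)
    return np.array(path), total_cost
```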

  13. Assessment of ambient background concentrations of elements in soil using combined survey and open-source data.

    PubMed

    Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M

    2017-02-15

    Understanding ambient background concentrations in soil, at a local scale, is an essential part of environmental risk assessment. Where high resolution geochemical soil surveys have not been undertaken, soil data from alternative sources, such as environmental site assessment reports, can be used to support an understanding of ambient background conditions. Concentrations of metals/metalloids (As, Mn, Ni, Pb and Zn) were extracted from open-source environmental site assessment reports, for soils derived from the Newer Volcanics basalt, of Melbourne, Victoria, Australia. A manual screening method was applied to remove samples that were indicated to be contaminated by point sources and hence not representative of ambient background conditions. The manual screening approach was validated by comparison to data from a targeted background soil survey. Statistical methods for exclusion of contaminated samples from background soil datasets were compared to the manual screening method. The statistical methods tested included the Median plus Two Median Absolute Deviations, the upper whisker of a normal and log transformed Tukey boxplot, the point of inflection on a cumulative frequency plot and the 95th percentile. We have demonstrated that where anomalous sample results cannot be screened using site information, the Median plus Two Median Absolute Deviations is a conservative method for derivation of ambient background upper concentration limits (i.e. expected maximums). The upper whisker of a boxplot and the point of inflection on a cumulative frequency plot, were also considered adequate methods for deriving ambient background upper concentration limits, where the percentage of contaminated samples is <25%. Median ambient background concentrations of metals/metalloids in the Newer Volcanic soils of Melbourne were comparable to ambient background concentrations in Europe and the United States, except for Ni, which was naturally enriched in the basalt-derived soils of Melbourne. Copyright © 2016 Elsevier B.V. All rights reserved.
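    A minimal sketch of the Median plus Two Median Absolute Deviations screen described above is shown below; the concentration values are made up for illustration.

```python
import numpy as np

def mad_upper_limit(values, k=2.0):
    """Median plus k median absolute deviations, used here as a conservative
    ambient-background upper concentration limit (k = 2 per the study)."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return med + k * mad

# Illustrative concentrations (mg/kg); two high values mimic point-source impact.
conc = np.array([3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 18.0, 3.7, 25.0, 3.2])
limit = mad_upper_limit(conc)
print(limit)                 # upper concentration limit
print(conc[conc > limit])    # samples flagged as potentially contaminated
```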

  14. Semiautomatic segmentation of liver metastases on volumetric CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng, E-mail: bz2166@cumc.columbia.edu

    2015-11-15

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate of the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm received a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation method.
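    A compact sketch of the marker-controlled watershed step on a single slice is shown below; it assumes the internal and external marker masks are already available (in the paper they are derived automatically from the seed ROI and dual thresholds) and uses scikit-image rather than the authors' implementation.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_lesion(slice_2d, internal_mask, external_mask):
    """Marker-controlled watershed on one CT slice.

    internal_mask marks pixels inside the lesion (e.g. the seed ROI) and
    external_mask marks definite background; both are boolean arrays and are
    assumed given here.
    """
    markers = np.zeros(slice_2d.shape, dtype=np.int32)
    markers[external_mask] = 1   # background label
    markers[internal_mask] = 2   # lesion label
    # Flood the gradient image from the markers; the watershed line settles on edges.
    gradient = sobel(slice_2d.astype(float))
    labels = watershed(gradient, markers)
    return labels == 2           # boolean lesion mask
```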

  15. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.

  16. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and an AUC of 91.5% in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
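    The PLS classification stage can be approximated with scikit-learn by regressing 0/1 artery/vein labels onto per-segment feature vectors and thresholding the predicted scores, as in the hedged sketch below; the feature matrices here are random placeholders rather than the paper's color and morphology features.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for per-segment vessel features and artery/vein labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)
X_test, y_test = rng.normal(size=(80, 12)), rng.integers(0, 2, 80)

# PLS regression on 0/1 labels acts as a simple PLS discriminant classifier.
pls = PLSRegression(n_components=4)
pls.fit(X_train, y_train)
scores = pls.predict(X_test).ravel()     # continuous scores
pred = (scores > 0.5).astype(int)        # threshold into artery/vein classes
print("AUC:", roc_auc_score(y_test, scores))
```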

  17. Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images

    PubMed Central

    Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.

    2010-01-01

    High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043

  18. Validity of a manual soft tissue profile prediction method following mandibular setback osteotomy.

    PubMed

    Kolokitha, Olga-Elpis

    2007-10-01

    The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and to compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. To test the validity of the manual method, the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-tests. Comparison between the manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased, and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between the computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Cephalometric simulation of the post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods employed. However, both manual and computerized prediction methods remain useful tools for patient communication.

  19. Comparison of the quantification of acetaminophen in plasma, cerebrospinal fluid and dried blood spots using high-performance liquid chromatography-tandem mass spectrometry

    PubMed Central

    Taylor, Rachel R.; Hoffman, Keith L.; Schniedewind, Björn; Clavijo, Claudia; Galinkin, Jeffrey L.; Christians, Uwe

    2013-01-01

    Acetaminophen (paracetamol, N-(4-hydroxyphenyl) acetamide) is one of the most commonly prescribed drugs for the management of pain in children. Quantification of acetaminophen in pre-term and term neonates and small children requires the availability of highly sensitive assays in small volume blood samples. We developed and validated an LC-MS/MS assay for the quantification of acetaminophen in human plasma, cerebrospinal fluid (CSF) and dried blood spots (DBS). Reconstitution in water (DBS only) and addition of a protein precipitation solution containing the deuterated internal standard were the only manual steps. Extracted samples were analyzed on a Kinetex 2.6 μm PFP column using an acetonitrile/formic acid gradient. The analytes were detected in the positive multiple reaction mode. Alternatively, DBS were automatically processed using direct desorption in a sample card and preparation (SCAP) robotic autosampler in combination with online extraction. The range of reliable response was 3.05-20,000 ng/ml in plasma and CSF (r2 > 0.99) and 27.4-20,000 ng/ml for DBS (r2 > 0.99; manual extraction and automated direct desorption). Inter-day accuracy was always within 85-115% and inter-day precision for plasma, CSF and manually extracted DBS was less than 15%. Deming regression analysis comparing 167 matching pairs of plasma and DBS samples showed a correlation coefficient of 0.98. Bland-Altman analysis indicated a 26.6% positive bias in DBS, most likely reflecting the blood:plasma distribution ratio of acetaminophen. DBS are a valid matrix for acetaminophen pharmacokinetic studies. PMID:23670126

  20. Adaptable, high recall, event extraction system with minimal configuration

    PubMed Central

    2015-01-01

    Background Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task-specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. Results We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task-specific configuration and tuning, EventMine achieved 1st place in the PC task and 2nd place in the CG task, achieving the highest recall for both tasks. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. Conclusions We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both covariate shift and weighting methods are useful in facilitating the production of high recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration. PMID:26201408

  1. Validity of radiographic assessment of the knee joint space using automatic image analysis.

    PubMed

    Komatsu, Daigo; Hasegawa, Yukiharu; Kojima, Toshihisa; Seki, Taisuke; Ikeuchi, Kazuma; Takegami, Yasuhiko; Amano, Takafumi; Higuchi, Yoshitoshi; Kasai, Takehiro; Ishiguro, Naoki

    2016-09-01

    The present study investigated whether there were differences between automatic and manual measurements of the minimum joint space width (mJSW) on knee radiographs. Knee radiographs of 324 participants in a systematic health screening were analyzed using the following three methods: manual measurement of film-based radiographs (Manual), manual measurement of digitized radiographs (Digital), and automatic measurement of digitized radiographs (Auto). The mean mJSWs on the medial and lateral sides of the knees were determined using each method, and measurement reliability was evaluated using intra-class correlation coefficients. Measurement errors were compared between normal knees and knees with radiographic osteoarthritis. All three methods demonstrated good reliability, although the reliability was slightly lower with the Manual method than with the other methods. On the medial and lateral sides of the knees, the mJSWs were the largest in the Manual method and the smallest in the Auto method. The measurement errors of each method were significantly larger for normal knees than for radiographic osteoarthritis knees. The mJSW measurements are more accurate and reliable with the Auto method than with the Manual or Digital method, especially for normal knees. Therefore, the Auto method is ideal for the assessment of the knee joint space.

  2. Automated red blood cells extraction from holographic images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-10-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.

  3. Automated red blood cells extraction from holographic images using fully convolutional neural networks

    PubMed Central

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-01-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. PMID:29082078

  4. Segmentation of prostate biopsy needles in transrectal ultrasound images

    NASA Astrophysics Data System (ADS)

    Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt

    2007-03-01

    Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). The exact location of the extracted tissue within the gland is desired for more specific diagnosis and better therapy planning. While the orientation and position of the needle within the clinical TRUS image are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient- or edge-detection-based segmentation methods fail. Therefore a multivariate statistical classifier is implemented. The independent feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features such as size and eccentricity as well as imaging-system-dependent features such as distance and orientation relative to the marker line. The object extraction is done by multi-step binarization of the region of interest. The ROI is automatically determined at the beginning of the segmentation, and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum likelihood estimation and the Mahalanobis distance as discriminator. The technique presented here could be successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy needle localization in clinical prostate biopsy TRUS images.
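    The classification stage, maximum likelihood with a Mahalanobis-distance discriminator over per-object features, can be sketched as a nearest-class-in-Mahalanobis-distance rule. The class below is illustrative only; the feature definitions and training data are assumptions.

```python
import numpy as np

class MahalanobisClassifier:
    """Minimal scale-aware classifier: assign an object to the class whose
    training distribution it is closest to in Mahalanobis distance.
    Feature vectors could hold size, eccentricity, and distance/orientation
    relative to the marker line, as described above (illustrative only)."""

    def fit(self, features_by_class):
        # features_by_class: {label: (n_samples, n_features) array-like}
        self.stats = {}
        for label, feats in features_by_class.items():
            feats = np.asarray(feats, dtype=float)
            mean = feats.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(feats, rowvar=False))
            self.stats[label] = (mean, cov_inv)
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        dists = {label: float(np.sqrt((x - m) @ c_inv @ (x - m)))
                 for label, (m, c_inv) in self.stats.items()}
        return min(dists, key=dists.get), dists
```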

  5. Automated color classification of urine dipstick image in urine examination

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.

    2018-03-01

    Urine examination using urine dipsticks has long been used to determine a person's health status. Their economy and convenience are among the reasons urine dipsticks are still used for health checks. In practice, urine dipsticks are generally read manually, by visually comparing them with a reference color chart, which leads to perception differences in reading the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image. A scanner is one solution for reading urine dipstick results because the light it produces is consistent. A method is then required to match the colors on the urine dipstick against the test reference colors, a task previously performed manually. The method proposed by the authors combines Euclidean distance and Otsu thresholding with RGB color feature extraction to match the colors on the urine dipstick with the standard reference colors of the urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification on the urine dipstick against the standard reference colors is influenced by the scanner resolution used: the higher the scanner resolution, the higher the accuracy.
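    The color-matching step reduces to finding the reference color nearest in Euclidean distance to the mean RGB value of a dipstick pad, as in the sketch below; the reference chart values are invented for illustration and would come from the test strip's standard in practice.

```python
import numpy as np

# Hypothetical reference chart for one dipstick pad: label -> mean RGB value.
REFERENCE = {
    "negative": (250, 240, 180),
    "trace":    (235, 220, 150),
    "positive": (180, 150, 60),
}

def classify_pad(mean_rgb):
    """Assign the pad to the reference entry with the smallest Euclidean
    distance in RGB space (the matching step described in the abstract)."""
    rgb = np.asarray(mean_rgb, dtype=float)
    distances = {label: np.linalg.norm(rgb - np.asarray(ref, dtype=float))
                 for label, ref in REFERENCE.items()}
    return min(distances, key=distances.get)

print(classify_pad((233, 219, 155)))  # -> 'trace'
```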

  6. Measuring snow water equivalent from common-offset GPR records through migration velocity analysis

    NASA Astrophysics Data System (ADS)

    St. Clair, James; Holbrook, W. Steven

    2017-12-01

    Many mountainous regions depend on seasonal snowfall for their water resources. Current methods of predicting the availability of water resources rely on long-term relationships between stream discharge and snowpack monitoring at isolated locations, which are less reliable during abnormal snow years. Ground-penetrating radar (GPR) has been shown to be an effective tool for measuring snow water equivalent (SWE) because of the close relationship between snow density and radar velocity. However, the standard methods of measuring radar velocity can be time-consuming. Here we apply a migration focusing method originally developed for extracting velocity information from diffracted energy observed in zero-offset seismic sections to the problem of estimating radar velocities in seasonal snow from common-offset GPR data. Diffractions are isolated by plane-wave-destruction (PWD) filtering and the optimal migration velocity is chosen based on the varimax norm of the migrated image. We then use the radar velocity to estimate snow density, depth, and SWE. The GPR-derived SWE estimates are within 6% of manual SWE measurements when the GPR antenna is coupled to the snow surface and within 3-21% of the manual measurements when the antenna is mounted on the front of a snowmobile ~0.5 m above the snow surface.

  7. A Flexible Analysis Tool for the Quantitative Acoustic Assessment of Infant Cry

    PubMed Central

    Reggiannini, Brian; Sheinkopf, Stephen J.; Silverman, Harvey F.; Li, Xiaoxue; Lester, Barry M.

    2015-01-01

    Purpose In this article, the authors describe and validate the performance of a modern acoustic analyzer specifically designed for infant cry analysis. Method Utilizing known algorithms, the authors developed a method to extract acoustic parameters describing infant cries from standard digital audio files. They used a frame rate of 25 ms with a frame advance of 12.5 ms. Cepstral-based acoustic analysis proceeded in 2 phases, computing frame-level data and then organizing and summarizing this information within cry utterances. Using signal detection methods, the authors evaluated the accuracy of the automated system to determine voicing and to detect fundamental frequency (F0) as compared to voiced segments and pitch periods manually coded from spectrogram displays. Results The system detected F0 with 88% to 95% accuracy, depending on tolerances set at 10 to 20 Hz. Receiver operating characteristic analyses demonstrated very high accuracy at detecting voicing characteristics in the cry samples. Conclusions This article describes an automated infant cry analyzer with high accuracy to detect important acoustic features of cry. A unique and important aspect of this work is the rigorous testing of the system’s accuracy as compared to ground-truth manual coding. The resulting system has implications for basic and applied research on infant cry development. PMID:23785178

  8. Mining semantic networks of bioinformatics e-resources from the literature

    PubMed Central

    2011-01-01

    Background There have been a number of recent efforts (e.g. BioCatalogue, BioMoby) to systematically catalogue bioinformatics tools, services and datasets. These efforts rely on manual curation, making it difficult to cope with the huge influx of various electronic resources that have been provided by the bioinformatics community. We present a text mining approach that utilises the literature to automatically extract descriptions and semantically profile bioinformatics resources to make them available for resource discovery and exploration through semantic networks that contain related resources. Results The method identifies the mentions of resources in the literature and assigns a set of co-occurring terminological entities (descriptors) to represent them. We have processed 2,691 full-text bioinformatics articles and extracted profiles of 12,452 resources containing associated descriptors with binary and tf*idf weights. Since such representations are typically sparse (on average 13.77 features per resource), we used lexical kernel metrics to identify semantically related resources via descriptor smoothing. Resources are then clustered or linked into semantic networks, providing the users (bioinformaticians, curators and service/tool crawlers) with a possibility to explore algorithms, tools, services and datasets based on their relatedness. Manual exploration of links between a set of 18 well-known bioinformatics resources suggests that the method was able to identify and group semantically related entities. Conclusions The results have shown that the method can reconstruct interesting functional links between resources (e.g. linking data types and algorithms), in particular when tf*idf-like weights are used for profiling. This demonstrates the potential of combining literature mining and simple lexical kernel methods to model relatedness between resource descriptors in particular when there are few features, thus potentially improving the resource description, discovery and exploration process. The resource profiles are available at http://gnode1.mib.man.ac.uk/bioinf/semnets.html PMID:21388573
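    The profiling and linking idea, representing each resource by tf*idf-weighted descriptors and relating resources by similarity, can be sketched with scikit-learn as below; the toy descriptor strings stand in for the automatically extracted profiles.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy descriptor sets standing in for the co-occurring terminological entities
# assigned to each resource; real profiles come from full-text mining.
profiles = {
    "BLAST":     "sequence alignment similarity search protein nucleotide",
    "ClustalW":  "multiple sequence alignment phylogenetic tree",
    "Cytoscape": "network visualization interaction pathway",
}

names = list(profiles)
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(profiles[n] for n in names)

# Pairwise cosine similarity between tf*idf-weighted profiles links related resources.
sim = cosine_similarity(matrix)
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            print(f"{a} ~ {b}: {sim[i, j]:.2f}")
```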

  9. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%). This indicated that the CAD/CAM method took about 1/3 of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no significant dimensional differences (p < 0.05) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.

  10. Changes to the COS Extraction Algorithm for Lifetime Position 3

    NASA Astrophysics Data System (ADS)

    Proffitt, Charles R.; Bostroem, K. Azalee; Ely, Justin; Foster, Deatrick; Hernandez, Svea; Hodge, Philip; Jedrzejewski, Robert I.; Lockwood, Sean A.; Massa, Derck; Peeples, Molly S.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Roman-Duval, Julia; Sana, Hugues; Sahnow, David J.; Sonnentrucker, Paule; Taylor, Joanna M.

    2015-09-01

    The COS FUV Detector Lifetime Position 3 (LP3) has been placed only 2.5" below the original lifetime position (LP1). This is sufficiently close to gain-sagged regions at LP1 that a revised extraction algorithm is needed to ensure good spectral quality. We provide an overview of this new "TWOZONE" extraction algorithm, discuss its strengths and limitations, describe new output columns in the X1D files that show the boundaries of the new extraction regions, and provide some advice on how to manually tune the algorithm for specialized applications.

  11. Simple practical approach for sample loading prior to DNA extraction using a silica monolith in a microfluidic device.

    PubMed

    Shaw, Kirsty J; Joyce, Domino A; Docker, Peter T; Dyer, Charlotte E; Greenman, John; Greenway, Gillian M; Haswell, Stephen J

    2009-12-07

    A novel DNA loading methodology is presented for performing DNA extraction on a microfluidic system. DNA in a chaotropic salt solution was manually loaded onto a silica monolith orthogonal to the subsequent flow of wash and elution solutions. DNA was successfully extracted from buccal swabs using electro-osmotic pumping (EOP) coupled with in situ reagents contained within a 1.5% agarose gel matrix. The extracted DNA was of sufficient quantity and purity for polymerase chain reaction (PCR) amplification.

  12. Unsupervised Extraction of Diagnosis Codes from EMRs Using Knowledge-Based and Extractive Text Summarization Techniques

    PubMed Central

    Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel

    2017-01-01

    Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient’s medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts on automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227

  13. Massage, reflexology and other manual methods for pain management in labour.

    PubMed

    Smith, Caroline A; Levett, Kate M; Collins, Carmel T; Jones, Leanne

    2012-02-15

    Many women would like to avoid pharmacological or invasive methods of pain management in labour, and this may contribute towards the popularity of complementary methods of pain management. This review examined currently available evidence supporting the use of manual healing methods including massage and reflexology for pain management in labour. To examine the effects of manual healing methods including massage and reflexology for pain management in labour on maternal and perinatal morbidity. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (30 June 2011), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 2 of 4), MEDLINE (1966 to 30 June 2011), CINAHL (1980 to 30 June 2011), the Australian and New Zealand Clinical Trial Registry (30 June 2011), Chinese Clinical Trial Register (30 June 2011), Current Controlled Trials (30 June 2011), ClinicalTrials.gov (30 June 2011), ISRCTN Register (30 June 2011), National Centre for Complementary and Alternative Medicine (NCCAM) (30 June 2011) and the WHO International Clinical Trials Registry Platform (30 June 2011). Randomised controlled trials comparing manual healing methods with standard care, no treatment, other non-pharmacological forms of pain management in labour or placebo. Two authors independently assessed trial quality and extracted data. We attempted to contact study authors for additional information. We included six trials, with data reporting on five trials and 326 women in the meta-analysis. We found trials for massage only. Less pain during labour was reported from massage compared with usual care during the first stage of labour (standardised mean difference (SMD) -0.82, 95% confidence interval (CI) -1.17 to -0.47; four trials, 225 women), and labour pain was reduced in one trial of massage compared with music (risk ratio (RR) 0.40, 95% CI 0.18 to 0.89, 101 women). One trial of massage compared with usual care found reduced anxiety during the first stage of labour (MD -16.27, 95% CI -27.03 to -5.51, 60 women). No trial was assessed as being at a low risk of bias for all quality domains. Massage may have a role in reducing pain, and improving women's emotional experience of labour. However, there is a need for further research.

  14. A comparison of treatment effectiveness between the CAD/CAM method and the manual method for managing adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Lo, K H

    2005-04-01

    The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and the immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. An initial control of the apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle and apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment as compared with the manual method.

  15. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician's judgment is needed. Therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
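    One of the key-frame cues, the normalized correlation coefficient between neighbouring slices, can be sketched as follows; the threshold value is illustrative and not taken from the paper.

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized correlation coefficient between two same-sized image slices."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def select_key_frames(slices, threshold=0.9):
    """Start a new key frame whenever the current slice correlates poorly with
    the last key frame (one of the three cues in the abstract; the threshold
    here is only illustrative)."""
    key_indices = [0]
    for i in range(1, len(slices)):
        if normalized_correlation(slices[key_indices[-1]], slices[i]) < threshold:
            key_indices.append(i)
    return key_indices
```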

  16. Imitating manual curation of text-mined facts in biomedicine.

    PubMed

    Rodriguez-Esteban, Raul; Iossifov, Ivan; Rzhetsky, Andrey

    2006-09-08

    Text-mining algorithms make mistakes in extracting facts from natural-language texts. In biomedical applications, which rely on use of text-mined data, it is critical to assess the quality (the probability that the message is correctly extracted) of individual facts--to resolve data conflicts and inconsistencies. Using a large set of almost 100,000 manually produced evaluations (most facts were independently reviewed more than once, producing independent evaluations), we implemented and tested a collection of algorithms that mimic human evaluation of facts provided by an automated information-extraction system. The performance of our best automated classifiers closely approached that of our human evaluators (ROC score close to 0.95). Our hypothesis is that, were we to use a larger number of human experts to evaluate any given sentence, we could implement an artificial-intelligence curator that would perform the classification job at least as accurately as an average individual human evaluator. We illustrated our analysis by visualizing the predicted accuracy of the text-mined relations involving the term cocaine.

  17. Assessment of Automatically Exported Clinical Data from a Hospital Information System for Clinical Research in Multiple Myeloma.

    PubMed

    Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin

    2016-01-01

    An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process focused on laboratory data from a HIS is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transfer. The automatic transfer was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10⁻³). For the automatic process, the general error rate was 1.9% to 12.1%, where the lowest error rate was obtained when excluding information that was missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research if data in the HIS, as well as physical documentation not included in the HIS, are identified in advance and follow a standardized data collection protocol.

  18. Robotic hair harvesting system: a new proposal.

    PubMed

    Lin, Xiang; Nakazawa, Toji; Yasuda, Ryuya; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2011-01-01

    Follicular Unit Extraction (FUE) has become a popular hair transplanting method for solving male-pattern baldness problem. Manually harvesting hairs one by one, however, is a tedious and time-consuming job to doctors. We design an accurate hair harvesting robot with a novel and efficient end-effector which consists of one digital microscope and a punch device. The microscope is first employed to automatically localize target hairs and then guides the punch device for harvesting after shifting. The end-effector shows average bias and precision of 0.014 mm by virtue of a rotary guidance design for the motorized shifting mechanism.

  19. Key-phrase based classification of public health web pages.

    PubMed

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key-phrase extraction and matching. Easily extensible both to new classes and to new languages, this method proves to be a good solution for text classification in the complete absence of training data. To evaluate the proposed solution, we used a small collection of public health related web pages created by double-blind manual classification. Our experiments show that, by choosing an adequate threshold value, the desired precision or recall can be achieved.

  20. One-click scanning of large-size documents using mobile phone camera

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Yang, Yuanjie

    2016-07-01

    Currently, mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of the documents, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU based image stitching method is adopted to generate a complete document image with high detail. There is no extra manual intervention in the process, and experimental results show that our app performs well, demonstrating convenience and practicability for daily life.
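
    A rough sketch of the key-frame selection step using dense optical flow in OpenCV is given below; the motion threshold and accumulation rule are assumptions rather than the authors' implementation.

    ```python
    # Sketch: pick key frames once the accumulated optical-flow magnitude exceeds
    # a threshold. Threshold value and accumulation rule are assumptions.
    import cv2
    import numpy as np

    def extract_key_frames(video_path, motion_threshold=40.0):
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return []
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        key_frames, accumulated = [prev], 0.0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            accumulated += np.linalg.norm(flow, axis=2).mean()
            if accumulated >= motion_threshold:   # enough camera motion since last key frame
                key_frames.append(frame)
                accumulated = 0.0
            prev_gray = gray
        cap.release()
        return key_frames  # these would then be passed to image stitching
    ```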

  1. Seasonality, extractive foraging and the evolution of primate sensorimotor intelligence.

    PubMed

    Melin, Amanda D; Young, Hilary C; Mosdossy, Krisztina N; Fedigan, Linda M

    2014-06-01

    The parallel evolution of increased sensorimotor intelligence in humans and capuchins has been linked to the cognitive and manual demands of seasonal extractive faunivory. This hypothesis is attractive on theoretical grounds, but it has eluded widespread acceptance due to lack of empirical data. For instance, the effects of seasonality on the extractive foraging behaviors of capuchins are largely unknown. Here we report foraging observations on four groups of wild capuchins (Cebus capucinus) inhabiting a seasonally dry tropical forest. We also measured intra-annual variation in temperature, rainfall, and food abundance. We found that the exploitation of embedded or mechanically protected invertebrates was concentrated during periods of fruit scarcity. Such a pattern suggests that embedded insects are best characterized as a fallback food for capuchins. We discuss the implications of seasonal extractive faunivory for the evolution of sensorimotor intelligence (SMI) in capuchins and hominins and suggest that the suite of features associated with SMI, including increased manual dexterity, tool use, and innovative problem solving are cognitive adaptations among frugivores that fall back seasonally on extractable foods. The selective pressures acting on SMI are predicted to be strongest among primates living in the most seasonal environments. This model is proffered to explain the differences in tool use between capuchin lineages, and SMI as an adaptation to extractive foraging is suggested to play an important role in hominin evolution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Three-dimensional automatic computer-aided evaluation of pleural effusions on chest CT images

    NASA Astrophysics Data System (ADS)

    Bi, Mark; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The ability to estimate the volume of pleural effusions is desirable as it can provide information about the severity of the condition and the need for thoracentesis. We present here an improved version of an automated program to measure the volume of pleural effusions using regular chest CT images. First, the lungs are segmented using region growing, mathematical morphology, and anatomical knowledge. The visceral and parietal layers of the pleura are then extracted based on anatomical landmarks, curve fitting and active contour models. The liver and compressed tissues are segmented out using thresholding. The pleural space is then fitted to a Bezier surface which is subsequently projected onto the individual two-dimensional slices. Finally, the volume of the pleural effusion is quantified. Our method was tested on 15 chest CT studies and validated against three separate manual tracings. The Dice coefficients were 0.74+/-0.07, 0.74+/-0.08, and 0.75+/-0.07 respectively, comparable to the variation between two different manual tracings.
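
    For reference, the Dice coefficient used above to compare an automatic segmentation against a manual tracing can be computed on binary masks as in the following generic sketch (not the authors' code).

    ```python
    # Sketch: Dice similarity coefficient between two binary segmentation masks.
    import numpy as np

    def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * intersection / total if total else 1.0

    auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
    manual = np.zeros((64, 64), bool); manual[12:42, 12:42] = True
    print(round(dice(auto, manual), 3))
    ```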

  3. Occupational exposure to mould and microbial metabolites during onion sorting--insights into an overlooked workplace.

    PubMed

    Mayer, Stefan; Twarużek, Magdalena; Błajet-Kosicka, Anna; Grajewski, Jan

    2016-03-01

    Manual sorting of onions is known to be associated with a bioaerosol exposure. The study aimed to gain an initial indication as to what extent manual sorting of onions is also associated with mycotoxin exposure. Twelve representative samples of outer onion skins from different onion origins were sampled and analyzed with a multimycotoxin method comprising 40 mycotoxins using a single extraction step followed by liquid chromatography with electrospray ionization and triple quadrupole mass spectrometry. Six of the 12 samples were positive for mycotoxins. In those samples, deoxynivalenol, fumonisin B1, and B2 were observed in quantitatively detectable amounts of 3940 ng/g for fumonisin B1 and in the range of 126-587 ng/g for deoxynivalenol and 55-554 ng/g for fumonisin B2. Although the results point to a lower risk due to mycotoxins, the risk should not be completely neglected and has to be considered in the risk assessment.

  4. Segmentation of the ovine lung in 3D CT Images

    NASA Astrophysics Data System (ADS)

    Shi, Lijun; Hoffman, Eric A.; Reinhardt, Joseph M.

    2004-04-01

    Pulmonary CT images can provide detailed information about the regional structure and function of the respiratory system. Prior to any of these analyses, however, the lungs must be identified in the CT data sets. A popular animal model for understanding lung physiology and pathophysiology is the sheep. In this paper we describe a lung segmentation algorithm for CT images of sheep. The algorithm has two main steps. The first step is lung extraction, which identifies the lung region using a technique based on optimal thresholding and connected components analysis. The second step is lung separation, which separates the left lung from the right lung by identifying the central fissure using an anatomy-based method incorporating dynamic programming and a line filter algorithm. The lung segmentation algorithm has been validated by comparing our automatic method to manual analysis for five pulmonary CT datasets. The RMS error between the computer-defined and manually-traced boundary is 0.96 mm. The segmentation requires approximately 10 minutes for a 512x512x400 dataset on a PC workstation (2.40 GHz CPU, 2.0 GB RAM), while it takes a human observer approximately two hours to accomplish the same task.
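
    A minimal sketch of the first step (threshold-based lung extraction followed by connected-component analysis) using scikit-image is shown below; the fixed threshold and the keep-the-largest-interior-components rule are simplifying assumptions, not the paper's optimal-thresholding scheme.

    ```python
    # Sketch: lung extraction by thresholding and connected-component analysis.
    # The HU threshold and the "keep the two largest interior components" rule
    # are simplifying assumptions.
    import numpy as np
    from skimage.measure import label, regionprops

    def extract_lungs(ct_volume_hu: np.ndarray, threshold: float = -400.0) -> np.ndarray:
        air_like = ct_volume_hu < threshold          # air-filled voxels (lungs + background)
        labeled = label(air_like)
        regions = sorted(regionprops(labeled), key=lambda r: r.area, reverse=True)
        lung_mask = np.zeros_like(air_like)
        kept = 0
        for r in regions:
            coords = r.coords
            # Components touching the volume border are treated as background air.
            touches_border = ((coords == 0).any() or
                              (coords >= np.array(ct_volume_hu.shape) - 1).any())
            if touches_border:
                continue
            lung_mask[tuple(coords.T)] = True
            kept += 1
            if kept == 2:                            # left and right lung
                break
        return lung_mask
    ```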

  5. Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL

    NASA Technical Reports Server (NTRS)

    Port, Dan; Nikora, Allen; Hihn, Jairus; Huang, LiGuo

    2011-01-01

    Often repositories of systems engineering artifacts at NASA's Jet Propulsion Laboratory (JPL) are so large and poorly structured that they have outgrown our capability to effectively manually process their contents to extract useful information. Sophisticated text mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experiences of exploring such methods, mainly in three areas (historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies at JPL), have shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick 'wins' or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates on some of these benefits and the important lessons learned from preparing and applying text mining to large unstructured system artifacts at JPL, with the aim of benefiting future text mining applications in similar problem domains and, we hope, of being extended to broader areas of application.

  6. Effect of automatic cluster removers on milking efficiency and teat condition of Manchega ewes.

    PubMed

    Bueso-Ródenas, J; Romero, G; Arias, R; Rodríguez, A M; Díaz, J R

    2015-06-01

    Milking operations represent more than 50% of the work on a dairy ewe farm. The implementation of automatic cluster removers (ACR) is gaining popularity, as it allows the operator to avoid manual cluster detachments, simplifying the milking routines. The aim of this study was to determine the effect on the milking of Manchega ewes, over an entire lactation period, of using this type of device set up with 2 different combinations of milk flow threshold (MF) and delay time (DT), compared with the traditional method using manual cluster removal. During a 15-d pre-experimental period, the animals were milked without ACR and sampling was performed to select 108 ewes and distribute them into 3 groups of similar characteristics according to their parity, milk yield, milking duration, and mammary gland sanitary status. Later, each group was milked for a duration of 4 mo under 3 different conditions: one with manual cluster removal, the second with the ACR set at MF 150 g/min and DT 20 s, and the third with the ACR set at MF 200 g/min and DT 10 s. Samplings of milking fraction, milking duration, milk composition, mammary gland sanitary status, teat-end status, and vacuum level in the short milk tubes during milking were performed. The use of ACR limited the vacuum drops in the short milk tubes and the edema in the teat end after milking, although no reduction in the number of new cases of mastitis was observed and the milk composition did not change. Moreover, it was noted that the use of ACR set with MF 150 g/min and DT 20 s was more efficient than manual cluster removal, as it obtained a similar amount of extracted milk but took less time. Conversely, the use of ACR set with MF 200 g/min and DT 10 s involved a greater reduction in individual milking duration and in the milking duration of groups of animals, but reduced the amount of milk extracted. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which would then, e.g., have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
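
    A bare-bones sketch of the classification stage follows: one feature vector per independent component, labeled artifact or EEG, classified with LDA; the synthetic features stand in for the range-filter, geometric, and LBP features described above.

    ```python
    # Sketch: LDA classification of feature vectors extracted from ICA components.
    # Synthetic features and labels are placeholders for the image-processing
    # features described in the abstract.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_components = 400
    X = rng.normal(size=(n_components, 10))           # one feature vector per IC
    y = (X[:, 0] + 0.8 * X[:, 3] > 0).astype(int)     # 1 = artifact, 0 = brain signal

    lda = LinearDiscriminantAnalysis()
    print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
    ```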

  8. Overview of the ID, EPI and REL tasks of BioNLP Shared Task 2011.

    PubMed

    Pyysalo, Sampo; Ohta, Tomoko; Rak, Rafal; Sullivan, Dan; Mao, Chunhong; Wang, Chunxia; Sobral, Bruno; Tsujii, Jun'ichi; Ananiadou, Sophia

    2012-06-26

    We present the preparation, resources, results and analysis of three tasks of the BioNLP Shared Task 2011: the main tasks on Infectious Diseases (ID) and Epigenetics and Post-translational Modifications (EPI), and the supporting task on Entity Relations (REL). The two main tasks represent extensions of the event extraction model introduced in the BioNLP Shared Task 2009 (ST'09) to two new areas of biomedical scientific literature, each motivated by the needs of specific biocuration tasks. The ID task concerns the molecular mechanisms of infection, virulence and resistance, focusing in particular on the functions of a class of signaling systems that are ubiquitous in bacteria. The EPI task is dedicated to the extraction of statements regarding chemical modifications of DNA and proteins, with particular emphasis on changes relating to the epigenetic control of gene expression. By contrast to these two application-oriented main tasks, the REL task seeks to support extraction in general by separating challenges relating to part-of relations into a subproblem that can be addressed by independent systems. Seven groups participated in each of the two main tasks and four groups in the supporting task. The participating systems indicated advances in the capability of event extraction methods and demonstrated generalization in many aspects: from abstracts to full texts, from previously considered subdomains to new ones, and from the ST'09 extraction targets to other entities and events. The highest performance achieved in the supporting task REL, 58% F-score, is broadly comparable with levels reported for other relation extraction tasks. For the ID task, the highest-performing system achieved 56% F-score, comparable to the state-of-the-art performance at the established ST'09 task. In the EPI task, the best result was 53% F-score for the full set of extraction targets and 69% F-score for a reduced set of core extraction targets, approaching a level of performance sufficient for user-facing applications. In this study, we extend on previously reported results and perform further analyses of the outputs of the participating systems. We place specific emphasis on aspects of system performance relating to real-world applicability, considering alternate evaluation metrics and performing additional manual analysis of system outputs. We further demonstrate that the strengths of extraction systems can be combined to improve on the performance achieved by any system in isolation. The manually annotated corpora, supporting resources, and evaluation tools for all tasks are available from http://www.bionlp-st.org and the tasks continue as open challenges for all interested parties.

  9. Overview of the ID, EPI and REL tasks of BioNLP Shared Task 2011

    PubMed Central

    2012-01-01

    We present the preparation, resources, results and analysis of three tasks of the BioNLP Shared Task 2011: the main tasks on Infectious Diseases (ID) and Epigenetics and Post-translational Modifications (EPI), and the supporting task on Entity Relations (REL). The two main tasks represent extensions of the event extraction model introduced in the BioNLP Shared Task 2009 (ST'09) to two new areas of biomedical scientific literature, each motivated by the needs of specific biocuration tasks. The ID task concerns the molecular mechanisms of infection, virulence and resistance, focusing in particular on the functions of a class of signaling systems that are ubiquitous in bacteria. The EPI task is dedicated to the extraction of statements regarding chemical modifications of DNA and proteins, with particular emphasis on changes relating to the epigenetic control of gene expression. By contrast to these two application-oriented main tasks, the REL task seeks to support extraction in general by separating challenges relating to part-of relations into a subproblem that can be addressed by independent systems. Seven groups participated in each of the two main tasks and four groups in the supporting task. The participating systems indicated advances in the capability of event extraction methods and demonstrated generalization in many aspects: from abstracts to full texts, from previously considered subdomains to new ones, and from the ST'09 extraction targets to other entities and events. The highest performance achieved in the supporting task REL, 58% F-score, is broadly comparable with levels reported for other relation extraction tasks. For the ID task, the highest-performing system achieved 56% F-score, comparable to the state-of-the-art performance at the established ST'09 task. In the EPI task, the best result was 53% F-score for the full set of extraction targets and 69% F-score for a reduced set of core extraction targets, approaching a level of performance sufficient for user-facing applications. In this study, we extend on previously reported results and perform further analyses of the outputs of the participating systems. We place specific emphasis on aspects of system performance relating to real-world applicability, considering alternate evaluation metrics and performing additional manual analysis of system outputs. We further demonstrate that the strengths of extraction systems can be combined to improve on the performance achieved by any system in isolation. The manually annotated corpora, supporting resources, and evaluation tools for all tasks are available from http://www.bionlp-st.org and the tasks continue as open challenges for all interested parties. PMID:22759456

  10. A novel method for intelligent fault diagnosis of rolling bearings using ensemble deep auto-encoders

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Lin, Ying; Li, Xingqiu

    2018-03-01

    Automatic and accurate identification of rolling bearing fault categories, especially the fault severities and fault orientations, is still a major challenge in rotating machinery fault diagnosis. In this paper, a novel method called ensemble deep auto-encoders (EDAEs) is proposed for intelligent fault diagnosis of rolling bearings. Firstly, different activation functions are employed as the hidden functions to design a series of auto-encoders (AEs) with different characteristics. Secondly, EDAEs are constructed with various auto-encoders for unsupervised feature learning from the measured vibration signals. Finally, a combination strategy is designed to ensure accurate and stable diagnosis results. The proposed method is applied to analyze the experimental bearing vibration signals. The results confirm that the proposed method removes the dependence on manual feature extraction and overcomes the limitations of individual deep learning models, making it more effective than existing intelligent diagnosis methods.
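
    A condensed sketch of the core idea follows: several auto-encoders with different hidden activations trained unsupervised on the same vibration segments, whose hidden representations are then combined; layer sizes, training details, and the combination step are assumptions, not the paper's configuration.

    ```python
    # Sketch: auto-encoders with different activation functions, trained
    # unsupervised on vibration segments. Sizes, epochs and the combination
    # strategy are illustrative assumptions.
    import torch
    import torch.nn as nn

    def make_autoencoder(n_in=1024, n_hidden=128, activation=nn.ReLU):
        return nn.Sequential(
            nn.Linear(n_in, n_hidden), activation(),
            nn.Linear(n_hidden, n_in),
        )

    def train_autoencoder(model, signals, epochs=50, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(signals), signals)   # reconstruction loss
            loss.backward()
            opt.step()
        return model

    signals = torch.randn(256, 1024)          # stand-in for measured vibration segments
    ensemble = [train_autoencoder(make_autoencoder(activation=act), signals)
                for act in (nn.ReLU, nn.Tanh, nn.Sigmoid)]
    # Hidden-layer outputs of each trained AE act as learned features; a simple
    # combination strategy could concatenate them before a classifier.
    with torch.no_grad():
        features = torch.cat([ae[:2](signals) for ae in ensemble], dim=1)
    print(features.shape)
    ```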

  11. Automatic and Reproducible Positioning of Phase-Contrast MRI for the Quantification of Global Cerebral Blood Flow

    PubMed Central

    Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan

    2014-01-01

    Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle before wider applications of this method is that the slice positioning of the scan, ideally placed perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the inter-session coefficient of variation (CoV) of the blood flow measurement using this algorithm, and the Bland-Altman differences between the automatic and manual methods, were comparable to the variance in CBF measurement using manually-positioned PC-MRI alone. In a further application of this algorithm to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on age or gender, but the patients showed a trend toward a lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742
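
    For context, the two reproducibility statistics referred to above can be computed as in the following generic sketch with synthetic values (not the authors' analysis code).

    ```python
    # Sketch: inter-session coefficient of variation and Bland-Altman statistics
    # for paired CBF measurements (synthetic values for illustration).
    import numpy as np

    session1 = np.array([52.1, 48.3, 55.0, 60.2, 47.9, 51.4, 58.8])  # ml/100g/min
    session2 = np.array([53.0, 47.5, 56.1, 58.9, 49.2, 50.8, 59.5])

    # One common formulation of the within-subject CoV across two sessions
    pair_means = (session1 + session2) / 2
    pair_sds = np.abs(session1 - session2) / np.sqrt(2)
    cov = np.mean(pair_sds / pair_means) * 100
    print(f"inter-session CoV: {cov:.1f}%")

    # Bland-Altman bias and 95% limits of agreement
    diff = session1 - session2
    bias, sd = diff.mean(), diff.std(ddof=1)
    print(f"bias: {bias:.2f}, limits of agreement: "
          f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}]")
    ```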

  12. Antifungal Susceptibility Testing in HIV/AIDS Patients: a Comparison Between Automated Machine and Manual Method.

    PubMed

    Nelwan, Erni J; Indrasanti, Evi; Sinto, Robert; Nurchaida, Farida; Sosrosumihardjo, Rustadi

    2016-01-01

    To evaluate the performance of the Vitek2 compact machine (Biomerieux Inc. ver 04.02, France) in reference to manual methods for susceptibility testing of Candida resistance among HIV/AIDS patients. A comparison study to evaluate the Vitek2 compact machine (Biomerieux Inc. ver 04.02, France) in reference to manual methods for susceptibility testing of Candida resistance among HIV/AIDS patients was done. Categorical agreement between manual disc diffusion and the Vitek2 machine was calculated using predefined criteria. Time to susceptibility result for the automated and manual methods was measured. There were 137 Candida isolates comprising eight Candida species, with C. albicans and C. glabrata as the first (56.2%) and second (15.3%) most common species, respectively. For fluconazole, among the C. albicans isolates, 2.6% were found resistant by the manual disc diffusion method and no resistance was determined by the Vitek2 machine, whereas 100% of C. krusei was identified as resistant by both methods. Resistance patterns for C. glabrata to fluconazole, voriconazole and amphotericin B were 52.4%, 23.8%, 23.8% vs. 9.5%, 9.5%, 4.8%, respectively, between the manual disc diffusion method and the Vitek2 machine. Time to susceptibility result was shorter for the automated method (Vitek2 machine) than for the manual method for all Candida species. There is good categorical agreement between manual disc diffusion and the Vitek2 machine, except for C. glabrata when measuring antifungal resistance. Time to susceptibility result for the automated method is shorter for all Candida species.

  13. Accuracy of MRI volume measurements of breast lesions: comparison between automated, semiautomated and manual assessment.

    PubMed

    Rominger, Marga B; Fournell, Daphne; Nadar, Beenarose Thanka; Behrens, Sarah N M; Figiel, Jens H; Keil, Boris; Heverhagen, Johannes T

    2009-05-01

    The aim of this study was to investigate the efficacy of a dedicated software tool for automated and semiautomated volume measurement in contrast-enhanced (CE) magnetic resonance mammography (MRM). Ninety-six breast lesions with histopathological workup (27 benign, 69 malignant) were re-evaluated by different volume measurement techniques. Volumes of all lesions were extracted automatically (AVM) and semiautomatically (SAVM) from CE 3D MRM and compared with manual 3D contour segmentation (manual volume measurement, MVM, reference measurement technique) and volume estimates based on maximum diameter measurement (MDM). Compared with MVM as the reference method, MDM, AVM and SAVM underestimated lesion volumes by 63.8%, 30.9% and 21.5%, respectively, with significantly different accuracy for benign (102.4%, 18.4% and 11.4%) and malignant (54.9%, 33.0% and 23.1%) lesions (p < 0.05). Inter- and intraobserver reproducibility was best for AVM (mean difference +/- 2SD, 1.0 +/- 9.7% and 1.8 +/- 12.1%) followed by SAVM (4.3 +/- 25.7% and 4.3 +/- 7.9%), MVM (2.3 +/- 38.2% and 8.6 +/- 31.8%) and MDM (33.9 +/- 128.4% and 9.3 +/- 55.9%). SAVM is more accurate for volume assessment of breast lesions than MDM and AVM. Volume measurement is less accurate for malignant than benign lesions.

  14. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    PubMed Central

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans was done. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparing to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  15. A Pilot Study on Developing a Standardized and Sensitive School Violence Risk Assessment with Manual Annotation.

    PubMed

    Barzman, Drew H; Ni, Yizhao; Griffey, Marcus; Patel, Bianca; Warren, Ashaki; Latessa, Edward; Sorter, Michael

    2017-09-01

    School violence has increased over the past decade and innovative, sensitive, and standardized approaches to assess school violence risk are needed. In our current feasibility study, we initialized a standardized, sensitive, and rapid school violence risk approach with manual annotation. Manual annotation is the process of analyzing a student's transcribed interview to extract relevant information (e.g., key words) to school violence risk levels that are associated with students' behaviors, attitudes, feelings, use of technology (social media and video games), and other activities. In this feasibility study, we first implemented school violence risk assessments to evaluate risk levels by interviewing the student and parent separately at the school or the hospital to complete our novel school safety scales. We completed 25 risk assessments, resulting in 25 transcribed interviews of 12-18 year olds from 15 schools in Ohio and Kentucky. We then analyzed structured professional judgments, language, and patterns associated with school violence risk levels by using manual annotation and statistical methodology. To analyze the student interviews, we initiated the development of an annotation guideline to extract key information that is associated with students' behaviors, attitudes, feelings, use of technology and other activities. Statistical analysis was applied to associate the significant categories with students' risk levels to identify key factors which will help with developing action steps to reduce risk. In a future study, we plan to recruit more subjects in order to fully develop the manual annotation which will result in a more standardized and sensitive approach to school violence assessments.

  16. Active learning reduces annotation time for clinical concept extraction.

    PubMed

    Kholghi, Mahnoosh; Sitbon, Laurianne; Zuccon, Guido; Nguyen, Anthony

    2017-10-01

    To investigate: (1) the annotation time savings by various active learning query strategies compared to supervised learning and a random sampling baseline, and (2) the benefits of active learning-assisted pre-annotations in accelerating the manual annotation process compared to de novo annotation. There are 73 and 120 discharge summary reports provided by Beth Israel institute in the train and test sets of the concept extraction task in the i2b2/VA 2010 challenge, respectively. The 73 reports were used in user study experiments for manual annotation. First, all sequences within the 73 reports were manually annotated from scratch. Next, active learning models were built to generate pre-annotations for the sequences selected by a query strategy. The annotation/reviewing time per sequence was recorded. The 120 test reports were used to measure the effectiveness of the active learning models. When annotating from scratch, active learning reduced the annotation time up to 35% and 28% compared to a fully supervised approach and a random sampling baseline, respectively. Reviewing active learning-assisted pre-annotations resulted in 20% further reduction of the annotation time when compared to de novo annotation. The number of concepts that require manual annotation is a good indicator of the annotation time for various active learning approaches as demonstrated by high correlation between time rate and concept annotation rate. Active learning has a key role in reducing the time required to manually annotate domain concepts from clinical free text, either when annotating from scratch or reviewing active learning-assisted pre-annotations. Copyright © 2017 Elsevier B.V. All rights reserved.
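
    A sketch of pool-based active learning with least-confidence sampling is given below; the logistic-regression model and synthetic data stand in for the actual concept-extraction model and clinical text.

    ```python
    # Sketch: pool-based active learning with least-confidence (uncertainty)
    # sampling. A logistic-regression classifier on synthetic data stands in
    # for the actual concept-extraction model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(2000, 20))
    y_pool = (X_pool[:, 0] - X_pool[:, 5] > 0).astype(int)

    labeled = list(range(20))                      # small seed set labeled manually
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

    for _ in range(10):
        model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
        proba = model.predict_proba(X_pool[unlabeled])
        confidence = proba.max(axis=1)
        # Query the instances the model is least confident about; its predictions
        # could also be shown to annotators as pre-annotations.
        query = [unlabeled[i] for i in np.argsort(confidence)[:20]]
        labeled.extend(query)                      # simulate manual annotation
        unlabeled = [i for i in unlabeled if i not in query]

    print("labeled instances:", len(labeled))
    ```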

  17. Cerebral Oxygenation and Pain of Heel Blood Sampling Using Manual and Automatic Lancets in Premature Infants.

    PubMed

    Hwang, Mi-Jung; Seol, Geun Hee

    2015-01-01

    Heel blood sampling is a common but painful procedure for neonates. Automatic lancets have been shown to be more effective, with reduced pain and tissue damage, than manual lancets, but the effects of lancet type on cortical activation have not yet been compared. The study aimed to compare the effects of manual and automatic lancets on cerebral oxygenation and pain of heel blood sampling in 24 premature infants with respiratory distress syndrome. Effectiveness was measured by assessing numbers of pricks and squeezes and duration of heel blood sampling. Pain responses were measured using the premature infant pain profile score, heart rate, and oxygen saturation (SpO2). Regional cerebral oxygen saturation (rScO2) was measured using near-infrared spectroscopy, and cerebral fractional tissue oxygen extraction was calculated from SpO2 and rScO2. Measures of effectiveness were significantly better with automatic than with manual lancing, including fewer heel punctures (P = .009) and squeezes (P < .001) and shorter duration of heel blood sampling (P = .002). rScO2 was significantly higher (P = .013) and cerebral fractional tissue oxygen extraction after puncture significantly lower (P = .040) with automatic lancing. Premature infant pain profile scores during (P = .004) and after (P = .048) puncture were significantly lower in the automatic than in the manual lancet group. Automatic lancets for heel blood sampling in neonates with respiratory distress syndrome significantly reduced pain and enhanced cerebral oxygenation, suggesting that heel blood should be sampled routinely using an automatic lancet.
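
    The cerebral fractional tissue oxygen extraction mentioned above is commonly defined from the pulse-oximetry and NIRS readings as follows (this is the standard definition assumed here, not a formula quoted from the paper):

    ```latex
    % Commonly used definition of cerebral fractional tissue oxygen extraction
    \[
      \mathrm{cFTOE} = \frac{\mathrm{SpO_2} - \mathrm{rScO_2}}{\mathrm{SpO_2}}
    \]
    ```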

  18. 3D Central Line Extraction of Fossil Oyster Shells

    NASA Astrophysics Data System (ADS)

    Djuricic, A.; Puttonen, E.; Harzhauser, M.; Mandic, O.; Székely, B.; Pfeifer, N.

    2016-06-01

    Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower in relation to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, high resolution orthophoto (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries that are then used as an input to automatically extract fossil length information via central lines. In general, central lines are widely used in geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, the 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra's algorithm; iv) extension of longest central line to the shell boundary and smoothing by an adjustment of cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination both in paleontological and biological applications. Our investigation evaluates ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which is deemed sufficient for the selected paleontological application, namely shell age determination.
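
    A simplified 2D sketch of steps i)-iii) above is given below: a Voronoi diagram of boundary points, a graph over interior Voronoi vertices, and the longest shortest path as an approximate central line; the 2D simplification, the containment test, and the distance weighting are assumptions (the study works with 3D point clouds).

    ```python
    # Sketch: approximate central line of a closed 2D outline via Voronoi vertices
    # and Dijkstra's algorithm. boundary_xy must be an ordered (N, 2) outline.
    import networkx as nx
    import numpy as np
    from matplotlib.path import Path
    from scipy.spatial import Voronoi

    def central_line(boundary_xy: np.ndarray) -> np.ndarray:
        vor = Voronoi(boundary_xy)
        inside = Path(boundary_xy).contains_points(vor.vertices)

        g = nx.Graph()
        for v0, v1 in vor.ridge_vertices:
            if v0 == -1 or v1 == -1:                  # ridge extends to infinity
                continue
            if inside[v0] and inside[v1]:
                w = np.linalg.norm(vor.vertices[v0] - vor.vertices[v1])
                g.add_edge(v0, v1, weight=w)

        # Longest of all pairwise shortest paths = approximate skeleton "spine".
        best_path, best_len = [], 0.0
        lengths = dict(nx.all_pairs_dijkstra_path_length(g, weight="weight"))
        for src, dists in lengths.items():
            dst, length = max(dists.items(), key=lambda kv: kv[1])
            if length > best_len:
                best_len = length
                best_path = nx.dijkstra_path(g, src, dst, weight="weight")
        return vor.vertices[best_path]
    ```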

  19. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video.

    PubMed

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology to visualize the whole gastrointestinal (GI) tract in a non-invasive way. The major disadvantage, however, is the long reviewing time, which is very laborious as continuous manual intervention is necessary. In order to reduce the burden of the clinician, in this paper, an automatic bleeding detection method for WCE video is proposed based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to the capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each individual pixel is chosen for extracting local statistical features. By combining local block features of three different color planes of RGB color space, an index value is defined. A color histogram, which is extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the already extracted local features, which incur no additional computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images, which are collected from a publicly available database, a very satisfactory bleeding frame and zone detection performance is achieved in comparison to that obtained by some of the existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained from the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, 95.75% precision is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and can effectively detect bleeding frames and zones in continuous WCE video data.

  20. FEX: A Knowledge-Based System For Planimetric Feature Extraction

    NASA Astrophysics Data System (ADS)

    Zelek, John S.

    1988-10-01

    Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.

  1. Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation

    NASA Technical Reports Server (NTRS)

    Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    The Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification calculations included performing manual digitization of synthetic images using Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
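
    A minimal sketch of the tessellation step follows: Voronoi polygons around particle centroids and a local void ratio of (cell area - particle area) / particle area per particle; the synthetic inputs and the handling of open border cells are illustrative assumptions.

    ```python
    # Sketch: local void ratio from a Voronoi tessellation of particle centroids.
    # Inputs (centroids, particle areas) would come from the image analysis steps;
    # here they are synthetic.
    import numpy as np
    from scipy.spatial import Voronoi

    def polygon_area(pts: np.ndarray) -> float:
        # Order vertices counter-clockwise around the centroid, then apply the
        # shoelace formula (Voronoi cells are convex, so angle sorting is valid).
        c = pts.mean(axis=0)
        order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
        x, y = pts[order, 0], pts[order, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    def local_void_ratios(centroids: np.ndarray, particle_areas: np.ndarray):
        vor = Voronoi(centroids)
        ratios = {}
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if -1 in region or not region:            # open cell at the image border
                continue
            cell_area = polygon_area(vor.vertices[region])
            ratios[i] = (cell_area - particle_areas[i]) / particle_areas[i]
        return ratios

    rng = np.random.default_rng(0)
    centroids = rng.uniform(0, 100, size=(50, 2))     # particle centroids
    areas = rng.uniform(3, 8, size=50)                # particle cross-sectional areas
    print(len(local_void_ratios(centroids, areas)), "closed Voronoi cells")
    ```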

  2. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the very highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method that is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. Then the layers are treated as 2D images and are processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, which allows end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that an accuracy improvement of up to 78.43% for the HDL-32E can be achieved using the proposed calibration method.
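
    A simplified sketch of the per-layer circle extraction is shown below, rasterising one horizontal slice of the point cloud and running a circular Hough transform from scikit-image; the grid resolution, radius range, and use of the standard (rather than Generalized) Hough transform are assumptions.

    ```python
    # Sketch: detect circular cross-sections (e.g. poles) in one horizontal slice
    # of a point cloud by rasterising the slice and running a circular Hough
    # transform. Cell size and radius range are assumptions.
    import numpy as np
    from skimage.transform import hough_circle, hough_circle_peaks

    def circles_in_layer(points_xy, cell=0.02, radii_m=np.arange(0.05, 0.31, 0.01)):
        origin = points_xy.min(axis=0)
        idx = np.floor((points_xy - origin) / cell).astype(int)
        img = np.zeros((idx[:, 1].max() + 1, idx[:, 0].max() + 1), dtype=bool)
        img[idx[:, 1], idx[:, 0]] = True              # occupancy image (rows = y, cols = x)

        radii_px = np.unique(np.round(radii_m / cell).astype(int))
        accum = hough_circle(img, radii_px)
        _, cx, cy, r = hough_circle_peaks(accum, radii_px, total_num_peaks=5)
        # Convert detected centres and radii back to metric coordinates.
        centres = np.column_stack([cx, cy]) * cell + origin
        return centres, r * cell
    ```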

  3. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    PubMed

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding.

  4. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions

    PubMed Central

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding. PMID:29312075

  5. Automating digital leaf measurement: the tooth, the whole tooth, and nothing but the tooth.

    PubMed

    Corney, David P A; Tang, H Lilian; Clark, Jonathan Y; Hu, Yin; Jin, Jing

    2012-01-01

    Many species of plants produce leaves with distinct teeth around their margins. The presence and nature of these teeth can often help botanists to identify species. Moreover, it has long been known that more species native to colder regions have teeth than species native to warmer regions. It has therefore been suggested that fossilized remains of leaves can be used as a proxy for ancient climate reconstruction. Similar studies on living plants can help our understanding of the relationships. The required analysis of leaves typically involves considerable manual effort, which in practice limits the number of leaves that are analyzed, potentially reducing the power of the results. In this work, we describe a novel algorithm to automate the marginal tooth analysis of leaves found in digital images. We demonstrate our methods on a large set of images of whole herbarium specimens collected from Tilia trees (also known as lime, linden or basswood). We chose the genus Tilia as its constituent species have toothed leaves of varied size and shape. In a previous study we extracted c.1600 leaves automatically from a set of c.1100 images. Our new algorithm locates teeth on the margins of such leaves and extracts features such as each tooth's area, perimeter and internal angles, as well as counting them. We evaluate an implementation of our algorithm's performance against a manually analyzed subset of the images. We found that the algorithm achieves an accuracy of 85% for counting teeth and 75% for estimating tooth area. We also demonstrate that the automatically extracted features are sufficient to identify different species of Tilia using a simple linear discriminant analysis, and that the features relating to teeth are the most useful.

  6. Improved parameter extraction and classification for dynamic contrast enhanced MRI of prostate

    NASA Astrophysics Data System (ADS)

    Haq, Nandinee Fariah; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi

    2014-03-01

    Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and prognosis. The time course of the DCE images provides measures of the contrast agent uptake kinetics. Also, using pharmacokinetic modelling, one can extract parameters from the DCE-MR images that characterize the tumor vascularization and can be used to detect cancer. A requirement for calculating the pharmacokinetic DCE parameters is estimating the Arterial Input Function (AIF). One needs an accurate segmentation of the cross section of the external femoral artery to obtain the AIF. In this work we report a semi-automatic method for segmentation of the cross section of the femoral artery, using a circular Hough transform, in the sequence of DCE images. We also report a machine-learning framework to combine pharmacokinetic parameters with the model-free contrast agent uptake kinetic parameters extracted from the DCE time course into a nine-dimensional feature vector. This combination of features is used with random forest and support vector machine classification for cancer detection. The MR data is obtained from patients prior to radical prostatectomy. After the surgery, whole-mount histopathology analysis is performed and registered to the DCE-MR images as the diagnostic reference. We show that the use of a combination of pharmacokinetic parameters and the model-free empirical parameters extracted from the time course of DCE results in improved cancer detection compared to the use of each group of features separately. We also validate the proposed method for calculation of AIF based on comparison with the manual method.
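
    A sketch of the feature-combination and classification step follows; the feature names, synthetic data, and classifier settings are placeholders for the nine-dimensional vectors and models described above.

    ```python
    # Sketch: concatenating pharmacokinetic and model-free DCE features into one
    # vector per region and classifying with a random forest. Data are synthetic
    # placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 300
    pk_features = rng.normal(size=(n, 3))        # e.g. pharmacokinetic parameters
    empirical = rng.normal(size=(n, 6))          # e.g. model-free uptake-curve parameters
    X = np.hstack([pk_features, empirical])      # combined nine-dimensional vector
    y = (pk_features[:, 0] + empirical[:, 1] > 0).astype(int)   # 1 = cancer (synthetic)

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
    ```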

  7. Testing an automated method to estimate ground-water recharge from streamflow records

    USGS Publications Warehouse

    Rutledge, A.T.; Daniel, C.C.

    1994-01-01

    The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.

  8. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubert, A., E-mail: ales.neubert@csiro.au

    Purpose: Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. Methods: The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone–cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Results: Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head and glenoid fossa, respectively. Mean DSC scores of 0.74 and 0.72 were obtained for the humeral and glenoid cartilage volumes, respectively. The manual interobserver reliability evaluated by DSC was 0.80 ± 0.03 and 0.76 ± 0.04 for the two cartilages, implying that the automated results were within an acceptable 10% difference. The MASD between the automatic and the corresponding manual cartilage segmentations was less than 0.4 mm (previous studies reported mean cartilage thickness of 1.3 mm). Conclusions: This work shows the feasibility of volumetric segmentation and separation of the glenohumeral cartilages from MR images. To their knowledge, this is the first fully automated algorithm for volumetric segmentation of the individual glenohumeral cartilages from MR images. The approach was validated against manual segmentations from experienced analysts. In future work, the approach will be validated on imaging datasets acquired with various MR contrasts in patients.

  9. PREDOSE: a semantic web platform for drug abuse epidemiology using social media.

    PubMed

    Cameron, Delroy; Smith, Gary A; Daniulaityte, Raminta; Sheth, Amit P; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z; Falck, Russel

    2013-12-01

    The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel semantic web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO--pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC), through combination of lexical, pattern-based and semantics-based techniques. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, and routes of administration. The DAO is also used to help recognize three types of data, namely: (1) entities, (2) relationships and (3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information, which facilitate search, trend analysis and overall content analysis using social media on prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. 
Extracted semantic information is currently in use in an online discovery support system, by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in future. Copyright © 2013 Elsevier Inc. All rights reserved.
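As a rough illustration of how the reported entity-level precision and recall can be computed against a manually created gold standard, the minimal Python sketch below compares sets of extracted entity mentions. The documents and entities are hypothetical; this is not PREDOSE code.

```python
# Minimal sketch (not the PREDOSE codebase): precision/recall of extracted
# entities against a manually annotated gold standard, per document.

def precision_recall(extracted, gold):
    """extracted, gold: sets of (document_id, entity_text) pairs."""
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical example: two posts mentioning drug-related entities.
gold = {(1, "buprenorphine"), (1, "insufflation"), (2, "loperamide")}
extracted = {(1, "buprenorphine"), (2, "loperamide"), (2, "tolerance")}

p, r = precision_recall(extracted, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```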

  10. Hair Transplantation Controversies.

    PubMed

    Avram, Marc R; Finney, Robert; Rogers, Nicole

    2017-11-01

    Hair transplant surgery creates consistently natural-appearing transplanted hair for men. It is an increasingly popular procedure to restore naturally growing hair for men with hair loss. To review some current controversies in hair transplant surgery. Review of the English PubMed literature and specialty literature in hair transplant surgery. Some of the controversies in hair transplant surgery include the appropriate donor harvesting technique (elliptical donor harvesting versus follicular unit extraction, whether manual or robotic) and the role of platelet-rich plasma and low-level light therapy in hair transplant surgery. Hair transplant surgery creates consistently natural-appearing hair. As with all techniques, there are controversies regarding the optimal method for performing the procedure. Some of the current controversies in hair transplant surgery include optimal donor harvesting techniques, elliptical donor harvesting versus follicular unit extraction, and the role of low-level light therapy and platelet-rich plasma therapy in the procedure. Future studies will further clarify their role in the procedure.

  11. Solidification of floating organic droplet in dispersive liquid-liquid microextraction as a green analytical tool.

    PubMed

    Mansour, Fotouh R; Danielson, Neil D

    2017-08-01

    Dispersive liquid-liquid microextraction (DLLME) is a special type of microextraction in which a mixture of two solvents (an extracting solvent and a disperser) is injected into the sample. The extraction solvent is then dispersed as fine droplets throughout the sample, forming a cloudy solution, by manual or mechanical agitation. The sample is then centrifuged to break the formed emulsion, and the extracting solvent is manually separated. The organic solvents commonly used in DLLME are halogenated hydrocarbons that are highly toxic. These solvents are heavier than water, so they sink to the bottom of the centrifugation tube, which makes the separation step difficult. By using solvents of low density, the organic extractant floats on the sample surface. If the selected solvent, such as undecanol, has a freezing point in the range 10-25°C, the floating droplet can be solidified using a simple ice-bath and then transferred out of the sample matrix; this step is known as solidification of floating organic droplet (SFOD). Coupling DLLME to SFOD combines the advantages of both approaches. The DLLME-SFOD process is controlled by the same variables as conventional liquid-liquid extraction. The organic solvents used as extractants in DLLME-SFOD must be immiscible with water and have a density lower than water, low volatility, a high partition coefficient, and low melting and freezing points. The extraction efficiency of DLLME-SFOD is affected by the types and volumes of organic extractant and disperser, salt addition, pH, temperature, stirring rate, and extraction time. This review discusses the principle, optimization variables, advantages and disadvantages, and some selected applications of DLLME-SFOD in water, food and biomedical analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Effects of Withania somnifera on Reproductive System: A Systematic Review of the Available Evidence

    PubMed Central

    Nazemyieh, Hossein; Fazljou, Seyed Mohammad Bagher; Nejatbakhsh, Fatemeh; Moini Jazani, Arezoo; Ahmadi AsrBadr, Yadollah

    2018-01-01

    Introduction Withania somnifera (WS), also known as ashwagandha, is a well-known medicinal plant used in traditional medicine in many countries for infertility treatment. The present study aimed to systematically review the therapeutic effects of WS on the reproductive system. Methods This systematic review was designed in 2016. Required data were obtained from PubMed, Scopus, Google Scholar, Cochrane Library, Science Direct, Web of Knowledge, Web of Science, and manual search of articles, grey literature, reference checking, and expert contact. Results WS was found to improve reproductive system function in many ways. WS extract decreased infertility among male subjects owing to an enhancement in semen quality, proposed to result from enhanced enzymatic activity in seminal plasma and decreased oxidative stress. WS extract also improved the balance of luteinizing hormone and follicle-stimulating hormone, leading to folliculogenesis and increased gonadal weight, although some animal studies concluded that WS had reversible spermicidal and infertility-inducing effects in male subjects. Conclusion WS was found to enhance spermatogenesis and sperm-related indices in males and sexual behavior in females. However, given some available evidence of spermicidal effects, further studies should focus on the extract preparation method and the dosage used in their study protocols. PMID:29670898

  13. Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.

    PubMed

    Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D

    2018-01-01

    Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning, using a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against the more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best-performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
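    For readers unfamiliar with the reported metrics, the sketch below shows how micro- and macro-averaged F scores differ for multi-class topography code prediction. It assumes scikit-learn and uses made-up labels, not the study's reports.

```python
# Sketch: micro- vs macro-averaged F1 for multi-class ICD-O-3 topography
# code prediction. Labels here are hypothetical examples, not study data.
from sklearn.metrics import f1_score

y_true = ["C50.4", "C50.4", "C50.4", "C50.4", "C34.1", "C34.9"]
y_pred = ["C50.4", "C50.4", "C50.4", "C50.4", "C50.4", "C34.9"]

micro = f1_score(y_true, y_pred, average="micro")                   # weights every report equally
macro = f1_score(y_true, y_pred, average="macro", zero_division=0)  # weights every code equally

# micro reflects the dominant code; macro is pulled down by the missed rare code
print(f"micro-F1={micro:.3f} macro-F1={macro:.3f}")
```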

  14. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with a good repeatability accuracy. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
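    Both the coarse and fine steps ultimately estimate a spatial transform between point sets. As a hedged illustration only, the sketch below solves the classic least-squares rigid alignment problem (Kabsch/SVD) for corresponding 3D points; it is not the authors' surface-registration pipeline, and the data are synthetic.

```python
# Sketch: least-squares rigid transform (rotation R, translation t) between
# corresponding 3D point sets via SVD (Kabsch). Illustrative only; the paper's
# pipeline uses surface-based coarse/fine registration, not this exact code.
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (N, 3) arrays of corresponding points. Returns R, t with dst ~ src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical check: recover a known rotation and translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = pts @ R_true.T + np.array([5.0, -2.0, 1.0])
R_est, t_est = rigid_transform(pts, moved)
print(np.allclose(R_est, R_true, atol=1e-6))     # True
```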

  15. Prophylactic antibiotics for manual removal of retained placenta in vaginal birth.

    PubMed

    Chongsomchai, Chompilas; Lumbiganon, Pisake; Laopaiboon, Malinee

    2014-10-20

    Retained placenta is a potentially life-threatening condition because of its association with postpartum hemorrhage. Manual removal of placenta increases the likelihood of bacterial contamination in the uterine cavity. To compare the effectiveness and side-effects of routine antibiotic use for manual removal of placenta in vaginal birth between women who received antibiotic prophylaxis and those who did not, and to identify the appropriate regimen of antibiotic prophylaxis for this procedure. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 July 2014). All randomized controlled trials comparing antibiotic prophylaxis and placebo or non-antibiotic use to prevent endometritis after manual removal of placenta in vaginal birth. There are no included trials. In future updates, if we identify eligible trials, two review authors will independently assess trial quality and extract data. No studies that met the inclusion criteria were identified. There are no randomized controlled trials to evaluate the effectiveness of antibiotic prophylaxis to prevent endometritis after manual removal of placenta in vaginal birth.

  16. DESIGN MANUAL: PHOSPHORUS REMOVAL

    EPA Science Inventory

    This manual summarizes process design information for the best-developed methods for removing phosphorus from wastewater. This manual discusses several proven phosphorus removal methods, including phosphorus removal obtainable through biological activity as well as chemical precip...

  17. Quantification of Estrogen Receptor-Alpha Expression in Human Breast Carcinomas With a Miniaturized, Low-Cost Digital Microscope: A Comparison with a High-End Whole Slide-Scanner

    PubMed Central

    Holmström, Oscar; Linder, Nina; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Turkki, Riku; Joensuu, Heikki; Isola, Jorma; Diwan, Vinod; Lundin, Johan

    2015-01-01

    Introduction A significant barrier to medical diagnostics in low-resource environments is the lack of medical care and equipment. Here we present a low-cost, cloud-connected digital microscope for applications at the point-of-care. We evaluate the performance of the device in the digital assessment of estrogen receptor-alpha (ER) expression in breast cancer samples. Studies suggest computer-assisted analysis of tumor samples digitized with whole slide-scanners may be comparable to manual scoring; here we study whether similar results can be obtained with the device presented. Materials and Methods A total of 170 samples of human breast carcinoma, immunostained for ER expression, were digitized with a high-end slide-scanner and the point-of-care microscope. Corresponding regions from the samples were extracted, and ER status was determined visually and digitally. Samples were classified as ER negative (<1% ER positivity) or positive, and further into weakly (1–10% positivity) and strongly positive. Interobserver agreement (Cohen’s kappa) was measured and correlation coefficients (Pearson’s product-moment) were calculated for comparison of the methods. Results Correlation and interobserver agreement (r = 0.98, p < 0.001, kappa = 0.84, CI95% = 0.75–0.94) were strong between the results from both devices. Concordance of the point-of-care microscope and the manual scoring was good (r = 0.94, p < 0.001, kappa = 0.71, CI95% = 0.61–0.80), and comparable to the concordance between the slide scanner and manual scoring (r = 0.93, p < 0.001, kappa = 0.69, CI95% = 0.60–0.78). Fourteen (8%) discrepant cases between manual and device-based scoring were present with the slide scanner, and 16 (9%) with the point-of-care microscope, all representing samples of low ER expression. Conclusions Tumor ER status can be accurately quantified with a low-cost imaging device and digital image-analysis, with results comparable to conventional computer-assisted or manual scoring. This technology could potentially be expanded for other histopathological applications at the point-of-care. PMID:26659386
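    The agreement statistics reported here (Cohen's kappa for categorical ER classes, Pearson's product-moment correlation for continuous scores) can be reproduced with standard libraries, as in the following sketch; the values are invented for illustration and are not the study data.

```python
# Sketch: agreement statistics of the kind reported in the study, computed on
# hypothetical ER-status categories and percentage scores (not the actual data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Categorical ER status (0 = negative, 1 = weakly positive, 2 = strongly positive)
scanner_class = [2, 2, 0, 1, 2, 0, 1, 2, 0, 2]
pocket_class  = [2, 2, 0, 1, 2, 0, 0, 2, 0, 2]
kappa = cohen_kappa_score(scanner_class, pocket_class)

# Continuous percentage of ER-positive nuclei from the two devices
scanner_pct = np.array([85.0, 70.0, 0.5, 6.0, 92.0, 0.2, 4.0, 60.0, 0.8, 77.0])
pocket_pct  = np.array([82.0, 74.0, 0.4, 5.0, 90.0, 0.3, 3.5, 63.0, 0.9, 75.0])
r, p_value = pearsonr(scanner_pct, pocket_pct)

print(f"kappa={kappa:.2f}  r={r:.2f}  p={p_value:.1e}")
```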

  18. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    PubMed

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.
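    A generic building block behind such correspondence estimation is nearest-neighbor descriptor matching with a ratio test; the sketch below illustrates the idea on random stand-in descriptors and is not the authors' 3D-feature implementation or voting scheme.

```python
# Sketch: nearest-neighbor descriptor matching with a Lowe-style ratio test, a
# generic building block behind feature-correspondence estimation. Descriptors
# here are random stand-ins, not the paper's 3D ultrasound features.
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return a list of (i, j) index pairs matching rows of desc_a to desc_b."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:   # accept only unambiguous matches
            matches.append((i, nearest))
    return matches

rng = np.random.default_rng(1)
desc_pre = rng.normal(size=(200, 64))                # features from pre-resection image
desc_post = desc_pre[rng.permutation(200)] + 0.05 * rng.normal(size=(200, 64))
print(len(match_descriptors(desc_pre, desc_post)))
```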

  19. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  20. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
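    The paper's main conclusion, that neighborhood-aware intensity features combined with a simple classifier such as logistic regression perform well, can be illustrated with the toy sketch below; the volumes are synthetic, not MRI data, and the feature set is deliberately minimal.

```python
# Sketch: neighborhood-aware features paired with a simple classifier (logistic
# regression), illustrated on synthetic multimodal voxel data (not actual MRI).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
shape = (32, 32, 32)
lesion = np.zeros(shape, dtype=bool)
lesion[10:14, 10:14, 10:14] = True                       # toy "lesion" block

# Toy T1-w, T2-w and FLAIR volumes: lesions brighter on FLAIR/T2, darker on T1
t1 = rng.normal(1.0, 0.1, shape) - 0.3 * lesion
t2 = rng.normal(1.0, 0.1, shape) + 0.4 * lesion
flair = rng.normal(1.0, 0.1, shape) + 0.6 * lesion

def features(vol, size=3):
    # voxel intensity plus a local mean over a size**3 neighborhood
    return np.stack([vol.ravel(), uniform_filter(vol, size=size).ravel()], axis=1)

X = np.hstack([features(v) for v in (t1, t2, flair)])
y = lesion.ravel().astype(int)

clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
print("training accuracy:", clf.score(X, y))
```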

  1. Comparison of the quantification of acetaminophen in plasma, cerebrospinal fluid and dried blood spots using high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Taylor, Rachel R; Hoffman, Keith L; Schniedewind, Björn; Clavijo, Claudia; Galinkin, Jeffrey L; Christians, Uwe

    2013-09-01

    Acetaminophen (paracetamol, N-(4-hydroxyphenyl) acetamide) is one of the most commonly prescribed drugs for the management of pain in children. Quantification of acetaminophen in pre-term and term neonates and small children requires the availability of highly sensitive assays in small volume blood samples. We developed and validated an LC-MS/MS assay for the quantification of acetaminophen in human plasma, cerebrospinal fluid (CSF) and dried blood spots (DBS). Reconstitution in water (DBS only) and addition of a protein precipitation solution containing the deuterated internal standard were the only manual steps. Extracted samples were analyzed on a Kinetex 2.6 μm PFP column using an acetonitrile/formic acid gradient. The analytes were detected in the positive multiple reaction mode. Alternatively, DBS were automatically processed using direct desorption in a sample card and preparation (SCAP) robotic autosampler in combination with online extraction. The range of reliable response was 3.05-20,000 ng/ml (r² > 0.99) in plasma and CSF and 27.4-20,000 ng/ml (r² > 0.99) for DBS (manual extraction and automated direct desorption). Inter-day accuracy was always within 85-115%, and inter-day precision for plasma, CSF and manually extracted DBS was less than 15%. Deming regression analysis comparing 167 matching pairs of plasma and DBS samples showed a correlation coefficient of 0.98. Bland-Altman analysis indicated a 26.6% positive bias in DBS, most likely reflecting the blood:plasma distribution ratio of acetaminophen. DBS are a valid matrix for acetaminophen pharmacokinetic studies. Copyright © 2013 Elsevier B.V. All rights reserved.
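    The Bland-Altman comparison used for the DBS-versus-plasma agreement can be computed as in the following sketch; the concentrations below are invented for illustration only.

```python
# Sketch: Bland-Altman bias and limits of agreement for paired DBS vs plasma
# concentrations. Values below are made up for illustration.
import numpy as np

plasma = np.array([120.0, 850.0, 2300.0, 4100.0, 9800.0, 15200.0])
dbs    = np.array([150.0, 1050.0, 2900.0, 5200.0, 12600.0, 19500.0])

diff = dbs - plasma
mean = (dbs + plasma) / 2.0
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # 95% limits of agreement

percent_bias = 100.0 * (diff / mean).mean()   # relative bias, cf. the ~27% reported
print(f"bias={bias:.0f} ng/ml, limits of agreement=[{bias - loa:.0f}, {bias + loa:.0f}], "
      f"relative bias={percent_bias:.1f}%")
```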

  2. Spent NiMH batteries-The role of selective precipitation in the recovery of valuable metals

    NASA Astrophysics Data System (ADS)

    Bertuol, Daniel Assumpção; Bernardes, Andréa Moura; Tenório, Jorge Alberto Soares

    The production of electronic equipment, such as computers and cell phones, and, consequently, batteries, has increased dramatically. One of the types of batteries whose production and consumption has increased in recent times is the nickel metal hydride (NiMH) battery. This study evaluated a hydrometallurgical method for the recovery of rare earths and a simple method to obtain a solution rich in Ni-Co from spent NiMH batteries. The active materials from both electrodes were manually removed from the accumulators and leached. Several acidic and basic solutions for the recovery of rare earths were evaluated. Results showed that more than 98 wt.% of the rare earths were recovered as sulfate salts by dissolution with sulfuric acid, followed by selective precipitation at pH 1.2 using sodium hydroxide. The complete process, precipitation at pH 1.2 followed by precipitation at pH 7, removed about 100 wt.% of iron and 70 wt.% of zinc from the leaching solution. Results were similar to those found in studies that used solvent extraction. This method is simple and economical, and does not pose the environmental threats of solvent extraction.

  3. Rapid Bedside Inactivation of Ebola Virus for Safe Nucleic Acid Tests.

    PubMed

    Rosenstierne, Maiken Worsøe; Karlberg, Helen; Bragstad, Karoline; Lindegren, Gunnel; Stoltz, Malin Lundahl; Salata, Cristiano; Kran, Anne-Marte Bakken; Dudman, Susanne Gjeruldsen; Mirazimi, Ali; Fomsgaard, Anders

    2016-10-01

    Rapid bedside inactivation of Ebola virus would be a solution for the safety of medical and technical staff, risk containment, sample transport, and high-throughput or rapid diagnostic testing during an outbreak. We show that the commercially available Magna Pure lysis/binding buffer used for nucleic acid extraction inactivates Ebola virus. A rapid bedside inactivation method for nucleic acid tests is obtained by simply adding Magna Pure lysis/binding buffer directly into vacuum blood collection EDTA tubes using a thin needle and syringe prior to sampling. The ready-to-use inactivation vacuum tubes are stable for more than 4 months, and Ebola virus RNA is preserved in the Magna Pure lysis/binding buffer for at least 5 weeks independent of the storage temperature. We also show that Ebola virus RNA can be manually extracted from Magna Pure lysis/binding buffer-inactivated samples using the QIAamp viral RNA minikit. We present an easy and convenient method for bedside inactivation using available blood collection vacuum tubes and reagents. We propose to use this simple method for fast, safe, and easy bedside inactivation of Ebola virus for safe transport and routine nucleic acid detection. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  4. A new idea for visualization of lesions distribution in mammogram based on CPD registration method.

    PubMed

    Pan, Xiaoguang; Qi, Buer; Yu, Hongfei; Wei, Haiping; Kang, Yan

    2017-07-20

    Mammography is currently the most effective technique for breast cancer detection. Lesion distribution can provide support for clinical diagnosis and epidemiological studies. We present a new idea to help radiologists study breast lesion distribution conveniently. We also developed an automatic tool based on this idea which can visualize lesion distribution in a standard mammogram. First, a lesion database is established; then, breast contours are extracted and different women's mammograms are matched to a standard mammogram; finally, the lesion distribution is shown in the standard mammogram, together with distribution statistics. The crucial process in developing this tool was matching different women's mammograms correctly. We used a hybrid breast contour extraction method combined with the coherent point drift method to match different women's mammograms. We tested our automatic tool on four mass datasets of 641 images. The distribution results shown by the tool were consistent with the results counted manually from the corresponding reports and mammograms. We also assessed the registration error, which was less than 3.3 mm in average distance. The new idea is effective, and the automatic tool can provide lesion distribution results consistent with radiologists' findings simply and conveniently.

  5. Detection of cardiac activity using a 5.8 GHz radio frequency sensor.

    PubMed

    Vasu, V; Fox, N; Brabetz, T; Wren, M; Heneghan, C; Sezer, S

    2009-01-01

    A 5.8-GHz ISM-band radio-frequency sensor has been developed for non-contact measurement of respiration and heart rate from stationary and semi-stationary subjects at a distance of 0.5 to 1.5 meters. We report on the accuracy of the heart rate measurements obtained using two algorithmic approaches, as compared to a reference heart rate obtained using a pulse oximeter. Simultaneous photoplethysmograph (PPG) and non-contact sensor recordings were acquired over fifteen-minute periods for ten healthy subjects (8M/2F, ages 29.6 ± 5.6 yrs). One algorithm is based on automated detection of individual peaks associated with each cardiac cycle; a second algorithm extracts a heart rate over a 60-second period using spectral analysis. Peaks were also extracted manually for comparison with the automated method. The peak-detection methods were less accurate than the spectral methods, but suggest the possibility of acquiring beat-by-beat data; the spectral algorithms measured heart rate to within ±10% for the ten subjects chosen. Non-contact measurement of heart rate will be useful in chronic disease monitoring for conditions such as heart failure and cardiovascular disease.
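    The spectral algorithm amounts to locating the dominant peak in the cardiac frequency band of a 60-second window; a minimal sketch on a synthetic signal follows, with the sample rate and heart rate assumed for illustration.

```python
# Sketch: estimating heart rate from a 60-second window by locating the dominant
# spectral peak in the cardiac band, in the spirit of the spectral algorithm.
import numpy as np

fs = 50.0                                      # assumed sample rate (Hz)
t = np.arange(0, 60, 1 / fs)
hr_true = 72.0                                 # synthetic 72 bpm cardiac signal
signal = 0.3 * np.sin(2 * np.pi * (hr_true / 60.0) * t) + 0.1 * np.random.randn(t.size)

window = signal - signal.mean()
spectrum = np.abs(np.fft.rfft(window * np.hanning(window.size)))
freqs = np.fft.rfftfreq(window.size, d=1 / fs)

band = (freqs >= 0.7) & (freqs <= 3.0)         # 42-180 bpm
hr_est = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_est:.1f} bpm")
```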

  6. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low-cost information extraction and faster quality assessment, without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
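    The segmentation core, clustering pixel spectra with a Gaussian Mixture Model, can be sketched as below; the multispectral cube and the two-component choice are illustrative assumptions, and the band-selection scheme is omitted.

```python
# Sketch: unsupervised pixel clustering of a multispectral image with a Gaussian
# Mixture Model, the core idea of the segmentation step (band selection omitted).
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical multispectral cube: height x width x bands reflectance values
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 18))
cube[20:40, 20:40, :] += 0.5                       # a brighter "region of interest"

pixels = cube.reshape(-1, cube.shape[-1])          # one spectrum per pixel
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels).reshape(cube.shape[:2])

print("pixels per segment:", np.bincount(labels.ravel()))
```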

  7. High-throughput measurement of rice tillers using a conveyor equipped with x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Yang, Wanneng; Xu, Xiaochun; Duan, Lingfeng; Luo, Qingming; Chen, Shangbin; Zeng, Shaoqun; Liu, Qian

    2011-02-01

    Tillering is one of the most important agronomic traits because the number of shoots per plant determines panicle number, a key component of grain yield. The conventional method of counting tillers is still manual. In mass measurements, accuracy and efficiency gradually degrade as experienced staff become fatigued. Thus, manual measurement, including counting and recording, is not only time consuming but also lacks objectivity. To automate this process, we developed a high-throughput facility, dubbed high-throughput system for measuring automatically rice tillers (H-SMART), for measuring rice tillers based on a conventional x-ray computed tomography (CT) system and industrial conveyor. Each pot-grown rice plant was delivered into the CT system for scanning via the conveyor equipment. A filtered back-projection algorithm was used to reconstruct the transverse section image of the rice culms. The number of tillers was then automatically extracted by image segmentation. To evaluate the accuracy of this system, three batches of rice at different growth stages (tillering, heading, or filling) were tested, yielding mean absolute errors of 0.22, 0.36, and 0.36, respectively. Subsequently, the complete machine was used under industry conditions to estimate its efficiency, which was 4320 pots per continuous 24 h workday. Thus, the H-SMART could determine the number of tillers of pot-grown rice plants, providing three advantages over the manual tillering method: absence of human disturbance, automation, and high throughput. This facility expands the application of agricultural photonics in plant phenomics.

  8. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics

    PubMed Central

    Poeschl, Yvonne; Plötner, Romina

    2017-01-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626

  9. High-throughput measurement of rice tillers using a conveyor equipped with x-ray computed tomography.

    PubMed

    Yang, Wanneng; Xu, Xiaochun; Duan, Lingfeng; Luo, Qingming; Chen, Shangbin; Zeng, Shaoqun; Liu, Qian

    2011-02-01

    Tillering is one of the most important agronomic traits because the number of shoots per plant determines panicle number, a key component of grain yield. The conventional method of counting tillers is still manual. In mass measurements, accuracy and efficiency gradually degrade as experienced staff become fatigued. Thus, manual measurement, including counting and recording, is not only time consuming but also lacks objectivity. To automate this process, we developed a high-throughput facility, dubbed high-throughput system for measuring automatically rice tillers (H-SMART), for measuring rice tillers based on a conventional x-ray computed tomography (CT) system and industrial conveyor. Each pot-grown rice plant was delivered into the CT system for scanning via the conveyor equipment. A filtered back-projection algorithm was used to reconstruct the transverse section image of the rice culms. The number of tillers was then automatically extracted by image segmentation. To evaluate the accuracy of this system, three batches of rice at different growth stages (tillering, heading, or filling) were tested, yielding mean absolute errors of 0.22, 0.36, and 0.36, respectively. Subsequently, the complete machine was used under industry conditions to estimate its efficiency, which was 4320 pots per continuous 24 h workday. Thus, the H-SMART could determine the number of tillers of pot-grown rice plants, providing three advantages over the manual tillering method: absence of human disturbance, automation, and high throughput. This facility expands the application of agricultural photonics in plant phenomics.
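    The tiller-counting step, segmenting the reconstructed cross-section and counting connected components, can be sketched as follows on a synthetic slice; the thresholds and geometry are illustrative, not the paper's calibration.

```python
# Sketch: counting tillers in a reconstructed CT cross-section by thresholding
# and connected-component labelling; the image here is synthetic.
import numpy as np
from scipy import ndimage

# Synthetic transverse slice: background ~0, a few circular culm cross-sections ~1
slice_img = np.zeros((200, 200))
for cx, cy in [(60, 60), (60, 140), (140, 60), (140, 140), (100, 100)]:
    yy, xx = np.ogrid[:200, :200]
    slice_img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 8 ** 2] = 1.0
slice_img += 0.05 * np.random.default_rng(0).normal(size=slice_img.shape)

binary = slice_img > 0.5                                # segment culm material
binary = ndimage.binary_opening(binary, iterations=1)   # suppress speckle noise
labels, n_tillers = ndimage.label(binary)
print("tillers counted:", n_tillers)                    # 5 for this synthetic slice
```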

  10. The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data.

    PubMed

    Puccio, Benjamin; Pooley, James P; Pellman, John S; Taverna, Elise C; Craddock, R Cameron

    2016-10-25

    Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.

  11. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
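    The described modification, swapping AlexNet's local response normalization for batch normalization, might look like the following PyTorch sketch; the channel sizes and number of defect classes are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: an AlexNet-style convolutional block where local response normalization
# is replaced by batch normalization, as the paper describes. Channel sizes and
# the number of defect classes are illustrative, not the authors' exact network.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel, stride=1, padding=0):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel, stride=stride, padding=padding),
            nn.BatchNorm2d(out_ch),          # replaces AlexNet's LRN layers
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )

    def forward(self, x):
        return self.block(x)

model = nn.Sequential(
    ConvBlock(3, 64, kernel=11, stride=4, padding=2),
    ConvBlock(64, 192, kernel=5, padding=2),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(192, 5),                       # e.g. 5 defect classes (hypothetical)
)

logits = model(torch.randn(8, 3, 224, 224))  # a batch of fabric patches
print(logits.shape)                          # torch.Size([8, 5])
```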

  12. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed; it should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was experimented on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good-quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  13. Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.

    PubMed

    Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué

    2014-06-12

    Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated from the imaging process and from the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method for an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterizations of tree arborization.

  14. Application of the BioMek 2000 Laboratory Automation Workstation and the DNA IQ System to the extraction of forensic casework samples.

    PubMed

    Greenspoon, Susan A; Ban, Jeffrey D; Sykes, Karen; Ballard, Elizabeth J; Edler, Shelley S; Baisden, Melissa; Covington, Brian L

    2004-01-01

    Robotic systems are commonly utilized for the extraction of database samples. However, the application of robotic extraction to forensic casework samples is a more daunting task. Such a system must be versatile enough to accommodate a wide range of samples that may contain greatly varying amounts of DNA, but it must also pose no more risk of contamination than the manual DNA extraction methods. This study demonstrates that the BioMek 2000 Laboratory Automation Workstation, used in combination with the DNA IQ System, is versatile enough to accommodate the wide range of samples typically encountered by a crime laboratory. The use of a silica coated paramagnetic resin, as with the DNA IQ System, facilitates the adaptation of an open well, hands off, robotic system to the extraction of casework samples since no filtration or centrifugation steps are needed. Moreover, the DNA remains tightly coupled to the silica coated paramagnetic resin for the entire process until the elution step. A short pre-extraction incubation step is necessary prior to loading samples onto the robot and it is at this step that most modifications are made to accommodate the different sample types and substrates commonly encountered with forensic evidentiary samples. Sexual assault (mixed stain) samples, cigarette butts, blood stains, buccal swabs, and various tissue samples were successfully extracted with the BioMek 2000 Laboratory Automation Workstation and the DNA IQ System, with no evidence of contamination throughout the extensive validation studies reported here.

  15. An algorithm to track laboratory zebrafish shoals.

    PubMed

    Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia

    2018-05-01

    In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm splits this blob and reestablishes the instances' identities. If the algorithm cannot determine the identity of a blob, manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method was then compared against two well-known zebrafish tracking methods found in the literature: one that handles occlusion scenarios and one that only tracks fish that are not in occlusion. Based on the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, increasing efficiency in at least 8.15% of the cases. As for the second, the proposed method's overall performance was better in some of the tested videos, especially those with lower image quality, because the second method requires high spatial-resolution images, which is not a requirement for the proposed method. Yet, the proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
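    A constant-velocity Kalman filter of the kind used to predict each fish's next centroid is sketched below; the state model, noise covariances and measurements are illustrative assumptions, not the authors' parameter choices.

```python
# Sketch: a constant-velocity Kalman filter of the kind used to predict each
# fish's next position; matrices and noise levels here are illustrative.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)    # state [x, y, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                              # process noise
        self.R = np.eye(2) * 1.0                               # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                      # predicted centroid

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = ConstantVelocityKF(10.0, 20.0)
for centroid in [(11, 21), (12, 22), (13, 23)]:
    predicted = kf.predict()        # used to pick the best candidate blob
    kf.update(centroid)             # corrected with the blob actually selected
print(kf.x[:2])
```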

  16. CHAD User's Guide: Extracting Human Activity Information from CHAD on the PC

    EPA Pesticide Factsheets

    User manual that includes tutorials, what is inside the CHAD databases, background on individual studies in CHAD, using data from individual studies, caveats, problems, notes, and database design and development.

  17. Is there an association between assisted reproductive technologies and time and complications of the third stage of labor?

    PubMed

    Aziz, Michael Matean; Guirguis, George; Maratto, Sean; Benito, Carlos; Forman, Eric J

    2016-06-01

    To determine if vaginal deliveries exposed to assisted reproductive technologies (ART) are associated with an increased time between delivery of the neonate and placenta and select complications. A retrospective cohort of patients enrolled in an infertility practice who had term, singleton, vaginal deliveries at two academic hospitals from 2008 to 2013 was analyzed. Controls were patients with spontaneous conceptions after infertility consultations. The exposure groups were patients with controlled ovarian hyper-stimulation (COH) with in vivo fertilization, COH with in vitro fertilization and fresh embryo transfer (COH/IVF), and frozen embryo transfer or oocyte donation recipients without COH (non-COH ET). Multiple gestations and stillbirths were excluded. Median time of third stage was compared using the Mann-Whitney U test. Secondary outcomes of retained placenta, manual placental extraction, and post-partum hemorrhage (PPH) were compared using Chi-square or Fisher's exact analyses. A total of 769 patients met criteria and were analyzed. While there were no differences in time of third stage of labor, retained placenta, or PPH, manual extraction was significantly more common among non-COH ET [age-adjusted OR 5.6 (95 % CI 2.2-13.8); p < 0.001]. Patients who conceived after non-COH ET were at increased risk for manual placental extraction. This association was not influenced by age differences between groups. Further research must be done to determine which elements of the ART process are responsible for these differences.

  18. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling an oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and Es layer, such as maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters and the initial values of the three remaining QP-model parameters, whose search spaces are the required input to the HGA. The HGA then searches for the best-fit values of these three parameters within their search spaces based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
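    A bare-bones real-coded genetic algorithm of the kind the HGA builds on is sketched below; the quadratic "trace model", bounds and GA settings are stand-ins for illustration, not the paper's QP-model synthesis or its hybrid operators.

```python
# Sketch: a bare-bones real-coded genetic algorithm searching three parameters
# within given bounds to maximize the fit between a synthesized curve and an
# observed one. The quadratic "trace model" stands in for the QP-model synthesis.
import numpy as np

rng = np.random.default_rng(0)

def synthesize(params, f):
    a, b, c = params                       # stand-in for the three searched parameters
    return a * (f - b) ** 2 + c

freqs = np.linspace(5.0, 25.0, 50)
observed = synthesize((0.8, 14.0, 200.0), freqs) + rng.normal(0, 2.0, freqs.size)

def fitness(params):
    return -np.mean((synthesize(params, freqs) - observed) ** 2)

bounds = np.array([[0.1, 2.0], [10.0, 20.0], [150.0, 300.0]])   # search spaces
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))

for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-30:]]                     # keep the fittest half
    mates = parents[rng.integers(0, 30, size=(30, 2))]
    children = mates.mean(axis=1)                               # arithmetic crossover
    children += rng.normal(0, 0.02, children.shape) * (bounds[:, 1] - bounds[:, 0])
    children = np.clip(children, bounds[:, 0], bounds[:, 1])    # mutation + bounds
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best-fit parameters:", np.round(best, 2))
```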

  19. Research on simulated infrared image utility evaluation using deep representation

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin

    2018-01-01

    Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on the fidelity and authenticity of the simulated images. For evaluating IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually rely on a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large amounts of IR images. Then, we present the evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments show that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2 ≤ γ ≤ 0.3, which provides an effective data augmentation method for real IR images.

  20. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    PubMed

    Bashar, Md Khayrul; Komatsu, Koji; Fujimori, Toshihiko; Kobayashi, Tetsuya J

    2012-01-01

    Accurate identification of cell nuclei and their tracking using three dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which contrasts sharply with 2D analysis, where many methods already exist. In addition, most methods essentially rely on segmentation, for which a reliable solution is still unknown, especially for 3D bio-images with juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) Pre-processing, (ii) Local enhancement, and (iii) Centroid extraction. The first step includes two variations: the first variation (Variant-1) uses the whole 3D pre-processed image, whereas the second one (Variant-2) restricts the pre-processed image to the candidate regions or the candidate hybrid image for further processing. At the second step, multiscale cube filtering is employed in order to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 processing removes spurious centroids from Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest neighborhood principle is then applied to merge fragmented nuclei. Both qualitative and quantitative analyses on a set of 100 3D mouse embryo images are performed. The investigations reveal promising performance of the presented technique in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2), when compared with an existing method (86.06% and 90.11%) originally developed for analyzing C. elegans images.
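    The centroid-extraction idea, local enhancement followed by selecting voxels that are local maxima above a ratio threshold, can be sketched as follows on a synthetic volume; the filter sizes and thresholds are illustrative, not the paper's multiscale cube filter or characteristic ratio.

```python
# Sketch: extracting bright local maxima as candidate nuclei centroids from a 3D
# volume after local enhancement; the volume and thresholds are synthetic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
volume = rng.normal(0.0, 0.05, size=(40, 40, 40))
for z, y, x in [(10, 10, 10), (25, 30, 15), (30, 12, 28)]:     # three "nuclei"
    volume[z - 2:z + 3, y - 2:y + 3, x - 2:x + 3] += 1.0

enhanced = ndimage.uniform_filter(volume, size=5)              # stand-in for cube filtering

# Candidate centroids: voxels that equal the local maximum and exceed a ratio threshold
local_max = ndimage.maximum_filter(enhanced, size=7)
candidates = (enhanced == local_max) & (enhanced > 0.5 * enhanced.max())
centroids = np.argwhere(candidates)
print(len(centroids), "candidate centroids:\n", centroids)
```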

  1. Mining hidden data to predict patient prognosis: texture feature extraction and machine learning in mammography

    NASA Astrophysics Data System (ADS)

    Leighs, J. A.; Halling-Brown, M. D.; Patel, M. N.

    2018-03-01

    The UK currently has a national breast cancer-screening program and images are routinely collected from a number of screening sites, representing a wealth of invaluable data that is currently under-used. Radiologists evaluate screening images manually and recall suspicious cases for further analysis such as biopsy. Histological testing of biopsy samples confirms the malignancy of the tumour, along with other diagnostic and prognostic characteristics such as disease grade. Machine learning is becoming increasingly popular for clinical image classification problems, as it is capable of discovering patterns in data otherwise invisible. This is particularly true when applied to medical imaging features; however clinical datasets are often relatively small. A texture feature extraction toolkit has been developed to mine a wide range of features from medical images such as mammograms. This study analysed a dataset of 1,366 radiologist-marked, biopsy-proven malignant lesions obtained from the OPTIMAM Medical Image Database (OMI-DB). Exploratory data analysis methods were employed to better understand extracted features. Machine learning techniques including Classification and Regression Trees (CART), ensemble methods (e.g. random forests), and logistic regression were applied to the data to predict the disease grade of the analysed lesions. Prediction scores of up to 83% were achieved; sensitivity and specificity of the models trained have been discussed to put the results into a clinical context. The results show promise in the ability to predict prognostic indicators from the texture features extracted and thus enable prioritisation of care for patients at greatest risk.
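    The kind of model the study trains, here a random forest predicting a binary grade label from texture features with sensitivity and specificity reported on a held-out split, is sketched below; the feature matrix is simulated, not OMI-DB data.

```python
# Sketch: predicting a binary disease-grade label from extracted texture features
# with a random forest, reporting sensitivity and specificity on a held-out split.
# The feature matrix here is simulated, not OMI-DB data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_lesions, n_features = 1366, 40
X = rng.normal(size=(n_lesions, n_features))                  # texture features
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1.0, n_lesions) > 0).astype(int)  # high vs low grade

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}")
```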

  2. Validity of the iPhone M7 motion co-processor as a pedometer for able-bodied ambulation.

    PubMed

    Major, Matthew J; Alford, Micah

    2016-12-01

    Physical activity benefits for disease prevention are well-established. Smartphones offer a convenient platform for community-based step count estimation to monitor and encourage physical activity. Accuracy is dependent on hardware-software platforms, creating a recurring challenge for validation, but the Apple iPhone® M7 motion co-processor provides a standardised method that helps address this issue. Validity of the M7 to record step count for level-ground, able-bodied walking at three self-selected speeds, and agreement with the StepWatch™, was assessed. Steps were measured concurrently with the iPhone® (custom application to extract step count), StepWatch™ and manual count. Agreement between iPhone® and manual/StepWatch™ counts was estimated through Pearson correlation and Bland-Altman analyses. Data from 20 participants suggested that iPhone® step count correlations with manual and StepWatch™ counts were strong for customary (1.3 ± 0.1 m/s) and fast (1.8 ± 0.2 m/s) speeds, but weak for the slow (1.0 ± 0.1 m/s) speed. Mean absolute error (manual − iPhone®) was 21%, 8% and 4% for the slow, customary and fast speeds, respectively. The M7 accurately records step count during customary and fast walking speeds, but is prone to considerable inaccuracies at slow speeds, which has important implications for certain patient groups. The iPhone® may be a suitable alternative to the StepWatch™ for only faster walking speeds.

  3. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads.

    PubMed

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-02-04

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection into the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and vibrated through the washing reagents in the capillary tube under the control of the mechanical control system, and thus the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted in 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction depends on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and those of the conventional manual procedure. We have already developed the droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available as a point-of-care test in clinics as well as general hospitals. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javaid, Zarrar; Unsworth, Charles P., E-mail: c.unsworth@auckland.ac.nz; Boocock, Mark G.

    2016-03-15

    Purpose: The aim of this work is to demonstrate a new, user-friendly image processing technique that can provide a “near real-time” 3D reconstruction of the articular cartilage of the human knee from MR images. This would serve as a point-of-care 3D visualization tool which would benefit a consultant radiologist in the visualization of the human articular cartilage. Methods: The authors introduce a novel fusion of an adaptation of the contour method known as “contour interpolation (CI)” with radial basis functions (RBFs) which they describe as “CI-RBFs.” The authors also present a spline boundary correction which further enhances the volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Results: The authors demonstrate how the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively, thus significantly improving the speed of reconstruction (p-value < 0.0001) by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing a near real-time reconstruction of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated how the CI-RBF method matches the volume estimation of a typical commercial package (3D DOCTOR), Carr’s RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Furthermore, the performance of the segmentation method used for the extraction of the femoral, tibial, and patellar cartilages is assessed with a Dice similarity coefficient, sensitivity, and specificity measure providing high agreement to manual segmentation. Conclusions: The CI-RBF method provides a fast, accurate, and robust 3D model reconstruction that matches Carr’s RBF method, 3D DOCTOR, and a manual benchmark method in accuracy and significantly improves upon Carr’s RBF method in data requirement and computational speed. In addition, the visualization tool has been designed to quickly segment MR images, requiring only four mouse clicks per MR image slice.
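
    A minimal sketch of the kind of implicit RBF surface fit referenced above (the classic Carr-style construction with signed off-surface points), using SciPy's RBFInterpolator; the CI point-reduction and spline boundary correction described by the authors are not reproduced, and the sphere data are purely illustrative.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def fit_implicit_rbf(surface_pts, normals, offset=1.0):
          """Fit f(x) ~ 0 on the surface, +/- offset along the normals."""
          pts = np.vstack([surface_pts,
                           surface_pts + offset * normals,
                           surface_pts - offset * normals])
          vals = np.concatenate([np.zeros(len(surface_pts)),
                                 np.full(len(surface_pts), offset),
                                 np.full(len(surface_pts), -offset)])
          return RBFInterpolator(pts, vals, kernel="thin_plate_spline")

      # Toy example: points sampled on a sphere of radius 10 with outward normals.
      rng = np.random.default_rng(1)
      dirs = rng.normal(size=(200, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      f = fit_implicit_rbf(10.0 * dirs, dirs)
      # Expect a value near 0 on the surface and a negative value inside.
      print(f(np.array([[0.0, 0.0, 10.0], [0.0, 0.0, 0.0]])))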

  5. Semi-automating the manual literature search for systematic reviews increases efficiency.

    PubMed

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results in terms of accuracy (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  6. Hybrid Clustering And Boundary Value Refinement for Tumor Segmentation using Brain MRI

    NASA Astrophysics Data System (ADS)

    Gupta, Anjali; Pahuja, Gunjan

    2017-08-01

    The method of brain tumor segmentation is the separation of the tumor area from brain Magnetic Resonance (MR) images. A number of methods already exist for efficient segmentation of brain tumors; however, identifying the brain tumor in MR images remains a tedious task. The segmentation process is the extraction of different tumor tissues, such as active tumor, necrosis, and edema, from the normal brain tissues such as gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). According to the survey, brain tumors are most often detected easily from brain MR images using region-based approaches, but the required level of accuracy and reliable classification of abnormalities are not guaranteed. The segmentation of brain tumors consists of many stages. Manually segmenting the tumor from brain MR images is very time-consuming, and manual segmentation poses many challenges. In this research paper, our main goal is to present a hybrid clustering approach, which combines Fuzzy C-Means clustering (for accurate tumor detection) and the level set method (for handling complex shapes), for the detection of the exact shape of the tumor in minimal computational time. Using this approach, we observe that for a certain set of images the tumor is detected in 0.9412 s, which is considerably less than a recent existing algorithm, i.e., hybrid clustering (Fuzzy C-Means and K-Means clustering).
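
    A minimal NumPy sketch of the Fuzzy C-Means stage only (the level-set refinement and any tumor-specific pre/post-processing are not reproduced); the 1-D "intensity" data and parameter values are illustrative assumptions.

      import numpy as np

      def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
          """Plain Fuzzy C-Means: alternate membership and centroid updates."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(X), n_clusters))
          u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
          for _ in range(n_iter):
              um = u ** m
              centers = (um.T @ X) / um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              inv = 1.0 / d ** (2.0 / (m - 1))
              u_new = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
              if np.abs(u_new - u).max() < tol:
                  u = u_new
                  break
              u = u_new
          return centers, u

      # Toy example: cluster 1-D "intensities" into background vs. bright region.
      rng = np.random.default_rng(1)
      intensities = np.concatenate([rng.normal(30, 5, 500),
                                    rng.normal(200, 10, 100)])[:, None]
      centers, memberships = fuzzy_c_means(intensities, n_clusters=2)
      print(np.sort(centers.ravel()))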

  7. Automatic corpus callosum segmentation for standardized MR brain scanning

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Chen, Hong; Zhang, Li; Novak, Carol L.

    2007-03-01

    Magnetic Resonance (MR) brain scanning is often planned manually with the goal of aligning the imaging plane with key anatomic landmarks. The planning is time-consuming and subject to inter- and intra-operator variability. An automatic and standardized planning of brain scans is highly useful for clinical applications, and for maximum utility should work on patients of all ages. In this study, we propose a method for fully automatic planning that utilizes the landmarks from two orthogonal images to define the geometry of the third scanning plane. The corpus callosum (CC) is segmented in sagittal images by an active shape model (ASM), and the result is further improved by weighting the boundary movement with confidence scores and incorporating region-based refinement. Based on the extracted contour of the CC, several important landmarks are located and then combined with landmarks from the coronal or transverse plane to define the geometry of the third plane. Our automatic method is tested on 54 MR images from 24 patients and 3 healthy volunteers, with ages ranging from 4 months to 70 years old. The average accuracy with respect to two manually labeled points on the CC is 3.54 mm and 4.19 mm, and the estimated orientation differs by an average of 2.48 degrees from that of the line connecting them, demonstrating that our method is sufficiently accurate for clinical use.

  8. FRankenstein becomes a cyborg: the automatic recombination and realignment of fold recognition models in CASP6.

    PubMed

    Kosinski, Jan; Gajda, Michal J; Cymerman, Iwona A; Kurowski, Michal A; Pawlowski, Marcin; Boniecki, Michal; Obarska, Agnieszka; Papaj, Grzegorz; Sroczynska-Obuchowicz, Paulina; Tkaczuk, Karolina L; Sniezynska, Paulina; Sasin, Joanna M; Augustyn, Anna; Bujnicki, Janusz M; Feder, Marcin

    2005-01-01

    In the course of CASP6, we generated models for all targets using a new version of the "FRankenstein's monster approach." Previously (in CASP5) we were able to build many very accurate full-atom models by selection and recombination of well-folded fragments obtained from crude fold recognition (FR) results, followed by optimization of the sequence-structure fit and assessment of alternative alignments on the structural level. This procedure was however very arduous, as most of the steps required extensive visual and manual input from the human modeler. Now, we have automated the most tedious steps, such as superposition of alternative models, extraction of best-scoring fragments, and construction of a hybrid "monster" structure, as well as generation of alternative alignments in the regions that remain poorly scored in the refined hybrid model. We have also included the ROSETTA method to construct those parts of the target for which no reasonable structures were generated by FR methods (such as long insertions and terminal extensions). The analysis of successes and failures of the current version of the FRankenstein approach in modeling of CASP6 targets reveals that the considerably streamlined and automated method performs almost as well as the initial, mostly manual version, which suggests that it may be a useful tool for accurate protein structure prediction even in the hands of nonexperts. 2005 Wiley-Liss, Inc.

  9. Handedness in Nature: First Evidence on Manual Laterality on Bimanual Coordinated Tube Task in Wild Primates

    PubMed Central

    Zhao, Dapeng; Hopkins, William D.; Li, Baoguo

    2012-01-01

    Handedness is a defining feature of human manual skill and understanding the origin of manual specialization remains a central topic of inquiry in anthropology and other sciences. In this study, we examined hand preference in a sample of wild primates on a task that requires bimanual coordinated actions (tube task) that has been widely used in captive primates. The Sichuan snub-nosed monkey (Rhinopithecus roxellana) is an arboreal Old World monkey species that is endemic to China, and 24 adult individuals from the Qinling Mountains of China were included for the analysis of hand preference in the tube task. All subjects showed strong individual hand preferences and significant group-level left-handedness was found. There were no significant differences between males and females for either direction or strength of hand preference. Strength of hand preferences of adults was significantly greater than juveniles. Use of the index finger to extract the food was the dominant extractive-act. Our findings represent the first evidence of population-level left-handedness in wild Old World monkeys, and broaden our knowledge on evaluating primate hand preference via experimental manipulation in natural conditions. PMID:22410843

  10. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed that the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel for datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
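
    The graph kernels themselves are involved; the sketch below only illustrates the dependency-graph structure they operate on, extracting the shortest path between two entity tokens with networkx. The example sentence, edges, and labels are invented for illustration and are not from the CID or PPI corpora.

      import networkx as nx

      def shortest_path_features(edges, entity1, entity2):
          """Tokens and dependency labels on the shortest path between two entities."""
          g = nx.Graph()
          for head, dep, label in edges:
              g.add_edge(head, dep, label=label)
          path = nx.shortest_path(g, entity1, entity2)
          labels = [g[a][b]["label"] for a, b in zip(path, path[1:])]
          return path, labels

      # Hypothetical parse of "Aspirin induces gastric ulcers."
      edges = [("induces", "Aspirin", "nsubj"),
               ("induces", "ulcers", "dobj"),
               ("ulcers", "gastric", "amod")]
      print(shortest_path_features(edges, "Aspirin", "ulcers"))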

  11. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images.

    PubMed

    Neubert, A; Yang, Z; Engstrom, C; Xia, Y; Strudwick, M W; Chandra, S S; Fripp, J; Crozier, S

    2016-10-01

    Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging due to their thin, curved structure and the overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging has not reached the same level as for the weight-bearing knee and hip joint cartilages, despite the potential applications for clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone-cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient-specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady-state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head and glenoid fossa, respectively. Mean DSC scores of 0.74 and 0.72 were obtained for the humeral and glenoid cartilage volumes, respectively. The manual interobserver reliability evaluated by DSC was 0.80 ± 0.03 and 0.76 ± 0.04 for the two cartilages, implying that the automated results were within an acceptable 10% difference. The MASD between the automatic and the corresponding manual cartilage segmentations was less than 0.4 mm (previous studies reported a mean cartilage thickness of 1.3 mm). This work shows the feasibility of volumetric segmentation and separation of the glenohumeral cartilages from MR images. To the authors' knowledge, this is the first fully automated algorithm for volumetric segmentation of the individual glenohumeral cartilages from MR images. The approach was validated against manual segmentations from experienced analysts. In future work, the approach will be validated on imaging datasets acquired with various MR contrasts in patients.
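
    For reference, a minimal sketch of the Dice similarity coefficient used in the validation above, computed on two toy binary masks; it is not the authors' evaluation code.

      import numpy as np

      def dice(seg_a, seg_b):
          """Dice similarity coefficient between two binary segmentations."""
          seg_a, seg_b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
          inter = np.logical_and(seg_a, seg_b).sum()
          denom = seg_a.sum() + seg_b.sum()
          return 2.0 * inter / denom if denom else 1.0

      # Toy example: two overlapping 3D masks.
      a = np.zeros((10, 10, 10), bool); a[2:7, 2:7, 2:7] = True
      b = np.zeros((10, 10, 10), bool); b[3:8, 3:8, 3:8] = True
      print(f"DSC = {dice(a, b):.2f}")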

  12. Thoracic manual therapy is not more effective than placebo thoracic manual therapy in patients with shoulder dysfunctions: A systematic review with meta-analysis.

    PubMed

    Bizzarri, Paolo; Buzzatti, Luca; Cattrysse, Erik; Scafoglieri, Aldo

    2018-02-01

    Manual treatments targeting different regions (shoulder, cervical spine, thoracic spine, ribs) have been studied to deal with patients complaining of shoulder pain. Thoracic manual treatments seem able to produce beneficial effects on this group of patients. However, it is not clear whether the patient improvement is a consequence of thoracic manual therapy or a placebo effect. To compare the efficacy of thoracic manual therapy and placebo thoracic manual treatment for patients with shoulder dysfunction. Electronic databases (MEDLINE, CENTRAL, PEDro, CINAHL, WoS, EMBASE, ERIC) were searched through November 2016. Randomized Controlled Trials assessing pain, mobility and function were selected. The Cochrane bias estimation tool was applied. Outcome results were either extracted or computed from raw data. Meta-analysis was performed for outcomes with low heterogeneity. Four studies were included in the review. The methodology of the included studies was generally good except for one study that was rated as high risk of bias. Meta-analysis showed no significant effect for "pain at present" (SMD -0.02; 95% CI: -0.35, 0.32) and "pain during movement" (SMD -0.12; 95% CI: -0.45, 0.21). There is very low to low quality of evidence that a single session of thoracic manual therapy is not more effective than a single session of placebo thoracic manual therapy in patients with shoulder dysfunction at immediate post-treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Text data extraction for a prospective, research-focused data mart: implementation and validation

    PubMed Central

    2012-01-01

    Background Translational research typically requires data abstracted from medical records as well as data collected specifically for research. Unfortunately, many data within electronic health records are represented as text that is not amenable to aggregation for analyses. We present a scalable open source SQL Server Integration Services package, called Regextractor, for including regular expression parsers into a classic extract, transform, and load workflow. We have used Regextractor to abstract discrete data from textual reports from a number of ‘machine generated’ sources. To validate this package, we created a pulmonary function test data mart and analyzed the quality of the data mart versus manual chart review. Methods Eleven variables from pulmonary function tests performed closest to the initial clinical evaluation date were studied for 100 randomly selected subjects with scleroderma. One research assistant manually reviewed, abstracted, and entered relevant data into a database. Correlation with data obtained from the automated pulmonary function test data mart within the Northwestern Medical Enterprise Data Warehouse was determined. Results There was a near perfect (99.5%) agreement between results generated from the Regextractor package and those obtained via manual chart abstraction. The pulmonary function test data mart has been used subsequently to monitor disease progression of patients in the Northwestern Scleroderma Registry. In addition to the pulmonary function test example presented in this manuscript, the Regextractor package has been used to create cardiac catheterization and echocardiography data marts. The Regextractor package was released as open source software in October 2009 and has been downloaded 552 times as of 6/1/2012. Conclusions Collaboration between clinical researchers and biomedical informatics experts enabled the development and validation of a tool (Regextractor) to parse, abstract and assemble structured data from text data contained in the electronic health record. Regextractor has been successfully used to create additional data marts in other medical domains and is available to the public. PMID:22970696
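
    Regextractor itself is a SQL Server Integration Services package; the short Python sketch below only illustrates the underlying idea of pulling discrete values out of machine-generated report text with regular expressions. The report snippet and patterns are invented for illustration.

      import re

      # Hypothetical free-text pulmonary function test report.
      REPORT = "FVC: 3.21 L (78% predicted)  FEV1: 2.45 L (71% predicted)  DLCO: 18.2"

      PATTERNS = {
          "fvc_l":   r"FVC:\s*([\d.]+)\s*L",
          "fev1_l":  r"FEV1:\s*([\d.]+)\s*L",
          "fvc_pct": r"FVC:.*?\((\d+)% predicted\)",
          "dlco":    r"DLCO:\s*([\d.]+)",
      }

      def extract(report, patterns):
          """Return a dict of discrete values pulled out of a free-text report."""
          out = {}
          for name, pat in patterns.items():
              m = re.search(pat, report)
              out[name] = float(m.group(1)) if m else None
          return out

      print(extract(REPORT, PATTERNS))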

  14. The Virginia method of determining the cement content of freshly mixed cement-soil mixtures : a manual prepared for the use of the Virginia Dept. of Highways.

    DOT National Transportation Integrated Search

    1971-01-01

    This manual describes a new method developed by the author, based on ASTM Method D2901, for determining the cement content of freshly mixed soil cement. The manual contains information on apparatus, reagents, procedures, source of equipment and reage...

  15. A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.

    ERIC Educational Resources Information Center

    Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.

    This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…

  16. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets

    PubMed Central

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I.; Letts, Robyn F. R.; Pantanowitz, Adam; Rubin, David M.; Thomsen, Jesper S.; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron. PMID:26170896

  17. Towards Automated Three-Dimensional Tracking of Nephrons through Stacked Histological Image Sets.

    PubMed

    Bhikha, Charita; Andreasen, Arne; Christensen, Erik I; Letts, Robyn F R; Pantanowitz, Adam; Rubin, David M; Thomsen, Jesper S; Zhai, Xiao-Yue

    2015-01-01

    An automated approach for tracking individual nephrons through three-dimensional histological image sets of mouse and rat kidneys is presented. In a previous study, the available images were tracked manually through the image sets in order to explore renal microarchitecture. The purpose of the current research is to reduce the time and effort required to manually trace nephrons by creating an automated, intelligent system as a standard tool for such datasets. The algorithm is robust enough to isolate closely packed nephrons and track their convoluted paths despite a number of nonideal, interfering conditions such as local image distortions, artefacts, and interstitial tissue interference. The system comprises image preprocessing, feature extraction, and a custom graph-based tracking algorithm, which is validated by a rule base and a machine learning algorithm. A study of a selection of automatically tracked nephrons, when compared with manual tracking, yields a 95% tracking accuracy for structures in the cortex, while those in the medulla have lower accuracy due to narrower diameter and higher density. Limited manual intervention is introduced to improve tracking, enabling full nephron paths to be obtained with an average of 17 manual corrections per mouse nephron and 58 manual corrections per rat nephron.

  18. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    PubMed

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra

    2016-08-05

    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether the manual documentation process can be made more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, and its manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of the patients, randomly selected from the multiple myeloma research database of Heidelberg University Hospital (in total 737 selected patients). An EDC system was set up and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, repeated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used. The created annotated corpus consists of 737 different diagnosis paragraphs with a total of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had a significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even though the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no single best practice for the automatic classification of data elements from free-text diagnostic reports.
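
    A minimal scikit-learn sketch of SVM-based multiclass classification of diagnosis text, as a stand-in for the framework described above; the German snippets and labels are invented, and the authors' splitting and preprocessing pipelines are not reproduced.

      from sklearn.pipeline import make_pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      # Invented stand-ins for annotated diagnosis paragraphs and their labels.
      texts = ["Multiples Myelom Stadium III A nach Durie und Salmon",
               "Multiples Myelom Stadium I A, derzeit keine Therapieindikation",
               "Monoklonale Gammopathie unklarer Signifikanz",
               "Multiples Myelom Stadium II A nach Durie und Salmon"]
      labels = ["stage_III", "stage_I", "MGUS", "stage_II"]

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      clf.fit(texts, labels)
      print(clf.predict(["Multiples Myelom Stadium III A"]))
      # With a real corpus one would instead report repeated k-fold
      # cross-validated error rates and macro F1-scores, as in the study.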

  19. Multiplexed detection of mycotoxins in foods with a regenerable array.

    PubMed

    Ngundi, Miriam M; Shriver-Lake, Lisa C; Moore, Martin H; Ligler, Frances S; Taitt, Chris R

    2006-12-01

    The occurrence of different mycotoxins in cereal products calls for the development of a rapid, sensitive, and reliable detection method that is capable of analyzing samples for multiple toxins simultaneously. In this study, we report the development and application of a multiplexed competitive assay for the simultaneous detection of ochratoxin A (OTA) and deoxynivalenol (DON) in spiked barley, cornmeal, and wheat, as well as in naturally contaminated maize samples. Fluoroimmunoassays were performed with the Naval Research Laboratory array biosensor, by both a manual and an automated version of the system. This system employs evanescent-wave fluorescence excitation to probe binding events as they occur on the surface of a waveguide. Methanolic extracts of the samples were diluted threefold with buffer containing a mixture of fluorescent antibodies and were then passed over the arrays of mycotoxins immobilized on a waveguide. Fluorescent signals of the surface-bound antibody-antigen complexes decreased with increasing concentrations of free mycotoxins in the extract. After sample analysis was completed, surfaces were regenerated with 6 M guanidine hydrochloride in 50 mM glycine, pH 2.0. The limits of detection determined by the manual biosensor system were as follows: 1, 180, and 65 ng/g for DON and 1, 60, and 85 ng/g for OTA in cornmeal, wheat, and barley, respectively. The limits of detection in cornmeal determined with the automated array biosensor were 15 and 150 ng/g for OTA and DON, respectively.

  20. A displacement pump procedure to load extracts for automated gel permeation chromatography.

    PubMed

    Daft, J; Hopper, M; Hensley, D; Sisk, R

    1990-01-01

    Automated gel permeation chromatography (GPC) effectively separates lipids from pesticides in sample extracts that contain fat. Using a large syringe to load sample extracts manually onto GPC models having 5 mL holding loops is awkward, slow, and potentially hazardous. Loading with a small-volume displacement pump, however, is convenient and fast (ca 1 loop every 20 s). And more importantly, the analyst is not exposed to toxic organic vapors because the loading pump and its connecting lines do not leak in the way that a syringe does.

  1. Automating Frame Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Franklin, Lyndsey; Tratz, Stephen C.

    2008-04-01

    Frame Analysis has come to play an increasingly strong role in the study of social movements in Sociology and Political Science. While significant steps have been made in providing a theory of frames and framing, a systematic characterization of the frame concept is still largely lacking and there are no recognized criteria and methods that can be used to identify and marshal frame evidence reliably and in a time- and cost-effective manner. Consequently, current Frame Analysis work is still too reliant on manual annotation and subjective interpretation. The goal of this paper is to present an approach to the representation, acquisition and analysis of frame evidence which leverages Content Analysis, Information Extraction and Semantic Search methods to provide a systematic treatment of Frame Analysis and automate frame annotation.

  2. Natural Language Processing Methods and Systems for Biomedical Ontology Learning

    PubMed Central

    Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.

    2010-01-01

    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054

  3. Behavioral and functional strategies during tool use tasks in bonobos.

    PubMed

    Bardo, Ameline; Borel, Antony; Meunier, Hélène; Guéry, Jean-Pascal; Pouydebat, Emmanuelle

    2016-09-01

    Different primate species have developed extensive capacities for grasping and manipulating objects. However, the manual abilities of primates remain poorly known from a dynamic point of view. The aim of the present study was to quantify the functional and behavioral strategies used by captive bonobos (Pan paniscus) during tool use tasks. The study was conducted on eight captive bonobos which we observed during two tool use tasks: food extraction from a large piece of wood and food recovery from a maze. We focused on grasping postures, in-hand movements, the sequences of grasp postures used that have not been studied in bonobos, and the kind of tools selected. Bonobos used a great variety of grasping postures during both tool use tasks. They were capable of in-hand movement, demonstrated complex sequences of contacts, and showed more dynamic manipulation during the maze task than during the extraction task. They arrived on the location of the task with the tool already modified and used different kinds of tools according to the task. We also observed individual manual strategies. Bonobos were thus able to develop in-hand movements similar to humans and chimpanzees, demonstrated dynamic manipulation, and they responded to task constraints by selecting and modifying tools appropriately, usually before they started the tasks. These results show the necessity to quantify object manipulation in different species to better understand their real manual specificities, which is essential to reconstruct the evolution of primate manual abilities. © 2016 Wiley Periodicals, Inc.

  4. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network

    PubMed Central

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-01-01

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to the conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Besides, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for rolling bearings and 100% for gearbox when using the proposed method, which are much higher than that of the other two methods. PMID:28677638
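
    The paper's stacked DBN with genetic-algorithm tuning is not reproduced here; the sketch below only illustrates the general idea of unsupervised feature learning followed by a classifier, using scikit-learn's single Bernoulli RBM and random vectors as stand-ins for vibration-signal segments.

      import numpy as np
      from sklearn.neural_network import BernoulliRBM
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import MinMaxScaler

      rng = np.random.default_rng(0)
      X = rng.random((200, 400))                 # stand-in for raw signal windows
      y = rng.integers(0, 4, size=200)           # stand-in fault labels

      model = Pipeline([
          ("scale", MinMaxScaler()),             # RBM expects inputs in [0, 1]
          ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                               n_iter=20, random_state=0)),
          ("clf", LogisticRegression(max_iter=1000)),
      ])
      model.fit(X, y)
      print(model.score(X, y))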

  5. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network.

    PubMed

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-07-04

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to the conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Besides, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for rolling bearings and 100% for gearbox when using the proposed method, which are much higher than that of the other two methods.

  6. TEES 2.2: Biomedical Event Extraction for Diverse Corpora

    PubMed Central

    2015-01-01

    Background The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. Results The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. Conclusions The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction are presented. PMID:26551925
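
    A minimal sketch of the ensemble-based feature-importance analysis mentioned above, using scikit-learn's ExtraTreesClassifier on random stand-in feature vectors rather than real TEES feature sets.

      import numpy as np
      from sklearn.ensemble import ExtraTreesClassifier

      rng = np.random.default_rng(0)
      X = rng.random((300, 50))                      # stand-in feature vectors
      y = (X[:, 3] + X[:, 17] > 1.0).astype(int)     # labels driven by two features

      forest = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
      top = np.argsort(forest.feature_importances_)[::-1][:5]
      print("most important feature indices:", top)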

  7. Determination of melamine in animal feed based on liquid chromatography tandem mass spectrometry analysis and dynamic microwave-assisted extraction coupled on-line with strong cation-exchange resin clean-up.

    PubMed

    Chen, Ligang; Zeng, Qinglei; Du, Xiaobo; Sun, Xin; Zhang, Xiaopan; Xu, Yang; Yu, Aimin; Zhang, Hanqi; Ding, Lan

    2009-11-01

    In this work, a new method was developed for the determination of melamine (MEL) in animal feed. The method was based on the on-line coupling of dynamic microwave-assisted extraction (DMAE) to strong cation-exchange (SCX) resin clean-up. The MEL was first extracted by 90% acidified methanol aqueous solution (v/v, pH = 3) under the action of microwave energy, and then the extract was cooled and passed through the SCX resin. Thus, the protonated MEL was retained on the resin through ion exchange interaction and the sample matrixes were washed out. Some obvious benefits were achieved, such as acceleration of analytical process, together with reduction in manual handling, risk of contamination, loss of analyte, and sample consumption. Finally, the analyte was separated by a liquid chromatograph with a SCX analytical column, and then identified and quantitatived by a tandem mass spectrometry with positive ionization mode and multiple-reaction monitoring. The DMAE parameters were optimized by the Box-Behnken design. The linearity of quantification obtained by analyzing matrix-matched standards is in the range of 50-5,000 ng g(-1). The limit of detection and limit of quantification obtained are 12.3 and 41.0 ng g(-1), respectively. The mean intra- and inter-day precisions expressed as relative standard deviations with three fortified levels (50, 250, and 500 ng g(-1)) are 5.1% and 7.3%, respectively, and the recoveries of MEL are in the range of 76.1-93.5%. The proposed method was successfully applied to determine MEL in different animal feeds obtained from the local market. MEL was detectable with the contents of 279, 136, and 742 ng g(-1) in three samples.

  8. Convolutional neural networks for seizure prediction using intracranial and scalp electroencephalogram.

    PubMed

    Truong, Nhan Duy; Nguyen, Anh Duy; Kuhlmann, Levin; Bonyadi, Mohammad Reza; Yang, Jiawei; Ippolito, Samuel; Kavehei, Omid

    2018-05-07

    Seizure prediction has attracted growing attention as one of the most challenging predictive data analysis efforts to improve the life of patients with drug-resistant epilepsy and tonic seizures. Many outstanding studies have reported great results in providing sensible indirect (warning systems) or direct (interactive neural stimulation) control over refractory seizures, some of which achieved high performance. However, to achieve high sensitivity and a low false prediction rate, many of these studies relied on handcraft feature extraction and/or tailored feature extraction, which is performed for each patient independently. This approach, however, is not generalizable, and requires significant modifications for each new patient within a new dataset. In this article, we apply convolutional neural networks to different intracranial and scalp electroencephalogram (EEG) datasets and propose a generalized retrospective and patient-specific seizure prediction method. We use the short-time Fourier transform on 30-s EEG windows to extract information in both the frequency domain and the time domain. The algorithm automatically generates optimized features for each patient to best classify preictal and interictal segments. The method can be applied to any other patient from any dataset without the need for manual feature extraction. The proposed approach achieves sensitivity of 81.4%, 81.2%, and 75% and a false prediction rate of 0.06/h, 0.16/h, and 0.21/h on the Freiburg Hospital intracranial EEG dataset, the Boston Children's Hospital-MIT scalp EEG dataset, and the American Epilepsy Society Seizure Prediction Challenge dataset, respectively. Our prediction method is also statistically better than an unspecific random predictor for most of the patients in all three datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
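
    A minimal sketch of the time-frequency step described above — the short-time Fourier transform of one 30-s window — using scipy.signal.stft on a synthetic sine-plus-noise signal; the sampling rate and window parameters are illustrative assumptions, not the paper's exact settings.

      import numpy as np
      from scipy.signal import stft

      fs = 256                                  # assumed sampling rate (Hz)
      t = np.arange(0, 30, 1 / fs)              # one 30-s window
      rng = np.random.default_rng(0)
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

      f, tt, Z = stft(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)
      spectrogram = np.abs(Z)                   # |STFT|, the image fed to a CNN
      print(spectrogram.shape)                  # (frequency bins, time frames)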

  9. TEES 2.2: Biomedical Event Extraction for Diverse Corpora.

    PubMed

    Björne, Jari; Salakoski, Tapio

    2015-01-01

    The Turku Event Extraction System (TEES) is a text mining program developed for the extraction of events, complex biomedical relationships, from scientific literature. Based on a graph-generation approach, the system detects events with the use of a rich feature set built via dependency parsing. The TEES system has achieved record performance in several of the shared tasks of its domain, and continues to be used in a variety of biomedical text mining tasks. The TEES system was quickly adapted to the BioNLP'13 Shared Task in order to provide a public baseline for derived systems. An automated approach was developed for learning the underlying annotation rules of event type, allowing immediate adaptation to the various subtasks, and leading to a first place in four out of eight tasks. The system for the automated learning of annotation rules is further enhanced in this paper to the point of requiring no manual adaptation to any of the BioNLP'13 tasks. Further, the scikit-learn machine learning library is integrated into the system, bringing a wide variety of machine learning methods usable with TEES in addition to the default SVM. A scikit-learn ensemble method is also used to analyze the importances of the features in the TEES feature sets. The TEES system was introduced for the BioNLP'09 Shared Task and has since then demonstrated good performance in several other shared tasks. By applying the current TEES 2.2 system to multiple corpora from these past shared tasks an overarching analysis of the most promising methods and possible pitfalls in the evolving field of biomedical event extraction are presented.

  10. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Liu, Li; Chen, Jinhu

    2014-06-01

    Purpose: The aim of this study was to automatically extract liver structures from daily cone beam CT (CBCT) images. Methods: Datasets were collected from 50 intravenous contrast planning CT images, which were regarded as the training dataset for probabilistic atlas and shape prior model construction. Firstly, a probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with an L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas using edge-preserving deformable registration with a multi-scale strategy, and then the initial liver region was converted to a surface mesh which was registered with the shape model, where the major variation of the specific patient was modeled by sparse vectors. At the last stage, the shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, firstly the manually segmented contours were converted into meshes, and then arbitrary patient data was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with other patient data for iterative construction, removing the bias caused by arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparison with the manually segmented ones. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images. Conclusion: The experiment demonstrated that liver structures can be extracted accurately from CBCT images with artifacts for subsequent adaptive radiation therapy. This work is supported by the National Natural Science Foundation of China (No. 61201441), the Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), the Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), and the Jinan youth science and technology star program (No. 20120109).
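
    A minimal sketch of edge-preserving total-variation smoothing on a noisy synthetic slice. Note that scikit-image's Chambolle solver uses an L2 data term (the ROF model) rather than the TV-L1 formulation cited above, so this only illustrates the flavour of the preprocessing step.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(0)
      slice_img = np.zeros((128, 128))
      slice_img[32:96, 32:96] = 1.0                       # a bright "organ"
      noisy = slice_img + 0.3 * rng.standard_normal(slice_img.shape)

      smoothed = denoise_tv_chambolle(noisy, weight=0.2)  # edges stay, noise goes
      print(noisy.std(), smoothed.std())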

  11. Automatic estimation of voice onset time for word-initial stops by applying random forest to onset detection.

    PubMed

    Lin, Chi-Yueh; Wang, Hsiao-Chuan

    2011-07-01

    The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs are efficiently measured. Manual annotation is a feasible way, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. At first, a forced alignment is applied to identify the locations of stop consonants. Then a random forest based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of succeeding vowel, were also investigated. © 2011 Acoustical Society of America

  12. Automatic QRS complex detection using two-level convolutional neural network.

    PubMed

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. The existing detection methods largely depend on hand-crafted manual features and parameters, which may introduce significant computational complexity, especially in the transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, based on a 1-D convolutional neural network (CNN), an accurate method for QRS complex detection is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique which only contains a difference operation in the temporal domain is adopted. Based on the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, the performance variation is analyzed for different signal-to-noise ratio (SNR) values. An automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is proposed for QRS complex detection. Compared with the state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
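
    The two-level CNN itself is not sketched here; the snippet below only illustrates the kind of temporal-difference preprocessing mentioned above, followed by simple peak picking with SciPy, on a synthetic spike train. Thresholds and window lengths are illustrative assumptions.

      import numpy as np
      from scipy.signal import find_peaks

      def qrs_candidates(ecg, fs):
          """Candidate R-peak locations from a simple difference-based detector."""
          diff = np.diff(ecg)                      # temporal difference operation
          energy = diff ** 2                       # emphasise steep QRS slopes
          win = int(0.08 * fs)                     # ~80 ms smoothing window
          smooth = np.convolve(energy, np.ones(win) / win, mode="same")
          peaks, _ = find_peaks(smooth,
                                height=smooth.mean() + 2 * smooth.std(),
                                distance=int(0.25 * fs))   # crude refractory period
          return peaks

      # Toy example: a noisy signal with a spike once per second.
      fs = 360
      rng = np.random.default_rng(0)
      sig = 0.05 * rng.standard_normal(10 * fs)
      sig[::fs] += 1.0
      print(qrs_candidates(sig, fs))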

  13. Subtype Coastline Determination in Urban Coast Based on Multiscale Features: a Case Study in Tianjin, China

    NASA Astrophysics Data System (ADS)

    Song, Y.; Ai, Y.; Zhu, H.

    2018-04-01

    Along urban coasts, the coastline directly reflects human activities. It is of crucial importance to the understanding of urban growth, resource development and the ecological environment. Due to the complexity and uncertainty of this type of coast, it is difficult to detect the accurate coastline position and determine the subtypes of the coastline. In this paper, we present a multiscale feature-based subtype coastline determination (MFBSCD) method to extract the coastline and determine its subtypes. In this method, an uncertainty-considering coastline detection (UCCD) method is proposed to separate water and land for a more accurate coastline position. The MFBSCD method integrates scale-invariant geometric and spatial-structure features of the coastline to determine the coastline at subtype scale, and allows subtypes to be cross-verified during processing to ensure the accuracy of the final results. It was applied to Landsat Thematic Mapper (TM) and Operational Land Imager (OLI) images of Tianjin, China, and the accuracy of the extracted coastlines was assessed against a manually delineated coastline. The mean ME (misclassification error) and mean LM (line matching) are 0.0012 and 24.54 m, respectively. The method provides an inexpensive and automated means of coastline mapping at subtype scale in coastal city sectors with intense human interference, which can be significant for coastal resource management and the evaluation of urban development.

  14. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models.

    PubMed

    Misra, Dharitri; Chen, Siyuan; Thoma, George R

    2009-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost-effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U.S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system.

  15. Assessment of apical transportation caused by nickel-titanium rotary systems with full rotation and reciprocating movements using extracted teeth and resin blocks with simulated root canals: A comparative study.

    PubMed

    Alrahabi, M; Zafar, M S

    2018-06-01

    We compared apical transportation in the WaveOne and ProTaper Next systems, which are rotary nickel-titanium systems with reciprocating and continuous rotation movements, respectively, using manual measurements obtained from resin blocks with simulated root canals and double digital radiographs of extracted teeth. We used 30 resin blocks with simulated root canals and 30 extracted teeth for this study. The same endodontist performed root canal shaping using the WaveOne or ProTaper Next system. We assessed apical transportation by measuring the amounts (in mm) of material lost 1 mm from the apical foramen in the resin blocks and by using double digital radiography for the extracted teeth. Significant differences between groups were assessed using t-tests; P < 0.05 was considered statistically significant. The amount of apical transportation differed significantly between the two systems when resin blocks were used for assessment (P < 0.05), but there were no significant differences when extracted teeth were used (P > 0.05). In the current study, there was no significant difference in apical transportation between natural teeth prepared using WaveOne and those prepared using ProTaper Next. However, significant differences were observed between the two systems with resin blocks. These findings indicate that the use of resin blocks is not an accurate method for apical transportation evaluation.
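    As a worked illustration of the between-group comparison, the sketch below runs an independent-samples t-test with SciPy. The transportation values are made-up numbers for demonstration only, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical apical-transportation measurements (mm); illustrative values only.
waveone_resin  = np.array([0.12, 0.15, 0.10, 0.18, 0.14, 0.16])
protaper_resin = np.array([0.08, 0.07, 0.09, 0.10, 0.06, 0.09])

t_stat, p_value = stats.ttest_ind(waveone_resin, protaper_resin)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference between systems at the 0.05 level")
```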

  16. Extracting hurricane eye morphology from spaceborne SAR images using morphological analysis

    NASA Astrophysics Data System (ADS)

    Lee, Isabella K.; Shamsoddini, Ali; Li, Xiaofeng; Trinder, John C.; Li, Zeyu

    2016-07-01

    Hurricanes are among the most destructive natural disasters worldwide; thus, recognizing and extracting their morphology is important for understanding their dynamics. Conventional optical sensors, because of the cloud cover associated with hurricanes, cannot reveal the intense air-sea interaction occurring at the sea surface. In contrast, the cloud-penetrating capability of spaceborne synthetic aperture radar (SAR) and its backscattering signal characteristics enable the extraction of sea surface roughness. SAR images therefore allow the size and shape of hurricane eyes to be measured, revealing their evolution and strength. In this study, using six SAR hurricane images, we developed a mathematical morphology method for automatically extracting hurricane eyes from C-band SAR data. Skeleton pruning based on discrete skeleton evolution (DSE) was used to ensure global and local preservation of the hurricane eye shape. This distance-weighted algorithm, applied in a hierarchical structure for extracting the edges of the hurricane eyes, can effectively avoid segmentation errors by reducing redundant skeletons attributed to speckle noise along the edges of the hurricane eye. As a consequence, the skeleton pruning was accomplished without deficiencies in the key hurricane eye skeletons. A morphology-based analysis of the subsequent reconstructions of the hurricane eyes shows a high degree of agreement with the hurricane eye areas derived from reference data based on NOAA manual analysis.
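    The basic morphological pipeline (despeckle, segment, skeletonize) can be sketched with scikit-image. The toy binary mask below stands in for a segmented eye region, and simple small-object removal is used as a crude substitute for the paper's DSE skeleton pruning; none of this reproduces the authors' exact algorithm.

```python
import numpy as np
from skimage import morphology

# Toy binary mask standing in for a segmented hurricane-eye region (True = eye);
# a real workflow would start from despeckled SAR backscatter imagery.
rng = np.random.default_rng(1)
mask = np.zeros((128, 128), dtype=bool)
rr, cc = np.ogrid[:128, :128]
mask[(rr - 64) ** 2 + (cc - 64) ** 2 < 30 ** 2] = True   # circular "eye"
mask ^= rng.random((128, 128)) < 0.01                    # speckle-like noise

# Morphological cleanup, then skeletonization of the cleaned region.
clean = morphology.binary_opening(mask, morphology.disk(3))   # suppress speckle
clean = morphology.remove_small_objects(clean, min_size=50)
skeleton = morphology.skeletonize(clean)

# The paper prunes the skeleton with discrete skeleton evolution (DSE);
# removing small spurious components above is only a crude stand-in.
print(clean.sum(), skeleton.sum())
```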

  17. Automated and Semiautomated Segmentation of Rectal Tumor Volumes on Diffusion-Weighted MRI: Can It Replace Manual Volumetry?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heeswijk, Miriam M. van; Department of Surgery, Maastricht University Medical Centre, Maastricht; Lambregts, Doenja M.J., E-mail: d.lambregts@nki.nl

    Purpose: Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Methods and Materials: Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Results: Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre- and post-CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated methods were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42, and between the semiautomated and manual segmentation 0.70 and 0.41, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. Conclusions: DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer.
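    The Dice similarity index used above to compare segmentations is straightforward to compute from two binary masks, as in the sketch below; the 2-D toy masks are assumptions for illustration (a DWI study would use 3-D tumor volumes).

```python
import numpy as np

def dice_similarity(seg_a, seg_b):
    """Dice similarity index between two binary segmentation masks."""
    seg_a = seg_a.astype(bool)
    seg_b = seg_b.astype(bool)
    intersection = np.logical_and(seg_a, seg_b).sum()
    total = seg_a.sum() + seg_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Hypothetical 2-D tumor masks for illustration.
manual = np.zeros((64, 64), dtype=bool); manual[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool); auto[22:42, 22:42] = True
print(round(dice_similarity(manual, auto), 3))  # ~0.81
```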

  18. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory : determination of polycyclic aromatic hydrocarbon compounds in sediment by gas chromatography/mass spectrometry

    USGS Publications Warehouse

    Olson, Mary C.; Iverson, Jana L.; Furlong, Edward T.; Schroeder, Michael P.

    2004-01-01

    A method for the determination of 28 polycyclic aromatic hydrocarbons (PAHs) and 25 alkylated PAH homolog groups in sediment samples is described. The compounds are extracted from sediment by solvent extraction, followed by partial isolation using high-performance gel permeation chromatography. The compounds are identified and quantitated using capillary-column gas chromatography/mass spectrometry. The report presents performance data for full-scan ion monitoring. Method detection limits in laboratory reagent matrix samples range from 1.3 to 5.1 micrograms per kilogram for the 28 PAHs. The 25 groups of alkylated PAHs are homologs of five groups of isomeric parent PAHs. Because of the lack of authentic standards, these homologs are reported semiquantitatively using a response factor from a parent PAH or a specific alkylated PAH. Precision data for the alkylated PAH homologs are presented using two different standard reference materials produced by the National Institute of Standards and Technology: SRM 1941b and SRM 1944. The percent relative standard deviations for identified alkylated PAH homolog groups ranged from 1.55 to 6.98 for SRM 1941b and from 6.11 to 12.0 for SRM 1944. Homolog group concentrations reported under this method include the concentrations of individually identified compounds that are members of the group. Organochlorine (OC) pesticides (including toxaphene), polychlorinated biphenyls (PCBs), and organophosphate (OP) pesticides can be isolated simultaneously using this method. In brief, sediment samples are centrifuged to remove excess water and extracted overnight with dichloromethane (95 percent) and methanol (5 percent). The extract is concentrated and then filtered through a 0.2-micrometer polytetrafluoroethylene syringe filter. The PAH fraction is isolated by quantitatively injecting an aliquot of sample onto two polystyrene-divinylbenzene gel-permeation chromatographic columns connected in series. The compounds are eluted with dichloromethane, a PAH fraction is collected, and a portion of the coextracted interferences, including elemental sulfur, is separated and discarded. The extract is solvent exchanged, the volume is reduced, and an internal standard is added. Sample analysis is completed using a gas chromatograph/mass spectrometer and full-scan acquisition.
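    The percent relative standard deviation quoted for the precision data is simply the sample standard deviation divided by the mean, times 100; a minimal sketch with made-up replicate concentrations (not data from SRM 1941b or SRM 1944) is shown below.

```python
import numpy as np

def percent_rsd(replicates):
    """Percent relative standard deviation of replicate concentration measurements."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

# Hypothetical replicate concentrations (micrograms per kilogram) for one homolog group.
print(round(percent_rsd([102.0, 98.5, 105.2, 99.8, 101.1]), 2))
```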

  19. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report approaches based on machine learning (ML) for extracting relevant samples from a big data space and apply them to ASR using soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) form and Deep Neural Network (DNN) forms, using raw speech, extracted features, and frequency-domain representations, is considered. The Multilayer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large store. Next, a few conventional methods are used to extract features of a few selected types; the features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker, and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML-based sentence extraction techniques and the composite feature set used with an RNN classifier outperform all other approaches. The performance of the system is also evaluated and compared using the ANN in FF form as a feature extractor. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN-based sample and feature extraction techniques are found to be efficient enough to enable the application of ML techniques to big data aspects of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
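    A minimal sketch of the classification step, assuming pre-extracted feature vectors (e.g., spectral plus prosodic features per utterance) and dialect labels: a small multilayer perceptron from scikit-learn is trained and scored. The random data, feature dimensionality, and network sizes are assumptions; the paper's RNN and time-delay architectures are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pre-extracted feature vectors with dialect labels; random data
# stands in for the Assamese speech corpus.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 40))       # 300 utterances, 40-dimensional features
y = rng.integers(0, 3, size=300)     # 3 dialect classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small MLP as a stand-in classifier over the composite feature set.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("accuracy:", round(mlp.score(X_test, y_test), 3))
```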

  20. Automated extraction of DNA from blood and PCR setup using a Tecan Freedom EVO liquid handler for forensic genetic STR typing of reference samples.

    PubMed

    Stangegaard, Michael; Frøslev, Tobias G; Frank-Hansen, Rune; Hansen, Anders J; Morling, Niels

    2011-04-01

    We have implemented and validated automated protocols for DNA extraction and PCR setup using a Tecan Freedom EVO liquid handler mounted with the Te-MagS magnetic separation device (Tecan, Männedorf, Switzerland). The protocols were validated for accredited forensic genetic work according to ISO 17025 using the Qiagen MagAttract DNA Mini M48 kit (Qiagen GmbH, Hilden, Germany) on fresh whole blood and blood from deceased individuals. The workflow was simplified by returning the DNA extracts to the original tubes, minimizing the risk of misplacing samples. The tubes that originally contained the samples were washed with MilliQ water before the return of the DNA extracts. The PCR was set up in 96-well microtiter plates. The methods were validated for the kits AmpFℓSTR Identifiler, SGM Plus, and Yfiler (Applied Biosystems, Foster City, CA), and GenePrint FFFL and PowerPlex Y (Promega, Madison, WI). The automated protocols allowed for extraction and addition of PCR master mix for 96 samples within 3.5 h. In conclusion, we demonstrated that (1) DNA extraction with magnetic beads and (2) PCR setup for accredited forensic genetic short tandem repeat typing can be implemented on a simple automated liquid handler, reducing manual work and increasing quality and throughput. Copyright © 2011 Society for Laboratory Automation and Screening. Published by Elsevier Inc. All rights reserved.
