Sample records for automated quantitative analysis

  1. An approach to standardization of urine sediment analysis via suggestion of a common manual protocol.

    PubMed

    Ko, Dae-Hyun; Ji, Misuk; Kim, Sollip; Cho, Eun-Jung; Lee, Woochang; Yun, Yeo-Min; Chun, Sail; Min, Won-Ki

    2016-01-01

    The results of urine sediment analysis have been reported semiquantitatively. However, as recent guidelines recommend quantitative reporting of urine sediment, and with the development of automated urine sediment analyzers, there is an increasing need for quantitative analysis of urine sediment. Here, we developed a protocol for urine sediment analysis and quantified the results. Based on questionnaires, various reports, guidelines, and experimental results, we developed a protocol for urine sediment analysis. The results of this new protocol were compared with those obtained with a standardized chamber and an automated sediment analyzer. Reference intervals were also estimated using the new protocol. We developed a protocol with centrifugation at 400 g for 5 min and an average concentration factor of 30. The correlations between the quantitative results of urine sediment analysis, the standardized chamber, and the automated sediment analyzer were generally good. The conversion factor derived from the new protocol showed a better fit with the results of manual counting than the default conversion factor in the automated sediment analyzer. We developed a protocol for manual urine sediment analysis that reports results quantitatively. This protocol may provide a means for the standardization of urine sediment analysis.
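
    As a worked illustration of the conversion described above, the sketch below turns a mean sediment count per high-power field (HPF) into cells per microliter of native urine via a concentration factor. The x30 factor comes from the abstract; the HPF volume and the helper name are hypothetical.

    ```python
    # Converting manual sediment counts to quantitative results (cells/uL).
    # A minimal sketch assuming a x30 concentration factor (as in the protocol
    # above) and a hypothetical high-power-field (HPF) volume; real values
    # depend on the chamber/coverslip geometry of the local protocol.

    HPF_VOLUME_UL = 0.033        # hypothetical volume viewed per HPF, in uL
    CONCENTRATION_FACTOR = 30    # sediment is 30x more concentrated than native urine

    def cells_per_ul(mean_cells_per_hpf: float) -> float:
        """Convert a mean count per HPF of sediment to cells/uL of native urine."""
        cells_per_ul_sediment = mean_cells_per_hpf / HPF_VOLUME_UL
        return cells_per_ul_sediment / CONCENTRATION_FACTOR

    if __name__ == "__main__":
        # e.g. an average of 5 RBC per HPF observed in the sediment
        print(f"{cells_per_ul(5):.1f} RBC/uL native urine")
    ```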

  2. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    PubMed

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  3. Cardiac imaging: working towards fully-automated machine analysis & interpretation

    PubMed Central

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-01-01

    Introduction: Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation. PMID:28277804

  4. Automated Quantitative Rare Earth Elements Mineralogy by Scanning Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Sindern, Sven; Meyer, F. Michael

    2016-09-01

    Increasing industrial demand for rare earth elements (REEs) stems from the central role they play in advanced technologies and the accelerating move away from carbon-based fuels. However, REE production is often hampered by the chemical, mineralogical, and textural complexity of the ores, so a better understanding of their salient properties is needed. This is essential not only for in-depth genetic interpretations but also for a robust assessment of ore quality and economic viability. The design of energy- and cost-efficient processing of REE ores depends heavily on information about REE deportment, which can be made available through automated quantitative process mineralogy. Quantitative mineralogy assigns numeric values to compositional and textural properties of mineral matter. Scanning electron microscopy (SEM), combined with a suitable software package for acquisition of backscatter electron and X-ray signals, phase assignment, and image analysis, is one of the most efficient tools for quantitative mineralogy. The four commercially available SEM-based automated quantitative mineralogy systems (FEI QEMSCAN and MLA, Tescan TIMA, and Zeiss Mineralogic Mining) are briefly characterized. Using examples of quantitative REE mineralogy, this chapter illustrates the capabilities and limitations of automated SEM-based systems. Chemical variability of REE minerals and analytical uncertainty can reduce the performance of phase assignment, as shown for the REE phases parisite and synchysite. In another example, from a monazite REE deposit, the quantitative mineralogical parameters surface roughness and mineral association derived from image analysis are applied for automated discrimination of apatite formed in a breakdown reaction of monazite from apatite formed by metamorphism prior to monazite breakdown. SEM-based automated mineralogy fulfils all requirements for the characterization of complex unconventional REE ores, which will become increasingly important for the supply of REEs in the future.

  5. An Improved Method for Measuring Quantitative Resistance to the Wheat Pathogen Zymoseptoria tritici Using High-Throughput Automated Image Analysis.

    PubMed

    Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A

    2016-07-01

    Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
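
    The published workflow uses an ImageJ batch macro; as an illustration of the same idea in Python, the hedged sketch below thresholds lesions and counts dark pycnidia with scikit-image. All thresholds, size gates, and the assumption that lesions are brighter than healthy tissue are illustrative, not the published values.

    ```python
    # Sketch of batch lesion/pycnidia quantification in the spirit of the
    # ImageJ macro described above, re-expressed with scikit-image.
    # All numeric parameters are illustrative, not the published values.
    import numpy as np
    from skimage import io, color, filters, measure

    def analyze_leaf(path: str) -> dict:
        rgb = io.imread(path)
        gray = color.rgb2gray(rgb)
        # Assume necrotic lesions are paler than healthy tissue on the scan;
        # the threshold direction depends on the scanning background.
        lesion_mask = gray > filters.threshold_otsu(gray)
        # Pycnidia appear as small dark spots inside lesions.
        pycnidia_mask = lesion_mask & (gray < 0.3)
        labels = measure.label(pycnidia_mask)
        regions = [r for r in measure.regionprops(labels) if 3 < r.area < 200]
        return {
            "pct_lesion_area": 100.0 * lesion_mask.sum() / gray.size,
            "n_pycnidia": len(regions),
            "mean_pycnidia_size_px": float(np.mean([r.area for r in regions])) if regions else 0.0,
            # pycnidia density per lesion pixel
            "pycnidia_per_lesion_px": len(regions) / max(int(lesion_mask.sum()), 1),
        }
    ```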

  6. Automated quantitative cytological analysis using portable microfluidic microscopy.

    PubMed

    Jagannadh, Veerendra Kalyan; Murthy, Rashmi Sreeramachandra; Srinivasan, Rajesh; Gorthi, Sai Siva

    2016-06-01

    In this article, a portable microfluidic microscopy based approach for automated cytological investigations is presented. Inexpensive optical and electronic components have been used to construct a simple microfluidic microscopy system. In contrast to the conventional slide-based methods, the presented method employs microfluidics to enable automated sample handling and image acquisition. The approach involves the use of simple in-suspension staining and automated image acquisition to enable quantitative cytological analysis of samples. The applicability of the presented approach to research in cellular biology is shown by performing an automated cell viability assessment on a given population of yeast cells. Further, the relevance of the presented approach to clinical diagnosis and prognosis has been demonstrated by performing detection and differential assessment of malaria infection in a given sample.

  7. Quantitative and qualitative measure of intralaboratory two-dimensional protein gel reproducibility and the effects of sample preparation, sample load, and image analysis.

    PubMed

    Choe, Leila H; Lee, Kelvin H

    2003-10-01

    We investigate one approach to assess the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when using appropriate image analysis variables. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from a CV = 0.03 for fully manual analysis to CV = 0.20 for a fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.
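
    A minimal sketch of the kind of per-spot CV computation reported above, assuming a gels-by-spots intensity matrix with NaN for undetected spots; the 3-of-4 detection rule follows the abstract, while the simulated data are placeholders.

    ```python
    # Sketch: per-spot coefficient of variation (CV) across replicate 2-DE gels.
    # `spots` is an (n_gels x n_spots) matrix of %-volume intensities; a spot
    # is kept if detected (non-NaN) in at least 3 of 4 replicates, as above.
    import numpy as np

    def spot_cvs(spots: np.ndarray, min_detections: int = 3) -> np.ndarray:
        detected = np.sum(~np.isnan(spots), axis=0) >= min_detections
        kept = spots[:, detected]
        return np.nanstd(kept, axis=0, ddof=1) / np.nanmean(kept, axis=0)

    rng = np.random.default_rng(0)
    gels = rng.lognormal(mean=0.0, sigma=0.4, size=(4, 1000))  # 4 replicate gels
    cv = spot_cvs(gels)
    print(f"95th percentile CV: {np.percentile(cv, 95):.2f}")  # cf. 0.52 in the abstract
    ```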

  8. Applications of pathology-assisted image analysis of immunohistochemistry-based biomarkers in oncology.

    PubMed

    Shinde, V; Burke, K E; Chakravarty, A; Fleming, M; McDonald, A A; Berger, A; Ecsedy, J; Blakemore, S J; Tirrell, S M; Bowman, D

    2014-01-01

    Immunohistochemistry-based biomarkers are commonly used to understand target inhibition in key cancer pathways in preclinical models and clinical studies. Automated slide-scanning and advanced high-throughput image analysis software technologies have evolved into a routine methodology for quantitative analysis of immunohistochemistry-based biomarkers. Alongside the traditional pathology H-score based on physical slides, the pathology world is welcoming digital pathology and advanced quantitative image analysis, which have enabled tissue- and cellular-level analysis. An automated workflow was implemented that includes automated staining, slide-scanning, and image analysis methodologies to explore biomarkers involved in 2 cancer targets: Aurora A and NEDD8-activating enzyme (NAE). The 2 workflows highlight the evolution of our immunohistochemistry laboratory and the different needs and requirements of each biological assay. Skin biopsies obtained from MLN8237 (Aurora A inhibitor) phase 1 clinical trials were evaluated for mitotic and apoptotic index, while mitotic index and defects in chromosome alignment and spindles were assessed in tumor biopsies to demonstrate Aurora A inhibition. Additionally, in both preclinical xenograft models and an acute myeloid leukemia phase 1 trial of the NAE inhibitor MLN4924, development of a novel image algorithm enabled measurement of downstream pathway modulation upon NAE inhibition. In the highlighted studies, developing a biomarker strategy based on automated image analysis solutions enabled project teams to confirm target and pathway inhibition and understand downstream outcomes of target inhibition with increased throughput and quantitative accuracy. These case studies demonstrate a strategy that combines a pathologist's expertise with automated image analysis to support oncology drug discovery and development programs.

  9. Automated tumor analysis for molecular profiling in lung cancer

    PubMed Central

    Boyd, Clinton; James, Jacqueline A.; Loughrey, Maurice B.; Hougton, Joseph P.; Boyle, David P.; Kelly, Paul; Maxwell, Perry; McCleary, David; Diamond, James; McArt, Darragh G.; Tunstall, Jonathon; Bankhead, Peter; Salto-Tellez, Manuel

    2015-01-01

    The discovery and clinical application of molecular biomarkers in solid tumors increasingly relies on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires pathological review of haematoxylin & eosin (H&E) stained slides to ensure sample quality, to assess tumor DNA sufficiency by visually estimating the percentage of tumor nuclei, and to annotate the tumor for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We subsequently describe the development and validation of a system called TissueMark for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation of cases using TissueMark, strong concordance with manually drawn boundaries, and identical EGFR mutational status following manual macrodissection from the image analysis-generated tumor boundaries. Automated analysis of cell counts for % tumor measurements by TissueMark showed reduced variability and significant correlation (p < 0.001) with benchmark tumor cell counts. This study demonstrates a robust image analysis technology that can facilitate the automated quantitative analysis of tissue samples for molecular profiling in discovery and diagnostics. PMID:26317646

  10. Quantitative analysis of cardiovascular MR images.

    PubMed

    van der Geest, R J; de Roos, A; van der Wall, E E; Reiber, J H

    1997-06-01

    The diagnosis of cardiovascular disease requires the precise assessment of both morphology and function. Nearly all aspects of cardiovascular function and flow can be quantified nowadays with fast magnetic resonance (MR) imaging techniques. Conventional and breath-hold cine MR imaging allow the precise and highly reproducible assessment of global and regional left ventricular function. During the same examination, velocity encoded cine (VEC) MR imaging provides measurements of blood flow in the heart and great vessels. Quantitative image analysis often still relies on manual tracing of contours in the images. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with the manual and tedious processing of the images. Recent progress in MR imaging of the coronary arteries and myocardial perfusion imaging with contrast media, along with the further development of faster imaging sequences, suggest that MR imaging could evolve into a single technique ('one stop shop') for the evaluation of many aspects of heart disease. As a result, it is very likely that the need for automated image segmentation and analysis software algorithms will further increase. In this paper the developments directed towards the automated image analysis and semi-automated contour detection for cardiovascular MR imaging are presented.

  11. Systems Operations Studies for Automated Guideway Transit Systems : Quantitative Analysis of Alternative AGT Operational Control Strategies

    DOT National Transportation Integrated Search

    1981-10-01

    The objectives of the Systems Operation Studies (SOS) for automated guideway transit (AGT) systems are to develop models for the analysis of system operations, to evaluate performance and cost, and to establish guidelines for the design and operation...

  12. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    PubMed

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
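
    To make the segmentation step concrete, here is a hedged scikit-image sketch of nuclear demarcation from a DNA-stain channel: Otsu threshold, then watershed on the distance transform to split touching nuclei. Parameters such as min_distance are illustrative, not values from the chapter.

    ```python
    # Sketch of nuclear segmentation for HCI, assuming a DNA-stain channel.
    # Library calls are standard scikit-image/scipy; parameters illustrative.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, measure, segmentation, feature

    def segment_nuclei(dna_channel: np.ndarray) -> np.ndarray:
        mask = dna_channel > filters.threshold_otsu(dna_channel)
        distance = ndi.distance_transform_edt(mask)
        # Seeds: local maxima of the distance map, ideally one per nucleus.
        coords = feature.peak_local_max(distance, min_distance=10,
                                        labels=measure.label(mask))
        seeds = np.zeros(mask.shape, dtype=int)
        seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        # Watershed on the inverted distance map splits touching nuclei.
        return segmentation.watershed(-distance, seeds, mask=mask)
    ```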

  13. ASPECTS: an automation-assisted SPE method development system.

    PubMed

    Li, Ming; Chou, Judy; King, Kristopher W; Yang, Liyu

    2013-07-01

    A conventional SPE method development (MD) process typically involves deciding the chemistry of the sorbent and eluent based on information about the analyte; experimentally preparing and trying out various combinations of adsorption chemistry and elution conditions; quantitatively evaluating the various conditions; and comparing the quantitative results from all combinations of conditions to select the best condition for method qualification. The second and fourth steps have until now been performed mostly manually. We developed an automation-assisted system that expedites the conventional SPE MD process by automating 99% of the second step, and expedites the fourth step by automatically processing the results data and presenting it to the analyst in a user-friendly format. The automation-assisted SPE MD system greatly reduces the manual labor in SPE MD work, prevents analyst errors from causing misinterpretation of quantitative results, and shortens data analysis and interpretation time.

  14. Nonlinear optical microscopy: use of second harmonic generation and two-photon microscopy for automated quantitative liver fibrosis studies.

    PubMed

    Sun, Wanxin; Chang, Shi; Tai, Dean C S; Tan, Nancy; Xiao, Guangfa; Tang, Huihuan; Yu, Hanry

    2008-01-01

    Liver fibrosis is associated with an abnormal increase in extracellular matrix in chronic liver diseases. Quantitative characterization of fibrillar collagen in intact tissue is essential for both fibrosis studies and clinical applications. Commonly used methods, histological staining followed by either semiquantitative or computerized image analysis, suffer from limited sensitivity and accuracy and from operator-dependent variation. The fibrillar collagen in sinusoids of normal livers can be observed through second-harmonic generation (SHG) microscopy. The two-photon excited fluorescence (TPEF) images, recorded simultaneously with SHG, clearly reveal the hepatocyte morphology. We have systematically optimized the parameters for quantitative SHG/TPEF imaging of liver tissue and developed fully automated image analysis algorithms to extract information on collagen changes and cell necrosis. Subtle changes in the distribution and amount of collagen and in cell morphology are quantitatively characterized in SHG/TPEF images. Compared to traditional staining, such as Masson's trichrome and Sirius red, SHG/TPEF is a sensitive quantitative tool for automated collagen characterization in liver tissue. Our system allows for enhanced detection and quantification of sinusoidal collagen fibers in fibrosis research and clinical diagnostics.

  15. High-Content Screening for Quantitative Cell Biology.

    PubMed

    Mattiazzi Usaj, Mojca; Styles, Erin B; Verster, Adrian J; Friesen, Helena; Boone, Charles; Andrews, Brenda J

    2016-08-01

    High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future.

  16. Toward standardized quantitative image quality (IQ) assessment in computed tomography (CT): A comprehensive framework for automated and comparative IQ analysis based on ICRU Report 87.

    PubMed

    Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram

    2016-01-01

    Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics modulation transfer function (MTF) and noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field-of-view, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of edge-spread function, line-spread-function, and MTF. 3D NPS is calculated according to ICRU Report 87, and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods.
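
    As an illustration of one of the Fourier IQ metrics named above, the sketch below computes a radially averaged 1-D noise-power spectrum from a stack of uniform-phantom ROIs, following the generic ICRU-style recipe (detrend, average the squared FFT magnitude, normalize by pixel area over ROI size). It is a simplified stand-in, not the published framework's code.

    ```python
    # Sketch of the radially averaged 1-D noise-power spectrum (NPS) from
    # uniform-phantom ROIs; normalization follows the standard convention
    # NPS = (pixel area / ROI pixels) * <|FFT of detrended ROI|^2>.
    import numpy as np

    def radial_nps_1d(rois: np.ndarray, pixel_mm: float):
        """rois: (n, N, N) stack of uniform-region ROIs in HU."""
        n, N, _ = rois.shape
        detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
        nps2d = np.mean(np.abs(np.fft.fftshift(np.fft.fft2(detrended),
                                               axes=(1, 2)))**2,
                        axis=0) * (pixel_mm**2) / (N * N)
        # Radial average around the zero-frequency center.
        freqs = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_mm))
        fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
        fr = np.hypot(fx, fy)
        bins = np.linspace(0, freqs.max(), N // 2)
        idx = np.digitize(fr.ravel(), bins)
        nps1d = np.array([nps2d.ravel()[idx == i].mean()
                          for i in range(1, len(bins))])
        return bins[1:], nps1d  # (frequency in 1/mm, NPS in HU^2 mm^2)
    ```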

  17. Quantitative semi-automated analysis of morphogenesis with single-cell resolution in complex embryos.

    PubMed

    Giurumescu, Claudiu A; Kang, Sukryool; Planchon, Thomas A; Betzig, Eric; Bloomekatz, Joshua; Yelon, Deborah; Cosman, Pamela; Chisholm, Andrew D

    2012-11-01

    A quantitative understanding of tissue morphogenesis requires description of the movements of individual cells in space and over time. In transparent embryos, such as C. elegans, fluorescently labeled nuclei can be imaged in three-dimensional time-lapse (4D) movies and automatically tracked through early cleavage divisions up to ~350 nuclei. A similar analysis of later stages of C. elegans development has been challenging owing to the increased error rates of automated tracking of large numbers of densely packed nuclei. We present Nucleitracker4D, a freely available software solution for tracking nuclei in complex embryos that integrates automated tracking of nuclei in local searches with manual curation. Using these methods, we have been able to track >99% of all nuclei generated in the C. elegans embryo. Our analysis reveals that ventral enclosure of the epidermis is accompanied by complex coordinated migration of the neuronal substrate. We can efficiently track large numbers of migrating nuclei in 4D movies of zebrafish cardiac morphogenesis, suggesting that this approach is generally useful in situations in which the number, packing or dynamics of nuclei present challenges for automated tracking.
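
    The core of any such tracker is frame-to-frame linking of nuclei; the sketch below shows the simplest greedy nearest-neighbour version. Real pipelines like the one described add gating, division handling, and the manual curation emphasized above; the distance threshold here is hypothetical.

    ```python
    # Sketch of greedy nearest-neighbour linking of nuclei between two
    # consecutive frames of a 4D movie; a simplified stand-in for the
    # local-search tracking described above.
    import numpy as np
    from scipy.spatial import cKDTree

    def link_frames(prev_xyz: np.ndarray, next_xyz: np.ndarray,
                    max_dist: float = 5.0):
        """Return (i_prev, j_next) links between two (n, 3) point sets."""
        tree = cKDTree(next_xyz)
        dist, j = tree.query(prev_xyz, k=1)
        links, taken = [], set()
        for i in np.argsort(dist):            # confident (short) links first
            if dist[i] <= max_dist and j[i] not in taken:
                links.append((int(i), int(j[i])))
                taken.add(int(j[i]))
        return links
    ```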

  18. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control over an automated source-finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities that help the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics to enable collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.

  19. Automation of dimethylation after guanidination labeling chemistry and its compatibility with common buffers and surfactants for mass spectrometry-based shotgun quantitative proteome analysis.

    PubMed

    Lo, Andy; Tang, Yanan; Chen, Lu; Li, Liang

    2013-07-25

    Isotope labeling liquid chromatography-mass spectrometry (LC-MS) is a major analytical platform for quantitative proteome analysis. Incorporation of the isotopes used to distinguish samples plays a critical role in the success of this strategy. In this work, we optimized and automated a chemical derivatization protocol (dimethylation after guanidination, 2MEGA) to increase labeling reproducibility and reduce human intervention. We also evaluated the reagent compatibility of this protocol for handling biological samples in different types of buffers and surfactants. A commercially available liquid handler was used for reagent dispensation to minimize analyst intervention, and at least twenty protein digest samples could be prepared in a single run. Different front-end sample preparation methods for protein solubilization (SDS, urea, Rapigest™, and ProteaseMAX™) and two commercially available cell lysis buffers were evaluated for compatibility with the automated protocol. It was found that better than 94% of the desired labeling could be obtained under all conditions studied except urea, where the rate was reduced to about 92% due to carbamylation of the peptide amines. This work illustrates that the automated 2MEGA labeling process can be used to handle a wide range of protein samples containing various reagents that are often encountered in protein sample preparation for quantitative proteome analysis.

  20. Automated Quantitative Nuclear Cardiology Methods

    PubMed Central

    Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.

    2016-01-01

    Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779

  1. The impact of injector-based contrast agent administration in time-resolved MRA.

    PubMed

    Budjan, Johannes; Attenberger, Ulrike I; Schoenberg, Stefan O; Pietsch, Hubertus; Jost, Gregor

    2018-05-01

    Time-resolved contrast-enhanced MR angiography (4D-MRA), which allows the simultaneous visualization of the vasculature and blood-flow dynamics, is widely used in clinical routine. In this study, the impact of two different contrast agent injection methods on 4D-MRA was examined in a controlled, standardized setting in an animal model. Six anesthetized Goettingen minipigs underwent two identical 4D-MRA examinations at 1.5 T in a single session. The contrast agent (0.1 mmol/kg body weight gadobutrol, followed by 20 ml saline) was injected using either manual injection or an automated injection system. A quantitative comparison of vascular signal enhancement and quantitative renal perfusion analyses were performed. Analysis of signal enhancement revealed higher peak enhancements and shorter time to peak intervals for the automated injection. Significantly different bolus shapes were found: automated injection resulted in a compact first-pass bolus shape clearly separated from the recirculation while manual injection resulted in a disrupted first-pass bolus with two peaks. In the quantitative perfusion analyses, statistically significant differences in plasma flow values were found between the injection methods. The results of both qualitative and quantitative 4D-MRA depend on the contrast agent injection method, with automated injection providing more defined bolus shapes and more standardized examination protocols. • Automated and manual contrast agent injection result in different bolus shapes in 4D-MRA. • Manual injection results in an undefined and interrupted bolus with two peaks. • Automated injection provides more defined bolus shapes. • Automated injection can lead to more standardized examination protocols.
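
    A minimal sketch of the enhancement-curve metrics discussed above: peak enhancement, time to peak, and a crude two-peak check for a disrupted first-pass bolus. The prominence criterion is an illustrative heuristic, not the study's analysis software.

    ```python
    # Sketch: peak enhancement, time-to-peak and a two-peak check from a
    # 4D-MRA vessel-ROI signal-time curve. Thresholds are illustrative.
    import numpy as np
    from scipy.signal import find_peaks

    def bolus_metrics(signal: np.ndarray, t_sec: np.ndarray,
                      baseline_pts: int = 5) -> dict:
        base = signal[:baseline_pts].mean()        # pre-contrast baseline
        enhancement = signal - base
        peak_idx = int(np.argmax(enhancement))
        peaks, _ = find_peaks(enhancement, prominence=0.2 * enhancement.max())
        return {
            "peak_enhancement": float(enhancement[peak_idx]),
            "time_to_peak_s": float(t_sec[peak_idx] - t_sec[0]),
            # e.g. the disrupted two-peak shape seen with manual injection
            "first_pass_split": len(peaks) > 1,
        }
    ```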

  2. Silicon sheet growth development for the large area silicon sheet task of the low cost solar array project. Quantitative analysis of defects in silicon

    NASA Technical Reports Server (NTRS)

    Natesh, R.

    1978-01-01

    The various steps involved in obtaining quantitative information of structural defects in crystalline silicon samples are described. Procedures discussed include: (1) chemical polishing; (2) chemical etching; and (3) automated image analysis of samples on the QTM 720 System.

  3. High-Content Microscopy Analysis of Subcellular Structures: Assay Development and Application to Focal Adhesion Quantification.

    PubMed

    Kroll, Torsten; Schmidt, David; Schwanitz, Georg; Ahmad, Mubashir; Hamann, Jana; Schlosser, Corinne; Lin, Yu-Chieh; Böhm, Konrad J; Tuckermann, Jan; Ploubidou, Aspasia

    2016-07-01

    High-content analysis (HCA) converts raw light microscopy images to quantitative data through the automated extraction, multiparametric analysis, and classification of the relevant information content. Combined with automated high-throughput image acquisition, HCA applied to the screening of chemicals or RNAi-reagents is termed high-content screening (HCS). Its power in quantifying cell phenotypes makes HCA applicable also to routine microscopy. However, developing effective HCA and bioinformatic analysis pipelines for acquisition of biologically meaningful data in HCS is challenging. Here, the step-by-step development of an HCA assay protocol and an HCS bioinformatics analysis pipeline are described. The protocol's power is demonstrated by application to focal adhesion (FA) detection, quantitative analysis of multiple FA features, and functional annotation of signaling pathways regulating FA size, using primary data of a published RNAi screen. The assay and the underlying strategy are aimed at researchers performing microscopy-based quantitative analysis of subcellular features, on a small scale or in large HCS experiments. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.

  4. Automated frame selection process for high-resolution microendoscopy

    NASA Astrophysics Data System (ADS)

    Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-04-01

    We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
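
    As an illustration of the frame-selection idea, the sketch below scores each frame by its mean absolute difference to neighbouring frames and returns the least-motion frame; this is a simplified stand-in for the published algorithm.

    ```python
    # Sketch of automated frame selection by minimal motion artifact:
    # score frames by inter-frame difference and keep the most stable one.
    import numpy as np

    def select_frame(video: np.ndarray) -> int:
        """video: (n_frames, H, W) grayscale sequence; returns best index."""
        frames = video.astype(np.float32)
        diffs = np.mean(np.abs(np.diff(frames, axis=0)), axis=(1, 2))  # n-1 scores
        # Motion score of frame i = mean difference to previous and next frame.
        motion = np.empty(len(frames))
        motion[0], motion[-1] = diffs[0], diffs[-1]
        motion[1:-1] = 0.5 * (diffs[:-1] + diffs[1:])
        return int(np.argmin(motion))
    ```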

  5. Localization-based super-resolution imaging meets high-content screening.

    PubMed

    Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste

    2017-12-01

    Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.

  6. A standardized kit for automated quantitative assessment of candidate protein biomarkers in human plasma.

    PubMed

    Percy, Andrew J; Mohammed, Yassene; Yang, Juncong; Borchers, Christoph H

    2015-12-01

    An increasingly popular mass spectrometry-based quantitative approach for health-related research in the biomedical field involves the use of stable isotope-labeled standards (SIS) and multiple/selected reaction monitoring (MRM/SRM). To improve inter-laboratory precision and enable more widespread use of this 'absolute' quantitative technique in disease-biomarker assessment studies, methods must be standardized. Results/methodology: Using this MRM-with-SIS-peptide approach, we developed an automated method (encompassing sample preparation, processing and analysis) for quantifying 76 candidate protein markers (spanning >4 orders of magnitude in concentration) in neat human plasma. The assembled biomarker assessment kit - the 'BAK-76' - contains the essential materials (SIS mixes), methods (for acquisition and analysis), and tools (Qualis-SIS software) for performing biomarker discovery or verification studies in a rapid and standardized manner.

  7. Quantitative semi-automated analysis of morphogenesis with single-cell resolution in complex embryos

    PubMed Central

    Giurumescu, Claudiu A.; Kang, Sukryool; Planchon, Thomas A.; Betzig, Eric; Bloomekatz, Joshua; Yelon, Deborah; Cosman, Pamela; Chisholm, Andrew D.

    2012-01-01

    A quantitative understanding of tissue morphogenesis requires description of the movements of individual cells in space and over time. In transparent embryos, such as C. elegans, fluorescently labeled nuclei can be imaged in three-dimensional time-lapse (4D) movies and automatically tracked through early cleavage divisions up to ~350 nuclei. A similar analysis of later stages of C. elegans development has been challenging owing to the increased error rates of automated tracking of large numbers of densely packed nuclei. We present Nucleitracker4D, a freely available software solution for tracking nuclei in complex embryos that integrates automated tracking of nuclei in local searches with manual curation. Using these methods, we have been able to track >99% of all nuclei generated in the C. elegans embryo. Our analysis reveals that ventral enclosure of the epidermis is accompanied by complex coordinated migration of the neuronal substrate. We can efficiently track large numbers of migrating nuclei in 4D movies of zebrafish cardiac morphogenesis, suggesting that this approach is generally useful in situations in which the number, packing or dynamics of nuclei present challenges for automated tracking. PMID:23052905

  8. Influence of sample preparation and reliability of automated numerical refocusing in stain-free analysis of dissected tissues with quantitative phase digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Kemper, Björn; Lenz, Philipp; Bettenworth, Dominik; Krausewitz, Philipp; Domagk, Dirk; Ketelhut, Steffi

    2015-05-01

    Digital holographic microscopy (DHM) has been demonstrated to be a versatile tool for high-resolution, non-destructive quantitative phase imaging of surfaces and for multi-modal, minimally-invasive monitoring of living cell cultures in vitro. DHM provides quantitative monitoring of physiological processes through functional imaging and structural analysis, which, for example, gives new insight into the signalling of cellular water permeability and into cell morphology changes due to toxins and infections. Quantitative DHM phase contrast also opens up prospective application fields in the analysis of dissected tissues, through stain-free imaging and the quantification of tissue density changes. We show that DHM allows imaging of different tissue layers with high contrast in unstained tissue sections. As the investigation of fixed samples represents a very important application field in pathology, we also analyzed the influence of the sample preparation. The retrieved data demonstrate that the quality of quantitative DHM phase images of dissected tissues depends strongly on the fixing method and common staining agents. As the reconstruction in DHM is performed numerically, multi-focus imaging is achieved from a single digital hologram. Thus, we evaluated the automated refocussing feature of DHM for application to different types of dissected tissues and found that highly reproducible holographic autofocussing can be achieved on moderately stained samples. Finally, it is demonstrated that alterations of the spatial refractive index distribution in murine and human tissue samples represent a reliable absolute parameter related to different degrees of inflammation in experimental colitis and Crohn's disease. This paves the way towards the use of DHM in digital pathology for automated histological examinations and towards further studies to elucidate the translational potential of quantitative phase microscopy for the clinical management of patients, e.g., with inflammatory bowel disease.

  9. "BRAIN": Baruch Retrieval of Automated Information for Negotiations.

    ERIC Educational Resources Information Center

    Levenstein, Aaron, Ed.

    1981-01-01

    A data processing program that can be used as a research and collective bargaining aid for colleges is briefly described and the fields of the system are outlined. The system, known as BRAIN (Baruch Retrieval of Automated Information for Negotiations), is designed primarily as an instrument for quantitative and qualitative analysis. BRAIN consists…

  10. An Automated System for Chromosome Analysis

    NASA Technical Reports Server (NTRS)

    Castleman, K. R.; Melnyk, J. H.

    1976-01-01

    The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples, and to provide a basis for statistical analysis of quantitative chromosome measurement data are described.

  11. Automated classification of cell morphology by coherence-controlled holographic microscopy

    NASA Astrophysics Data System (ADS)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, enabling quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could provide valuable help in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all preconditions for the accurate automated analysis of live cell behavior while enabling noninvasive, label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.
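
    The comparison described above can be prototyped along these lines with scikit-learn: train the same classifier on morphometric features alone and on morphometric plus quantitative-phase features, then compare cross-validated accuracy. The feature matrices here are random placeholders, so the printed accuracies demonstrate only the workflow.

    ```python
    # Sketch: same classifier, morphometric (MO) features alone vs.
    # MO + quantitative-phase features, compared by cross-validation.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n = 300
    X_mo = rng.normal(size=(n, 8))     # e.g. area, perimeter, circularity ...
    X_phase = rng.normal(size=(n, 4))  # e.g. mean/variance of phase (dry mass)
    y = rng.integers(0, 2, size=n)     # two morphology classes (placeholder)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc_mo = cross_val_score(clf, X_mo, y, cv=5).mean()
    acc_all = cross_val_score(clf, np.hstack([X_mo, X_phase]), y, cv=5).mean()
    print(f"MO only: {acc_mo:.2f}  MO + phase: {acc_all:.2f}")
    ```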

  12. Automated classification of cell morphology by coherence-controlled holographic microscopy.

    PubMed

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, enabling quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could provide valuable help in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all preconditions for the accurate automated analysis of live cell behavior while enabling noninvasive, label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.

  13. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-image sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that when the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from quantitative and semi-quantitative analyses showed a similar trend across operative time points, whereas when the AIFs differed, the CBF ratios differed as well. We concluded that using local maxima one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
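
    For context, quantitative DSC analysis typically estimates CBF by truncated-SVD deconvolution of the tissue curve with the AIF; a minimal numpy sketch of that standard approach follows. The truncation threshold is illustrative, and absolute scaling constants are omitted.

    ```python
    # Sketch of truncated-SVD deconvolution of a tissue concentration curve
    # with the AIF, the standard DSC-MRI route to a CBF estimate.
    import numpy as np

    def cbf_svd(aif: np.ndarray, tissue: np.ndarray, dt: float,
                thresh: float = 0.2) -> float:
        """aif, tissue: concentration-time curves (n,); dt: sampling interval (s)."""
        n = len(aif)
        # Lower-triangular convolution matrix built from the AIF.
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # truncate small SVs
        residue = Vt.T @ (s_inv * (U.T @ tissue))  # CBF * residue function R(t)
        return float(residue.max())  # CBF estimate, up to scaling constants
    ```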

  14. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    PubMed

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom activity ratios 9.7:1, 4:1, and 2:1, respectively. For all phantoms and at all contrast ratios, the average RMS error was found to be significantly lower for the proposed automated method compared to the manual analysis of the phantom scans. The uptake measurements produced by the automated method showed high correlation with the independent reference standard (R² ≥ 0.9987). In addition, the average computing time for the automated method was 30.6 s and was found to be significantly lower (P ≪ 0.001) compared to manual analysis (mean: 247.8 s). The proposed automated approach was found to have less error when measured against the independent reference than the manual approach. It can be easily adapted to other phantoms with spherical inserts. In addition, it eliminates inter- and intraoperator variability in PET phantom analysis and is significantly more time efficient, and therefore, represents a promising approach to facilitate and simplify PET standardization and harmonization efforts.
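
    The insert-candidate detection step described above is a scale-space search; as a hedged stand-in, the sketch below uses scikit-image's Laplacian-of-Gaussian blob detector on a PET volume. The published method adds score-based labeling and robust model fitting on top of such candidates; the sigma range and threshold here are illustrative.

    ```python
    # Sketch of scale-space insert-candidate detection: find bright spherical
    # inserts in a 3-D PET volume with a Laplacian-of-Gaussian blob detector.
    import numpy as np
    from skimage.feature import blob_log

    def find_insert_candidates(pet_volume: np.ndarray, voxel_mm: float):
        """pet_volume: 3-D activity array; returns (z, y, x, radius_mm) rows."""
        norm = pet_volume / pet_volume.max()
        blobs = blob_log(norm, min_sigma=2, max_sigma=15, num_sigma=10,
                         threshold=0.1)
        # blob_log returns (z, y, x, sigma); in 3-D, radius ~ sigma * sqrt(3).
        radii_mm = blobs[:, 3] * np.sqrt(3) * voxel_mm
        return np.column_stack([blobs[:, :3], radii_mm])
    ```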

  15. Quantitative coronary plaque analysis predicts high-risk plaque morphology on coronary computed tomography angiography: results from the ROMICAT II trial.

    PubMed

    Liu, Ting; Maurovich-Horvat, Pál; Mayrhofer, Thomas; Puchner, Stefan B; Lu, Michael T; Ghemigian, Khristine; Kitslaar, Pieter H; Broersen, Alexander; Pursnani, Amit; Hoffmann, Udo; Ferencik, Maros

    2018-02-01

    Semi-automated software can provide quantitative assessment of atherosclerotic plaques on coronary CT angiography (CTA). The relationship between established qualitative high-risk plaque features and quantitative plaque measurements has not been studied. We analyzed the association between quantitative plaque measurements and qualitative high-risk plaque features on coronary CTA. We included 260 patients with plaque who underwent coronary CTA in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) II trial. Quantitative plaque assessment and qualitative plaque characterization were performed on a per coronary segment basis. Quantitative coronary plaque measurements included plaque volume, plaque burden, remodeling index, and diameter stenosis. In qualitative analysis, high-risk plaque was present if positive remodeling, low CT attenuation plaque, napkin-ring sign or spotty calcium were detected. Univariable and multivariable logistic regression analyses were performed to assess the association between quantitative and qualitative high-risk plaque assessment. Among 888 segments with coronary plaque, high-risk plaque was present in 391 (44.0%) segments by qualitative analysis. In quantitative analysis, segments with high-risk plaque had higher total plaque volume, low CT attenuation plaque volume, plaque burden and remodeling index. Quantitatively assessed low CT attenuation plaque volume (odds ratio 1.12 per 1 mm³, 95% CI 1.04-1.21), positive remodeling (odds ratio 1.25 per 0.1, 95% CI 1.10-1.41) and plaque burden (odds ratio 1.53 per 0.1, 95% CI 1.08-2.16) were associated with high-risk plaque. Quantitative coronary plaque characteristics (low CT attenuation plaque volume, positive remodeling and plaque burden) measured by semi-automated software correlated with qualitative assessment of high-risk plaque features.
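
    A sketch of the per-segment logistic regression underlying the reported odds ratios, using statsmodels on a synthetic data frame; column names, effect sizes, and the simulated outcome are all placeholders meant only to show the modeling step.

    ```python
    # Sketch: logistic regression of qualitative high-risk plaque presence on
    # quantitative plaque measurements; all data below are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "low_hu_volume_mm3": rng.gamma(2.0, 5.0, 888),
        "remodeling_index": rng.normal(1.0, 0.15, 888),
        "plaque_burden": rng.beta(2, 5, 888),
    })
    # Simulated outcome with made-up effect sizes, for illustration only.
    logit = 0.1 * df.low_hu_volume_mm3 + 2.0 * (df.remodeling_index - 1.0) - 1.0
    df["high_risk"] = (rng.random(888) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["low_hu_volume_mm3", "remodeling_index", "plaque_burden"]])
    fit = sm.Logit(df["high_risk"], X).fit(disp=0)
    print(np.exp(fit.params))  # odds ratios per 1-unit change in each measurement
    ```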

  16. Holographic Interferometry and Image Analysis for Aerodynamic Testing

    DTIC Science & Technology

    1980-09-01

    tunnels, (2) development of automated image analysis techniques for reducing quantitative flow-field data from holographic interferograms, and (3) investigation and development of software for the application of digital image analysis to other photographic techniques used in wind tunnel testing.

  17. Quantitative determination of opioids in whole blood using fully automated dried blood spot desorption coupled to on-line SPE-LC-MS/MS.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-01-01

    Opioids are well-known, widely used painkillers. Increased stability of opioids in the dried blood spot (DBS) matrix compared to blood/plasma has been described. Other benefits provided by DBS techniques include point-of-care collection, less invasive micro sampling, more economical shipment, and convenient storage. Current methodology for analysis of micro whole blood samples for opioids is limited to the classical DBS workflow, including tedious manual punching of the DBS cards followed by extraction and liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. The goal of this study was to develop and validate a fully automated on-line sample preparation procedure for the analysis of DBS micro samples relevant to the detection of opioids in finger prick blood. To this end, automated flow-through elution of DBS cards was followed by on-line solid-phase extraction (SPE) and analysis by LC-MS/MS. Selective, sensitive, accurate, and reproducible quantitation of five representative opioids in human blood at sub-therapeutic, therapeutic, and toxic levels was achieved. The range of reliable response (R² ≥ 0.997) was 1 to 500 ng/mL whole blood for morphine, codeine, oxycodone, and hydrocodone, and 0.1 to 50 ng/mL for fentanyl. Inter-day, intra-day, and matrix inter-lot accuracy and precision were less than 15%, even at the lower limit of quantitation (LLOQ). The method was successfully used to measure hydrocodone and its major metabolite norhydrocodone in incurred human samples. Our data support the enormous potential of DBS sampling and automated analysis for monitoring opioids as well as other pharmaceuticals in both anti-doping and pain management regimens.
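
    For illustration, the quantitation step behind such a validated assay reduces to a calibration fit; the sketch below fits analyte/internal-standard peak-area ratios against nominal concentrations and back-calculates an unknown. Values are invented, and the unweighted fit is a simplification (bioanalytical assays often use weighted regression).

    ```python
    # Sketch of LC-MS/MS quantitation: linear calibration of peak-area ratio
    # vs. nominal concentration, then back-calculation of unknowns.
    import numpy as np

    nominal = np.array([1, 5, 10, 50, 100, 250, 500])  # ng/mL, e.g. morphine
    ratios = np.array([0.021, 0.10, 0.21, 1.02, 2.05, 5.1, 10.2])  # invented

    slope, intercept = np.polyfit(nominal, ratios, 1)
    pred = slope * nominal + intercept
    r2 = 1 - np.sum((ratios - pred) ** 2) / np.sum((ratios - ratios.mean()) ** 2)

    def back_calc(area_ratio: float) -> float:
        """Back-calculate concentration (ng/mL) from an area ratio."""
        return (area_ratio - intercept) / slope

    print(f"R^2 = {r2:.4f}; unknown at ratio 0.50 -> {back_calc(0.50):.1f} ng/mL")
    ```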

  18. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329
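
    TRIC's full graph-based alignment strategy is not reproduced here; the sketch below shows only the core ingredient named above, a non-linear retention-time correction between two runs, fitted here by LOWESS on peptides identified in both runs (all values illustrative):

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        # Retention times (s) of anchor peptides confidently identified in both runs.
        rt_run_a = np.array([312.0, 540.0, 905.0, 1250.0, 1802.0, 2400.0, 3010.0])
        rt_run_b = np.array([335.0, 570.0, 960.0, 1310.0, 1890.0, 2470.0, 3055.0])

        # Fit a smooth, non-linear mapping from run A's time axis to run B's.
        curve = lowess(rt_run_b, rt_run_a, frac=0.6, return_sorted=True)

        def map_rt(rt):
            """Transfer a run-A retention time onto the run-B time axis."""
            return np.interp(rt, curve[:, 0], curve[:, 1])

        print(map_rt(1500.0))   # expected elution time of this analyte in run B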

  19. Complex Genetics of Behavior: BXDs in the Automated Home-Cage.

    PubMed

    Loos, Maarten; Verhage, Matthijs; Spijker, Sabine; Smit, August B

    2017-01-01

    This chapter describes a use case for the genetic dissection and automated analysis of complex behavioral traits using the genetically diverse panel of BXD mouse recombinant inbred strains. Strains of the BXD resource differ widely in terms of gene and protein expression in the brain, as well as in their behavioral repertoire. Such a large mouse resource opens the possibility of gene-finding studies underlying distinct behavioral phenotypes; however, it also poses a challenge for behavioral phenotyping. To address the specifics of large-scale screening, we describe: (1) how to assess mouse behavior systematically in a large genetic cohort, (2) how to dissect automation-derived longitudinal mouse behavior into quantitative parameters, and (3) how to map these quantitative traits to the genome, deriving loci underlying aspects of behavior.

  20. Automated Simulation For Analysis And Design

    NASA Technical Reports Server (NTRS)

    Cantwell, E.; Shenk, Tim; Robinson, Peter; Upadhye, R.

    1992-01-01

    Design Assistant Workstation (DAWN) software being developed to facilitate simulation of qualitative and quantitative aspects of behavior of life-support system in spacecraft, chemical-processing plant, heating and cooling system of large building, or any of variety of systems including interacting process streams and processes. Used to analyze alternative design scenarios or specific designs of such systems. Expert system will automate part of design analysis: reason independently by simulating design scenarios and return to designer with overall evaluations and recommendations.

  1. Automated selected reaction monitoring software for accurate label-free protein quantification.

    PubMed

    Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik

    2012-07-06

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5-19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology.
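
    For reference, the technical-variability figure quoted above is a coefficient of variation across replicate injections; a minimal version of that computation (toy peak areas, not the S. pyogenes data):

        import numpy as np

        # Peak areas: rows = targeted peptides, columns = replicate injections.
        areas = np.array([
            [1.02e6, 0.98e6, 1.05e6],
            [3.4e4,  3.9e4,  3.1e4],
            [7.7e5,  8.1e5,  7.2e5],
        ])

        cv_percent = 100 * areas.std(axis=1, ddof=1) / areas.mean(axis=1)
        print(cv_percent)   # one technical CV per targeted peptide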

  2. Note: An automated image analysis method for high-throughput classification of surface-bound bacterial cell motions.

    PubMed

    Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng

    2015-12-01

    We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacterial cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify motion types for motile rod-shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.
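
    The morphology-feature step can be illustrated with standard tooling (a sketch using scikit-image, not the SiCMoCS implementation): label a binary segmentation and read out per-cell shape features such as area, eccentricity and aspect ratio.

        import numpy as np
        from skimage.measure import label, regionprops

        # Toy binary segmentation: one rod-shaped object (True = cell pixel).
        mask = np.zeros((64, 64), dtype=bool)
        mask[10:14, 10:30] = True

        for cell in regionprops(label(mask)):
            aspect = cell.major_axis_length / max(cell.minor_axis_length, 1e-9)
            print(cell.area, round(cell.eccentricity, 2), round(aspect, 1))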

  3. Accuracy and reproducibility of aortic annular measurements obtained from echocardiographic 3D manual and semi-automated software analyses in patients referred for transcatheter aortic valve implantation: implication for prosthesis size selection.

    PubMed

    Stella, Stefano; Italia, Leonardo; Geremia, Giulia; Rosa, Isabella; Ancona, Francesco; Marini, Claudia; Capogrosso, Cristina; Giglio, Manuela; Montorfano, Matteo; Latib, Azeem; Margonato, Alberto; Colombo, Antonio; Agricola, Eustachio

    2018-02-06

    A 3D transoesophageal echocardiography (3D-TOE) reconstruction tool has recently been introduced. The system automatically configures a geometric model of the aortic root and performs quantitative analysis of these structures. We compared the measurements of the aortic annulus (AA) obtained by semi-automated 3D-TOE quantitative software and by manual analysis with those obtained by multislice computed tomography (MSCT). One hundred and seventy-five patients (mean age 81.3 ± 6.3 years, 77 men) who underwent both MSCT and 3D-TOE for annulus assessment before transcatheter aortic valve implantation were analysed. Hypothetical prosthetic valve sizing was evaluated for both the 3D manual and semi-automated measurements, using the manufacturer-recommended CT-based sizing algorithm as the gold standard. Good correlation between both 3D-TOE methods and MSCT measurements was found, but the semi-automated analysis demonstrated slightly better correlations for AA major diameter (r = 0.89), perimeter (r = 0.89), and area (r = 0.85) (all P < 0.0001) than the manual one. Both 3D methods underestimated the MSCT measurements, but semi-automated measurements showed narrower limits of agreement and less bias than manual measurements for most AA parameters. On average, 3D-TOE semi-automated major diameter, area, and perimeter underestimated the respective MSCT measurements by 7.4%, 3.5%, and 4.4%, respectively, whereas minor diameter was overestimated by 0.3%. Moderate agreement for valve sizing was found for both 3D-TOE techniques: Kappa 0.5 for both semi-automated and manual analysis. Interobserver and intraobserver agreements for the AA measurements were excellent for both techniques (intraclass correlation coefficients for all parameters >0.80). The 3D-TOE semi-automated analysis of AA is feasible and reliable and can be used in clinical practice as an alternative to MSCT for AA assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2018. For permissions, please email: journals.permissions@oup.com.
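
    Bias and limits of agreement of this kind come from a Bland-Altman analysis; a minimal sketch (hypothetical annulus perimeters, not study data):

        import numpy as np

        def bland_altman(a, b):
            """Return bias and 95% limits of agreement between two methods."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        # Hypothetical annulus perimeters (mm): 3D-TOE semi-automated vs. MSCT.
        toe  = [72.1, 68.4, 75.0, 70.2, 66.9]
        msct = [75.3, 71.0, 78.2, 73.5, 69.8]
        print(bland_altman(toe, msct))   # negative bias: TOE underestimates MSCT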

  4. SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.

    PubMed

    Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A

    2016-11-01

    Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame to frame, and to characterize cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame to frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. The package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events; of mother, daughter and neighbouring cells; and the computation of statistics on cellular fluorescence and on the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.

  5. Automated PCR setup for forensic casework samples using the Normalization Wizard and PCR Setup robotic methods.

    PubMed

    Greenspoon, S A; Sykes, K L V; Ban, J D; Pollard, A; Baisden, M; Farr, M; Graham, N; Collins, B L; Green, M M; Christenson, C C

    2006-12-20

    Human genome, pharmaceutical and research laboratories have long enjoyed the application of robotics to performing repetitive laboratory tasks. However, the utilization of robotics in forensic laboratories for processing casework samples is relatively new and poses particular challenges. Since the quantity and quality (a mixture versus a single source sample, the level of degradation, the presence of PCR inhibitors) of the DNA contained within a casework sample is unknown, particular attention must be paid to procedural susceptibility to contamination, as well as DNA yield, especially as it pertains to samples with little biological material. The Virginia Department of Forensic Science (VDFS) has successfully automated forensic casework DNA extraction utilizing the DNA IQ™ System in conjunction with the Biomek 2000 Automation Workstation. Human DNA quantitation is also performed in an almost fully automated fashion utilizing the AluQuant Human DNA Quantitation System and the Biomek 2000 Automation Workstation. Recently, the PCR setup for casework samples has been automated, employing the Biomek 2000 Automation Workstation and Normalization Wizard, Genetic Identity version, which utilizes the quantitation data, imported into the software, to create a customized automated method for DNA dilution, unique to that plate of DNA samples. The PCR Setup software method, used in conjunction with the Normalization Wizard method and written for the Biomek 2000, functions to mix the diluted DNA samples, transfer the PCR master mix, and transfer the diluted DNA samples to PCR amplification tubes. Once the process is complete, the DNA extracts, still on the deck of the robot in PCR amplification strip tubes, are transferred to pre-labeled 1.5 mL tubes for long-term storage using an automated method. The automation of these steps in the process of forensic DNA casework analysis has been accomplished by performing extensive optimization, validation and testing of the software methods.
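
    The normalization step reduces to simple per-well arithmetic; a hedged sketch of the kind of dilution plan such a method computes (the target mass, volumes and pipetting minimum below are invented, and the actual Normalization Wizard logic is proprietary):

        # Illustrative normalization arithmetic: target 1.0 ng of DNA in a 10 uL
        # PCR input; volumes below the pipetting minimum trigger a pre-dilution.
        TARGET_NG, FINAL_UL, MIN_PIPET_UL = 1.0, 10.0, 1.0

        def plan_well(conc_ng_per_ul):
            """Return (action, sample uL, diluent uL) for one extract."""
            needed_ul = TARGET_NG / conc_ng_per_ul
            if needed_ul > FINAL_UL:
                return ("undiluted", FINAL_UL, 0.0)     # too dilute to normalize
            if needed_ul < MIN_PIPET_UL:
                return ("pre-dilute", needed_ul, None)  # schedule intermediate dilution
            return ("normalize", needed_ul, FINAL_UL - needed_ul)

        for conc in (0.05, 0.2, 2.5):                   # ng/uL from imported quant data
            print(conc, plan_well(conc))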

  6. Automated Quantitative Analysis of Retinal Microvasculature in Normal Eyes on Optical Coherence Tomography Angiography.

    PubMed

    Lupidi, Marco; Coscas, Florence; Cagini, Carlo; Fiore, Tito; Spaccini, Elisa; Fruttini, Daniela; Coscas, Gabriel

    2016-09-01

    To describe a new automated quantitative technique for displaying and analyzing macular vascular perfusion using optical coherence tomography angiography (OCT-A) and to determine a normative data set, which might be used as reference in identifying progressive changes due to different retinal vascular diseases. Reliability study. A retrospective review of 47 eyes of 47 consecutive healthy subjects imaged with a spectral-domain OCT-A device was performed in a single institution. Full-spectrum amplitude-decorrelation angiography generated OCT angiograms of the retinal superficial and deep capillary plexuses. A fully automated custom-built software was used to provide quantitative data on the foveal avascular zone (FAZ) features and the total vascular and avascular surfaces. A comparative analysis between central macular thickness (and volume) and FAZ metrics was performed. Repeatability and reproducibility were also assessed in order to establish the feasibility and reliability of the method. The comparative analysis between the superficial capillary plexus and the deep capillary plexus revealed a statistically significant difference (P < .05) in terms of FAZ perimeter, surface, and major axis and a not statistically significant difference (P > .05) when considering total vascular and avascular surfaces. A linear correlation was demonstrated between central macular thickness (and volume) and the FAZ surface. Coefficients of repeatability and reproducibility were less than 0.4, thus demonstrating high intraobserver repeatability and interobserver reproducibility for all the examined data. A quantitative approach on retinal vascular perfusion, which is visible on Spectralis OCT angiography, may offer an objective and reliable method for monitoring disease progression in several retinal vascular diseases. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Driver-centred vehicle automation: using network analysis for agent-based modelling of the driver in highly automated driving systems.

    PubMed

    Banks, Victoria A; Stanton, Neville A

    2016-11-01

    To the average driver, the concept of automation in driving implies that they can become completely 'hands and feet free'. This, however, is a common misconception, one revealed through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation, which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and the application of quantitative network metrics, this agent-based modelling paper shows how the role of the driver remains an integral part of the driving system, indicating the need for designers to ensure drivers are provided with the tools necessary to remain actively in-the-loop despite increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command and control system of automation using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.
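
    Quantitative network metrics of the kind used here can be computed with standard graph tooling; a toy sketch (the agents and links below are invented, not the paper's Cruise Assist model):

        import networkx as nx

        # Toy task network: information flows between agents/subsystems in a
        # driver-initiated automation loop (edges illustrative only).
        g = nx.DiGraph()
        g.add_edges_from([
            ("driver", "cruise_assist"), ("cruise_assist", "driver"),
            ("radar", "cruise_assist"), ("cruise_assist", "throttle"),
            ("driver", "steering"), ("camera", "cruise_assist"),
        ])

        # Centrality indicates how integral each agent remains to the control loop.
        print(sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]))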

  8. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
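
    One common route to subpixel co-registration is upsampled phase correlation; a sketch using scikit-image (the record does not name its algorithms, so this is an assumption about one of the "multiple techniques"):

        import numpy as np
        from skimage.registration import phase_cross_correlation

        rng = np.random.default_rng(1)
        reference = rng.random((256, 256))
        moving = np.roll(reference, shift=(3, -5), axis=(0, 1))   # known offset

        # upsample_factor refines the estimate to 1/100 of a pixel.
        shift, error, _ = phase_cross_correlation(reference, moving, upsample_factor=100)
        print(shift)   # ~[-3.  5.]: translation registering `moving` onto `reference`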

  9. Automated image analysis for quantitative fluorescence in situ hybridization with environmental samples.

    PubMed

    Zhou, Zhi; Pons, Marie Noëlle; Raskin, Lutgarde; Zilles, Julie L

    2007-05-01

    When fluorescence in situ hybridization (FISH) analyses are performed with complex environmental samples, difficulties related to the presence of microbial cell aggregates and nonuniform background fluorescence are often encountered. The objective of this study was to develop a robust and automated quantitative FISH method for complex environmental samples, such as manure and soil. The method and duration of sample dispersion were optimized to reduce the interference of cell aggregates. An automated image analysis program that detects cells from 4',6-diamidino-2-phenylindole (DAPI) micrographs and extracts the maximum and mean fluorescence intensities for each cell from corresponding FISH images was developed with the software Visilog. Intensity thresholds were not consistent even for duplicate analyses, so alternative ways of classifying signals were investigated. In the resulting method, the intensity data were divided into clusters using fuzzy c-means clustering, and the resulting clusters were classified as target (positive) or nontarget (negative). A manual quality control confirmed this classification. With this method, 50.4, 72.1, and 64.9% of the cells in two swine manure samples and one soil sample, respectively, were positive as determined with a 16S rRNA-targeted bacterial probe (S-D-Bact-0338-a-A-18). Manual counting resulted in corresponding values of 52.3, 70.6, and 61.5%, respectively. In the two swine manure samples and one soil sample, 21.6, 12.3, and 2.5% of the cells were positive with an archaeal probe (S-D-Arch-0915-a-A-20), respectively. Manual counting resulted in corresponding values of 22.4, 14.0, and 2.9%, respectively. This automated method should facilitate quantitative analysis of FISH images for a variety of complex environmental samples.
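
    The clustering step named above can be illustrated with a plain fuzzy c-means on per-cell intensities (a self-contained sketch; the study used Visilog, and the intensities below are invented):

        import numpy as np

        def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100, seed=0):
            """Plain fuzzy c-means on 1-D values: returns centers, memberships."""
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(c), size=len(x))        # n x c memberships
            for _ in range(n_iter):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                w = d ** (-2.0 / (m - 1.0))                   # standard FCM update
                u = w / w.sum(axis=1, keepdims=True)
            return centers, u

        # Per-cell FISH intensities; the brighter cluster is read as probe-positive.
        intensities = np.array([18, 22, 25, 30, 120, 135, 150, 29, 140, 21], float)
        centers, u = fuzzy_cmeans_1d(intensities)
        positive = u[:, np.argmax(centers)] > 0.5
        print(centers, positive.mean())   # fraction of cells classified positive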

  10. On the Agreement between Manual and Automated Methods for Single-Trial Detection and Estimation of Features from Event-Related Potentials

    PubMed Central

    Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.

    2015-01-01

    The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significant differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
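
    The categorical agreement measures amount to a few lines of code; a sketch with invented present/absent calls:

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        # Single-trial ERP present/absent calls (1/0) by an observer and an algorithm.
        observer  = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
        algorithm = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])

        kappa = cohen_kappa_score(observer, algorithm)
        # Percentage positive agreement: 2a / (n1 + n2), a = trials both call present.
        ppa = 2 * np.sum((observer == 1) & (algorithm == 1)) / (observer.sum() + algorithm.sum())
        print(round(kappa, 2), round(ppa, 2))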

  11. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    PubMed

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM), which determines rosetting rate, rosette size distribution and parasitaemia via a convenient graphical user interface, is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering results identical to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner by employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate and size, as well as the location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria-specific analysis mode of ARAM offers the functionality to detect arbitrary objects. Automated rosetting analyzer for micrographs has the capability to push malaria research to a more quantitative and statistically significant level with increased reliability due to operator independence. As an installation file for Windows 7, 8.1 and 10 is available for free, ARAM offers a novel open and easy-to-use platform for the malaria community to elucidate rosetting.

  12. Data-Driven Surface Traversability Analysis for Mars 2020 Landing Site Selection

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Rothrock, Brandon; Almeida, Eduardo; Ansar, Adnan; Otero, Richard; Huertas, Andres; Heverly, Matthew

    2015-01-01

    The objective of this paper is three-fold: 1) to describe the engineering challenges in the surface mobility of the Mars 2020 Rover mission that are considered in the landing site selection process, 2) to introduce new automated traversability analysis capabilities, and 3) to present the preliminary analysis results for top candidate landing sites. The analysis capabilities presented in this paper include automated terrain classification, automated rock detection, digital elevation model (DEM) generation, and multi-ROI (region of interest) route planning. These analysis capabilities make it possible to fully utilize the vast volume of high-resolution orbiter imagery, quantitatively evaluate surface mobility requirements for each candidate site, and remove subjectivity from the comparison between sites in terms of engineering considerations. The analysis results supported the discussion in the Second Landing Site Workshop held in August 2015, which resulted in selecting eight candidate sites that will be considered in the third workshop.

  13. A time-series method for automated measurement of changes in mitotic and interphase duration from time-lapse movies.

    PubMed

    Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W

    2011-01-01

    Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only two features: nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence the timing of cell division.
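
    To make the two-feature idea concrete, a toy classifier over one cell's track (thresholds and frame values invented; condensed mitotic chromatin shows as a smaller, brighter nucleus):

        import numpy as np

        # Per-frame nuclear area (px) and mean H2B intensity for one tracked cell.
        area      = np.array([410, 405, 400, 240, 180, 170, 190, 390, 400, 405], float)
        intensity = np.array([95, 96, 94, 160, 210, 215, 200, 100, 97, 96], float)
        MINUTES_PER_FRAME = 10

        # Mitosis: chromatin condenses, so area drops while mean intensity rises.
        mitotic = (area < 0.7 * np.median(area)) & (intensity > 1.5 * np.median(intensity))
        print("mitotic duration (min):", mitotic.sum() * MINUTES_PER_FRAME)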

  14. Temporal lobe epilepsy: quantitative MR volumetry in detection of hippocampal atrophy.

    PubMed

    Farid, Nikdokht; Girard, Holly M; Kemmotsu, Nobuko; Smith, Michael E; Magda, Sebastian W; Lim, Wei Y; Lee, Roland R; McDonald, Carrie R

    2012-08-01

    To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration-cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Quantitative MR imaging-derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%-89.5%) and specificity (92.2%-94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice.
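
    A sketch of the asymmetry-based discrimination step (hypothetical volumes; the study's software and cutoffs are not reproduced): compute a normalized left-right asymmetry and feed it to an ROC analysis.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def asymmetry(left, right):
            """Percent hippocampal volume asymmetry, normalized to the mean volume."""
            return 100 * np.abs(left - right) / ((left + right) / 2)

        # Hypothetical volumes (cm^3): first five subjects TLE (label 1), rest controls.
        left  = np.array([2.4, 2.2, 3.1, 2.5, 2.3, 3.3, 3.4, 3.2, 3.5, 3.3])
        right = np.array([3.2, 3.0, 2.3, 3.3, 3.1, 3.4, 3.3, 3.3, 3.4, 3.2])
        y     = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

        print(roc_auc_score(y, asymmetry(left, right)))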

  15. Automated CT Scan Scores of Bronchiectasis and Air Trapping in Cystic Fibrosis

    PubMed Central

    Swiercz, Waldemar; Heltshe, Sonya L.; Anthony, Margaret M.; Szefler, Paul; Klein, Rebecca; Strain, John; Brody, Alan S.; Sagel, Scott D.

    2014-01-01

    Background: Computer analysis of high-resolution CT (HRCT) scans may improve the assessment of structural lung injury in children with cystic fibrosis (CF). The goal of this cross-sectional pilot study was to validate automated, observer-independent image analysis software to establish objective, simple criteria for bronchiectasis and air trapping. Methods: HRCT scans of the chest were performed in 35 children with CF and compared with scans from 12 disease control subjects. Automated image analysis software was developed to count visible airways on inspiratory images and to measure a low attenuation density (LAD) index on expiratory images. Among the children with CF, relationships among automated measures, Brody HRCT scanning scores, lung function, and sputum markers of inflammation were assessed. Results: The number of total, central, and peripheral airways on inspiratory images and LAD (%) on expiratory images were significantly higher in children with CF compared with control subjects. Among subjects with CF, peripheral airway counts correlated strongly with Brody bronchiectasis scores by two raters (r = 0.86, P < .0001; r = 0.91, P < .0001), correlated negatively with lung function, and were positively associated with sputum free neutrophil elastase activity. LAD (%) correlated with Brody air trapping scores (r = 0.83, P < .0001; r = 0.69, P < .0001) but did not correlate with lung function or sputum inflammatory markers. Conclusions: Quantitative airway counts and LAD (%) on HRCT scans appear to be useful surrogates for bronchiectasis and air trapping in children with CF. Our automated methodology provides objective quantitative measures of bronchiectasis and air trapping that may serve as end points in CF clinical trials. PMID:24114359
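
    The LAD index reduces to counting low-attenuation voxels within the lung; a sketch assuming a conventional air-trapping threshold of -856 HU (the record does not state the cutoff actually used):

        import numpy as np

        def lad_index(hu_volume, lung_mask, threshold_hu=-856):
            """Percent of lung voxels below an attenuation threshold on expiration."""
            voxels = hu_volume[lung_mask]
            return 100 * np.mean(voxels < threshold_hu)

        rng = np.random.default_rng(2)
        scan = rng.normal(-700, 120, size=(8, 64, 64))   # toy expiratory HU values
        mask = np.ones_like(scan, dtype=bool)            # toy whole-volume lung mask
        print(round(lad_index(scan, mask), 1))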

  16. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method’s discrimination rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692

  17. Performance of automated scoring of ER, PR, HER2, CK5/6 and EGFR in breast cancer tissue microarrays in the Breast Cancer Association Consortium

    PubMed Central

    Howat, William J; Blows, Fiona M; Provenzano, Elena; Brook, Mark N; Morris, Lorna; Gazinska, Patrycja; Johnson, Nicola; McDuffus, Leigh‐Anne; Miller, Jodi; Sawyer, Elinor J; Pinder, Sarah; van Deurzen, Carolien H M; Jones, Louise; Sironen, Reijo; Visscher, Daniel; Caldas, Carlos; Daley, Frances; Coulson, Penny; Broeks, Annegien; Sanders, Joyce; Wesseling, Jelle; Nevanlinna, Heli; Fagerholm, Rainer; Blomqvist, Carl; Heikkilä, Päivi; Ali, H Raza; Dawson, Sarah‐Jane; Figueroa, Jonine; Lissowska, Jolanta; Brinton, Louise; Mannermaa, Arto; Kataja, Vesa; Kosma, Veli‐Matti; Cox, Angela; Brock, Ian W; Cross, Simon S; Reed, Malcolm W; Couch, Fergus J; Olson, Janet E; Devillee, Peter; Mesker, Wilma E; Seyaneve, Caroline M; Hollestelle, Antoinette; Benitez, Javier; Perez, Jose Ignacio Arias; Menéndez, Primitiva; Bolla, Manjeet K; Easton, Douglas F; Schmidt, Marjanka K; Pharoah, Paul D; Sherman, Mark E

    2014-01-01

    Breast cancer risk factors and clinical outcomes vary by tumour marker expression. However, individual studies often lack the power required to assess these relationships, and large-scale analyses are limited by the need for high throughput, standardized scoring methods. To address these limitations, we assessed whether automated image analysis of immunohistochemically stained tissue microarrays can permit rapid, standardized scoring of tumour markers from multiple studies. Tissue microarray sections prepared in nine studies containing 20,263 cores from 8,267 breast cancers stained for two nuclear (oestrogen receptor, progesterone receptor), two membranous (human epidermal growth factor receptor 2 and epidermal growth factor receptor) and one cytoplasmic (cytokeratin 5/6) marker were scanned as digital images. Automated algorithms were used to score markers in tumour cells using the Ariol system. We compared automated scores against visual reads, and their associations with breast cancer survival. Approximately 65-70% of tissue microarray cores were satisfactory for scoring. Among satisfactory cores, agreement between dichotomous automated and visual scores was highest for oestrogen receptor (Kappa = 0.76), followed by human epidermal growth factor receptor 2 (Kappa = 0.69) and progesterone receptor (Kappa = 0.67). Automated quantitative scores for these markers were associated with hazard ratios for breast cancer mortality in a dose-response manner. Considering visual scores of epidermal growth factor receptor or cytokeratin 5/6 as the reference, automated scoring achieved excellent negative predictive value (96-98%), but yielded many false positives (positive predictive value = 30-32%). For all markers, we observed substantial heterogeneity in automated scoring performance across tissue microarrays. Automated analysis is a potentially useful tool for large-scale, quantitative scoring of immunohistochemically stained tissue microarrays available in consortia. However, continued optimization, rigorous marker-specific quality control measures and standardization of tissue microarray designs, staining and scoring protocols are needed to enhance results. PMID:27499890

  18. Semi-automated identification of cones in the human retina using circle Hough transform

    PubMed Central

    Bukowska, Danuta M.; Chew, Avenell L.; Huynh, Emily; Kashani, Irwin; Wan, Sue Ling; Wan, Pak Ming; Chen, Fred K

    2015-01-01

    A large number of human retinal diseases are characterized by a progressive loss of cones, the photoreceptors critical for visual acuity and color perception. Adaptive Optics (AO) imaging presents a potential method to study these cells in vivo. However, AO imaging in ophthalmology is relatively new, and quantitative analysis of these images remains difficult and tedious using manual methods. This paper illustrates a novel semi-automated quantitative technique enabling registration of AO images to macular landmarks, cone counting, and quantification of cone radius at specified distances from the foveal center. The new cone counting approach employs the circle Hough transform (cHT) and is compared to automated counting methods, as well as arbitrated manual cone identification. We explore the impact of varying the circle detection parameter on the validity of cHT cone counting and discuss the potential role of using this algorithm in detecting both cones and rods separately. PMID:26713186
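
    The circle Hough transform step is available off the shelf; a sketch on a synthetic frame (blob positions and radii invented, standing in for cone photoreceptors):

        import numpy as np
        from skimage.draw import disk
        from skimage.feature import canny
        from skimage.transform import hough_circle, hough_circle_peaks

        # Toy AO frame: bright blobs standing in for cones.
        img = np.zeros((120, 120))
        for center in [(30, 30), (60, 80), (90, 45)]:
            img[disk(center, 6)] = 1.0

        edges = canny(img, sigma=1.0)
        radii = np.arange(4, 10)                    # candidate cone radii (px)
        accum = hough_circle(edges, radii)
        _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=3)
        print(list(zip(cy, cx, r)))                 # detected centers and radii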

  19. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline.

    PubMed

    Loh, K B; Ramli, N; Tan, L K; Roziah, M; Rahmat, K; Ariffin, H

    2012-07-01

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs from an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. • Diffusion tensor imaging outperforms conventional MRI in depicting white matter maturation. • DTI will become an important clinical tool for diagnosing paediatric neurological diseases. • DTI appears especially helpful for developmental abnormalities, tumours and white matter disease. • An automated processing pipeline assists quantitative analysis of high-throughput DTI data.

  20. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
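
    Fourier descriptors of a closed boundary are straightforward to compute; a sketch (the normalization choices below are one common convention, not necessarily the authors'):

        import numpy as np

        def fourier_descriptors(boundary_xy, n_keep=6):
            """Translation-, scale- and rotation-invariant shape descriptors from
            a closed boundary given as an (N, 2) array of x, y points."""
            z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]
            mags = np.abs(np.fft.fft(z))
            # Drop the DC term (translation), normalize by the first harmonic
            # (scale); magnitudes discard phase (rotation and starting point).
            return mags[2:2 + n_keep] / mags[1]

        theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        circle = np.column_stack([np.cos(theta), np.sin(theta)])
        bumpy = np.column_stack([(1 + 0.2 * np.cos(5 * theta)) * np.cos(theta),
                                 (1 + 0.2 * np.cos(5 * theta)) * np.sin(theta)])
        print(np.round(fourier_descriptors(circle), 3))   # ~0: smooth, round grain
        print(np.round(fourier_descriptors(bumpy), 3))    # energy at a higher harmonic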

  1. Human brain atlas for automated region of interest selection in quantitative susceptibility mapping: application to determine iron content in deep gray matter structures.

    PubMed

    Lim, Issel Anne L; Faria, Andreia V; Li, Xu; Hsu, Johnny T C; Airan, Raag D; Mori, Susumu; van Zijl, Peter C M

    2013-11-15

    The purpose of this paper is to extend the single-subject Eve atlas from Johns Hopkins University, which currently contains diffusion tensor and T1-weighted anatomical maps, by including contrast based on quantitative susceptibility mapping. The new atlas combines a "deep gray matter parcellation map" (DGMPM) derived from a single-subject quantitative susceptibility map with the previously established "white matter parcellation map" (WMPM) from the same subject's T1-weighted and diffusion tensor imaging data into an MNI coordinate map named the "Everything Parcellation Map in Eve Space," also known as the "EvePM." It allows automated segmentation of gray matter and white matter structures. Quantitative susceptibility maps from five healthy male volunteers (30 to 33 years of age) were coregistered to the Eve Atlas with AIR and Large Deformation Diffeomorphic Metric Mapping (LDDMM), and the transformation matrices were applied to the EvePM to produce automated parcellation in subject space. Parcellation accuracy was measured with a kappa analysis for the left and right structures of six deep gray matter regions. For multi-orientation QSM images, the Kappa statistic was 0.85 between automated and manual segmentation, with the inter-rater reproducibility Kappa being 0.89 for the human raters, suggesting "almost perfect" agreement between all segmentation methods. Segmentation seemed slightly more difficult for human raters on single-orientation QSM images, with the Kappa statistic being 0.88 between automated and manual segmentation, and 0.85 and 0.86 between human raters. Overall, this atlas provides a time-efficient tool for automated coregistration and segmentation of quantitative susceptibility data to analyze many regions of interest. These data were used to establish a baseline for normal magnetic susceptibility measurements for over 60 brain structures of 30- to 33-year-old males. Correlating the average susceptibility with age-based iron concentrations in gray matter structures measured by Hallgren and Sourander (1958) allowed interpolation of the average iron concentration of several deep gray matter regions delineated in the EvePM. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Receiver-operating-characteristic analysis of an automated program for analyzing striatal uptake of 123I-ioflupane SPECT images: calibration using visual reads.

    PubMed

    Kuo, Phillip Hsin; Avery, Ryan; Krupinski, Elizabeth; Lei, Hong; Bauer, Adam; Sherman, Scott; McMillan, Natalie; Seibyl, John; Zubal, George

    2013-03-01

    A fully automated objective striatal analysis (OSA) program that quantitates dopamine transporter uptake in subjects with suspected Parkinson's disease was applied to images from clinical ¹²³I-ioflupane studies. The striatal binding ratios, or alternatively the specific binding ratio (SBR) of the putamen with the lowest uptake, were computed, and receiver-operating-characteristic (ROC) analysis was applied to 94 subjects to determine the best discriminator using this quantitative method. Ninety-four ¹²³I-ioflupane SPECT scans were analyzed from patients referred to our clinical imaging department and were reconstructed using the manufacturer-supplied reconstruction and filtering parameters for the radiotracer. Three trained readers conducted independent visual interpretations and reported each case as either normal or showing dopaminergic deficit (abnormal). The same images were analyzed using the OSA software, which locates the striatal and occipital structures and places regions of interest on the caudate and putamen. Additionally, the OSA places a region of interest on the occipital region that is used to calculate the background-subtracted SBR. The lower SBR of the 2 putamen regions was taken as the quantitative report. The 33 normal (bilateral comma-shaped striata) and 61 abnormal (unilateral or bilateral dopaminergic deficit) studies were analyzed to generate ROC curves. Twenty-nine of the scans were interpreted as normal and 59 as abnormal by all 3 readers. For 12 scans, the 3 readers did not unanimously agree in their interpretations (discordant). The ROC analysis, which used the visual-majority-consensus interpretation from the readers as the gold standard, yielded an area under the curve of 0.958 when using 1.08 as the threshold SBR for the lowest putamen. The sensitivity and specificity of the automated quantitative analysis were 95% and 89%, respectively. The OSA program delivers SBR quantitative values that have a high sensitivity and specificity, compared with visual interpretations by trained nuclear medicine readers. Such a program could be a helpful aid for readers not yet experienced with ¹²³I-ioflupane SPECT images and, if further adapted and validated, may be useful to assess disease progression during pharmaceutical testing of therapies.
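
    The SBR arithmetic described above is simple; a sketch with invented ROI means, using the 1.08 cutoff quoted from the ROC analysis:

        def specific_binding_ratio(striatal_mean, occipital_mean):
            """Background-subtracted SBR with occipital counts as reference."""
            return (striatal_mean - occipital_mean) / occipital_mean

        # Mean counts/voxel from ROIs on one reconstructed scan (illustrative values).
        left_putamen, right_putamen, occipital = 41.0, 27.5, 13.0
        sbr = min(specific_binding_ratio(left_putamen, occipital),
                  specific_binding_ratio(right_putamen, occipital))
        print(round(sbr, 2), "abnormal" if sbr < 1.08 else "normal")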

  3. An entirely automated method to score DSS-induced colitis in mice by digital image analysis of pathology slides

    PubMed Central

    Kozlowski, Cleopatra; Jeet, Surinder; Beyer, Joseph; Guerrero, Steve; Lesch, Justin; Wang, Xiaoting; DeVoss, Jason; Diehl, Lauri

    2013-01-01

    The DSS (dextran sulfate sodium) model of colitis is a mouse model of inflammatory bowel disease. Microscopic symptoms include loss of crypt cells from the gut lining and infiltration of inflammatory cells into the colon. An experienced pathologist requires several hours per study to score histological changes in selected regions of the mouse gut. In order to increase the efficiency of scoring, Definiens Developer software was used to devise an entirely automated method to quantify histological changes in the whole H&E slide. When the algorithm was applied to slides from historical drug-discovery studies, automated scores classified 88% of drug candidates in the same way as pathologists’ scores. In addition, another automated image analysis method was developed to quantify colon-infiltrating macrophages, neutrophils, B cells and T cells in immunohistochemical stains of serial sections of the H&E slides. The timing of neutrophil and macrophage infiltration had the highest correlation to pathological changes, whereas T and B cell infiltration occurred later. Thus, automated image analysis enables quantitative comparisons between tissue morphology changes and cell-infiltration dynamics. PMID:23580198

  4. Human performance consequences of stages and levels of automation: an integrated meta-analysis.

    PubMed

    Onnasch, Linda; Wickens, Christopher D; Li, Huiyang; Manzey, Dietrich

    2014-05-01

    We investigated how automation-induced human performance consequences depended on the degree of automation (DOA). Function allocation between human and automation can be represented in terms of the stages and levels taxonomy proposed by Parasuraman, Sheridan, and Wickens. Higher DOAs are achieved both by later stages and higher levels within stages. A meta-analysis based on data of 18 experiments examines the mediating effects of DOA on routine system performance, performance when the automation fails, workload, and situation awareness (SA). The effects of DOA on these measures are summarized by level of statistical significance. We found (a) a clear automation benefit for routine system performance with increasing DOA, (b) a similar but weaker pattern for workload when automation functioned properly, and (c) a negative impact of higher DOA on failure system performance and SA. Most interesting was the finding that negative consequences of automation seem to be most likely when DOA moved across a critical boundary, which was identified between automation supporting information analysis and automation supporting action selection. Results support the proposed cost-benefit trade-off with regard to DOA. It seems that routine performance and workload on one hand, and the potential loss of SA and manual skills on the other hand, directly trade off and that appropriate function allocation can serve only one of the two aspects. Findings contribute to the body of research on adequate function allocation by providing an overall picture through quantitatively combining data from a variety of studies across varying domains.

  5. Comprehensive automation of the solid phase extraction gas chromatographic mass spectrometric analysis (SPE-GC/MS) of opioids, cocaine, and metabolites from serum and other matrices.

    PubMed

    Lerch, Oliver; Temme, Oliver; Daldrup, Thomas

    2014-07-01

    The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps like protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating method parameters on the automated system; only marginal optimization of parameters was necessary. The automation, relying on an x-y-z robot after manual protein precipitation, includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices like urine, different tissues, and heart blood for cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods, proving that the analytical results are equivalent even near the limits of quantification (low ng/mL range). To our best knowledge, this application is the first one reported in the literature employing this sample preparation system.

  6. Reliability of fully automated versus visually controlled pre- and post-processing of resting-state EEG.

    PubMed

    Hatz, F; Hardmeier, M; Bousleiman, H; Rüegg, S; Schindler, C; Fuhr, P

    2015-02-01

    To compare the reliability of a newly developed Matlab® toolbox for the fully automated pre- and post-processing of resting-state EEG (automated analysis, AA) with the reliability of analysis involving visually controlled pre- and post-processing (VA). 34 healthy volunteers (median age 38.2 years, range 20-49; 82% female) underwent three consecutive 256-channel resting-state EEGs at one-year intervals. Results of the frequency analysis of AA and VA were compared with Pearson correlation coefficients, and reliability over time was assessed with intraclass correlation coefficients (ICC). The mean correlation coefficient between AA and VA was 0.94±0.07; the mean ICC was 0.83±0.05 for AA and 0.84±0.07 for VA. AA and VA yield very similar results for spectral EEG analysis and are equally reliable. AA is less time-consuming, completely standardized, and independent of raters and their training. Automated processing of EEG facilitates workflow in quantitative EEG analysis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
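
    Test-retest reliability of this kind is typically an ICC(2,1); a self-contained sketch on toy spectral values (the Shrout-Fleiss two-way random, absolute-agreement, single-measure form, which may differ from the exact variant the authors used):

        import numpy as np

        def icc_2_1(x):
            """ICC(2,1) for an (n subjects, k sessions) data matrix."""
            n, k = x.shape
            grand = x.mean()
            ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
            ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
            resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
            ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        # Alpha-band relative power for 6 subjects over 3 yearly recordings (toy data).
        power = np.array([[.42, .44, .41], [.30, .29, .31], [.55, .53, .56],
                          [.38, .40, .39], [.47, .45, .48], [.33, .35, .34]])
        print(round(icc_2_1(power), 2))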

  7. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.

    PubMed

    Schaefer, Kristin E; Chen, Jessie Y C; Szalma, James L; Hancock, P A

    2016-05-01

    We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. Trust is increasingly important in the growing need for synergistic human-machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human-robot interaction to include all of automation interaction. We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each of which represents a medium effect. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors proved insufficient in number to calculate at this time. Findings provide a quantitative representation of factors influencing the development of trust in automation as well as identify additional areas of needed empirical research. This work has important implications for the enhancement of current and future human-automation interaction, especially in high-risk or extreme performance environments. © 2016, Human Factors and Ergonomics Society.

  8. Development and implementation of an automated quantitative film digitizer quality control program

    NASA Astrophysics Data System (ADS)

    Fetterly, Kenneth A.; Avula, Ramesh T. V.; Hangiandreou, Nicholas J.

    1999-05-01

    A semi-automated, quantitative film digitizer quality control program that is based on the computer analysis of the image data from a single digitized test film was developed. This program includes measurements of the geometric accuracy, optical density performance, signal to noise ratio, and presampled modulation transfer function. The variability of the measurements was less than plus or minus 5%. Measurements were made on a group of two clinical and two laboratory laser film digitizers during a trial period of approximately four months. Quality control limits were established based on clinical necessity, vendor specifications and digitizer performance. During the trial period, one of the digitizers failed the performance requirements and was corrected by calibration.

  10. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered: mental workload, situation awareness, complacency, and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  11. An automation-assisted generic approach for biological sample preparation and LC-MS/MS method validation.

    PubMed

    Zhang, Jie; Wei, Shimin; Ayres, David W; Smith, Harold T; Tse, Francis L S

    2011-09-01

Although it is well known that automation can provide significant improvement in the efficiency of biological sample preparation in quantitative LC-MS/MS analysis, it has not been widely implemented in bioanalytical laboratories throughout the industry. This can be attributed to the lack of a sound strategy and practical procedures for working with robotic liquid-handling systems. Several comprehensive automation-assisted procedures for biological sample preparation and method validation were developed and qualified using two types of Hamilton Microlab liquid-handling robots. The procedures developed were generic, user-friendly and covered the majority of steps involved in routine sample preparation and method validation. Generic automation procedures were established as a practical approach to implementing automation widely in the routine bioanalysis of samples in support of drug-development programs.

  12. Automated magnification calibration in transmission electron microscopy using Fourier analysis of replica images.

    PubMed

    van der Laak, Jeroen A W M; Dijkman, Henry B P M; Pahlplatz, Martin M M

    2006-03-01

The magnification factor in transmission electron microscopy is not very precise, hampering for instance quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens, containing line or grating patterns with known spacing. In the present study, a procedure is described for automated magnification calibration using digital images of a line replica. This procedure is based on analysis of the power spectrum of Fourier transformed replica images, and is compared to interactive measurement in the same images. Images were used with magnification ranging from 1,000× to 200,000×. The automated procedure deviated from interactive measurements by 0.10% on average. Especially for catalase replicas, the coefficient of variation of automated measurement was considerably smaller (average 0.28%) than that of interactive measurement (average 3.5%). In conclusion, calibration of the magnification in digital images from transmission electron microscopy may be performed automatically, using the procedure presented here, with high precision and accuracy.
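
    A minimal sketch of the underlying idea, estimating the dominant line spacing from the peak of the Fourier power spectrum; it assumes the replica lines run roughly vertically in the image, and the function and variable names are hypothetical:

    ```python
    import numpy as np

    def line_spacing_pixels(image):
        """Dominant line period (in pixels) from the power spectrum peak."""
        spec = np.abs(np.fft.rfft2(image - image.mean())) ** 2
        profile = spec.sum(axis=0)        # energy vs horizontal spatial frequency
        k = np.argmax(profile[1:]) + 1    # skip the DC bin
        return image.shape[1] / k         # pixels per line period

    # the magnification then follows from the known replica spacing:
    # magnification = spacing_pixels * detector_pixel_size / known_replica_spacing
    ```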

  13. Associative image analysis: a method for automated quantification of 3D multi-parameter images of brain tissue

    PubMed Central

    Bjornsson, Christopher S; Lin, Gang; Al-Kofahi, Yousef; Narayanaswamy, Arunachalam; Smith, Karen L; Shain, William; Roysam, Badrinath

    2009-01-01

Brain structural complexity has confounded prior efforts to extract quantitative image-based measurements. We present a systematic ‘divide and conquer’ methodology for analyzing three-dimensional (3D) multi-parameter images of brain tissue to delineate and classify key structures, and compute quantitative associations among them. To demonstrate the method, thick (~100 μm) slices of rat brain tissue were labeled using 3–5 fluorescent signals, and imaged using spectral confocal microscopy and unmixing algorithms. Automated 3D segmentation and tracing algorithms were used to delineate cell nuclei, vasculature, and cell processes. From these segmentations, a set of 23 intrinsic and 8 associative image-based measurements was computed for each cell. These features were used to classify astrocytes, microglia, neurons, and endothelial cells. Associations among cells and between cells and vasculature were computed and represented as graphical networks to enable further analysis. The automated results were validated using a graphical interface that permits investigator inspection and corrective editing of each cell in 3D. Nuclear counting accuracy was >89%, and cell classification accuracy ranged from 81% to 92% depending on cell type. We present a software system named FARSIGHT implementing our methodology. Its output is a detailed XML file containing measurements that may be used for diverse quantitative hypothesis-driven and exploratory studies of the central nervous system. PMID:18294697

  14. Image analysis in cytology: DNA-histogramming versus cervical smear prescreening.

    PubMed

    Bengtsson, E W; Nordin, B

    1993-01-01

The visual inspection of cellular specimens and histological sections through a light microscope plays an important role in clinical medicine and biomedical research. The human visual system is very good at the recognition of various patterns but less efficient at quantitative assessment of these patterns. Some samples are prepared in great numbers, most notably the screening for cervical cancer, the so-called PAP-smears, which results in hundreds of millions of samples each year, creating a tedious mass inspection task. Numerous attempts have been made over the last 40 years to create systems that solve these two tasks, the quantitative supplement to the human visual system and the automation of mass screening. The most difficult task, the total automation, has received the greatest attention with many large-scale projects over the decades. In spite of all these efforts, no generally accepted automated prescreening device yet exists on the market. The main reason for this failure is the great pattern recognition capability needed to distinguish between cancer cells and all other kinds of objects found in the specimens: cellular clusters, debris, degenerate cells, etc. Improved algorithms, the ever-increasing processing power of computers and progress in biochemical specimen preparation techniques make it likely that eventually useful automated prescreening systems will become available. Meanwhile, much less effort has been put into the development of interactive cell image analysis systems. Still, some such systems have been developed and put into use at thousands of laboratories worldwide. In these, the human pattern recognition capability is used to select the fields and objects that are to be analysed while the computational power of the computer is used for the quantitative analysis of cellular DNA content or other relevant markers. Numerous studies have shown that the quantitative information about the distribution of cellular DNA content is of prognostic significance in many types of cancer. Several laboratories are therefore putting these techniques into routine clinical use. The more advanced systems can also study many other markers and cellular features, some known to be of clinical interest, others useful in research. The advances in computer technology are making these systems more generally available through decreasing cost, increasing computational power and improved user interfaces. We have been involved in research and development of both automated and interactive cell analysis systems during the last 20 years. Here some experiences and conclusions from this work will be presented as well as some predictions about what can be expected in the near future.

  15. Automated Analysis of Fluorescence Microscopy Images to Identify Protein-Protein Interactions

    DOE PAGES

    Venkatraman, S.; Doktycz, M. J.; Qi, H.; ...

    2006-01-01

The identification of protein interactions is important for elucidating biological networks. One obstacle in comprehensive interaction studies is the analysis of large datasets, particularly those containing images. Development of an automated system to analyze an image-based protein interaction dataset is needed. Such an analysis system is described here, to automatically extract features from fluorescence microscopy images obtained from a bacterial protein interaction assay. These features are used to relay quantitative values that aid in the automated scoring of positive interactions. Experimental observations indicate that identifying at least 50% positive cells in an image is sufficient to detect a protein interaction. Based on this criterion, the automated system achieves 100% accuracy in detecting positive interactions for a dataset of 16 images. Algorithms were implemented using MATLAB and the software developed is available on request from the authors.

  16. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified its utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.

  17. An automated system for chromosome analysis. Volume 1: Goals, system design, and performance

    NASA Technical Reports Server (NTRS)

    Castleman, K. R.; Melnyk, J. H.

    1975-01-01

The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples, and a basis for statistical analysis of quantitative chromosome measurement data, are described. The prototype was assembled, tested, and evaluated on clinical material and thoroughly documented.

  18. Quantitative diagnostic performance of myocardial perfusion SPECT with attenuation correction in women.

    PubMed

    Wolak, Arik; Slomka, Piotr J; Fish, Mathews B; Lorenzo, Santiago; Berman, Daniel S; Germano, Guido

    2008-06-01

Attenuation correction (AC) for myocardial perfusion SPECT (MPS) had not been evaluated separately in women despite specific considerations in this group because of breast photon attenuation. We aimed to evaluate the performance of AC in women by using automated quantitative analysis of MPS to avoid any bias. Consecutive female patients (134 with a low likelihood (LLk) of coronary artery disease (CAD) and 114 with coronary angiography performed within less than 3 mo of MPS) who were referred for rest-stress electrocardiography-gated 99mTc-sestamibi MPS with AC were considered. Imaging data were evaluated for contour quality control. An additional 50 LLk studies in women were used to create equivalent normal limits for studies with AC and with no correction (NC). An experienced technologist unaware of the angiography and other results performed the contour quality control. All other processing was performed in a fully automated manner. Quantitative analysis was performed with the Cedars-Sinai myocardial perfusion analysis package. All automated segmental analyses were performed with the 17-segment, 5-point American Heart Association model. Summed stress scores (SSS) of ≥3 were considered abnormal. CAD (≥70% stenosis) was present in 69 of 114 patients (60%). The normalcy rates were 93% for both NC and AC studies. The SSS for patients with CAD and without CAD for NC versus AC were 10.0 ± 9.0 (mean ± SD) versus 10.2 ± 8.5 and 1.6 ± 2.3 versus 1.8 ± 2.5, respectively; P was not significant (NS) for all comparisons of NC versus AC. The SSS for LLk patients for NC versus AC were 0.51 ± 1.0 versus 0.6 ± 1.1, respectively; P was NS. The specificity for both NC and AC was 73%. The sensitivities for NC and AC were 80% and 81%, respectively, and the accuracies for NC and AC were 77% and 78%, respectively; P was NS for both comparisons. There are no significant diagnostic differences between automated quantitative MPS analyses performed in studies processed with and without AC in women.

  19. Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.

    PubMed

    Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C

    2012-08-07

In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative ¹H NMR (qNMR) is the preferred method for the determination of the actual concentrations of compounds because the relative single-proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. This report presents an automated method for performing this task without prior knowledge of compound structures, using an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
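
    The underlying ratio calculation is simple; below is a minimal sketch assuming integrated peak areas are already available and that analyte and calibrant spectra were acquired under identical conditions (the function and variable names are hypothetical):

    ```python
    def qnmr_concentration(area_analyte, n_protons_analyte,
                           area_calibrant, n_protons_calibrant,
                           conc_calibrant_mM):
        """Analyte concentration from single-proton-normalized peak areas."""
        per_proton_analyte = area_analyte / n_protons_analyte
        per_proton_calibrant = area_calibrant / n_protons_calibrant
        return conc_calibrant_mM * per_proton_analyte / per_proton_calibrant

    # e.g., a 3-proton methyl singlet vs a 1-proton calibrant peak at 10 mM
    print(qnmr_concentration(27.6, 3, 9.5, 1, 10.0))  # ≈ 9.7 mM
    ```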

  20. "What else are you worried about?" – Integrating textual responses into quantitative social science research

    PubMed Central

    Brümmer, Martin; Schmukle, Stefan C.; Goebel, Jan; Wagner, Gert G.

    2017-01-01

    Open-ended questions have routinely been included in large-scale survey and panel studies, yet there is some perplexity about how to actually incorporate the answers to such questions into quantitative social science research. Tools developed recently in the domain of natural language processing offer a wide range of options for the automated analysis of such textual data, but their implementation has lagged behind. In this study, we demonstrate straightforward procedures that can be applied to process and analyze textual data for the purposes of quantitative social science research. Using more than 35,000 textual answers to the question “What else are you worried about?” from participants of the German Socio-economic Panel Study (SOEP), we (1) analyzed characteristics of respondents that determined whether they answered the open-ended question, (2) used the textual data to detect relevant topics that were reported by the respondents, and (3) linked the features of the respondents to the worries they reported in their textual data. The potential uses as well as the limitations of the automated analysis of textual data are discussed. PMID:28759628

  1. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of several important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  2. "What else are you worried about?" - Integrating textual responses into quantitative social science research.

    PubMed

    Rohrer, Julia M; Brümmer, Martin; Schmukle, Stefan C; Goebel, Jan; Wagner, Gert G

    2017-01-01

    Open-ended questions have routinely been included in large-scale survey and panel studies, yet there is some perplexity about how to actually incorporate the answers to such questions into quantitative social science research. Tools developed recently in the domain of natural language processing offer a wide range of options for the automated analysis of such textual data, but their implementation has lagged behind. In this study, we demonstrate straightforward procedures that can be applied to process and analyze textual data for the purposes of quantitative social science research. Using more than 35,000 textual answers to the question "What else are you worried about?" from participants of the German Socio-economic Panel Study (SOEP), we (1) analyzed characteristics of respondents that determined whether they answered the open-ended question, (2) used the textual data to detect relevant topics that were reported by the respondents, and (3) linked the features of the respondents to the worries they reported in their textual data. The potential uses as well as the limitations of the automated analysis of textual data are discussed.
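
    To make the pipeline concrete, here is a minimal topic-extraction sketch in the spirit of step (2), using TF-IDF and non-negative matrix factorization; the answers shown are invented English stand-ins (the SOEP responses are in German), and the actual study may have used different NLP tooling:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    # invented stand-ins for open-ended survey answers
    answers = [
        "worried about my pension and retirement income",
        "climate change and the environment worry me",
        "rising rents and housing costs",
        "pension cuts and poverty in old age",
        "environment pollution and climate policy",
        "finding affordable housing in the city",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(answers)

    nmf = NMF(n_components=3, random_state=0)   # number of topics is a modeling choice
    doc_topics = nmf.fit_transform(X)           # per-respondent topic weights

    terms = vectorizer.get_feature_names_out()
    for t, weights in enumerate(nmf.components_):
        top = weights.argsort()[-3:][::-1]
        print(f"topic {t}:", ", ".join(terms[i] for i in top))
    ```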

  3. Automated analysis of individual sperm cells using stain-free interferometric phase microscopy and machine learning.

    PubMed

    Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T

    2017-09-01

Currently, the delicate process of selecting sperm cells to be used for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on the quantitative phase maps acquired through use of interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps while taking into consideration both the cell 3D morphology and contents, as well as acquire features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF. © 2017 International Society for Advancement of Cytometry.

  4. Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate

    DTIC Science & Technology

    2010-05-01

Keywords: breast cancer, cell signaling, cell proliferation, histology, image analysis. [Recoverable report fragments:] …revealed by individual stains in multiplex combinations; and (3) software (FARSIGHT) for automated multispectral image analysis that (i) segments… Task 3. Develop computational algorithms for multispectral immunohistological image analysis: FARSIGHT software was developed to quantify intrinsic…

  5. An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN).

    PubMed

    Keenan, S J; Diamond, J; McCluggage, W G; Bharucha, H; Thompson, D; Bartels, P H; Hamilton, P W

    2000-11-01

The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis. A total of 230 images, digitally captured from cases previously classified by a gynaecological pathologist, included normal cervical squamous epithelium (n=30), koilocytosis (n=46), CIN 1 (n=52), CIN 2 (n=56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in the KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright 2000 John Wiley & Sons, Ltd.
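
    A minimal sketch of the triangulation features described above, computed from nuclear centroids with SciPy; the centroid array is hypothetical, and the original system was implemented in KS400 rather than Python:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def triangulation_features(centroids):
        """Mean triangle area and mean edge length of the Delaunay mesh
        over nuclear centroids (an (n, 2) array of x, y positions)."""
        tri = Delaunay(centroids)
        pts = centroids[tri.simplices]                 # (n_triangles, 3, 2)
        a, b = pts[:, 1] - pts[:, 0], pts[:, 2] - pts[:, 0]
        areas = 0.5 * np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
        edges = np.concatenate([
            np.linalg.norm(pts[:, 0] - pts[:, 1], axis=1),
            np.linalg.norm(pts[:, 1] - pts[:, 2], axis=1),
            np.linalg.norm(pts[:, 2] - pts[:, 0], axis=1),
        ])
        return areas.mean(), edges.mean(), len(tri.simplices)

    centroids = np.random.default_rng(0).uniform(0, 512, size=(200, 2))
    print(triangulation_features(centroids))
    ```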

  6. Accuracy of a remote quantitative image analysis in the whole slide images.

    PubMed

    Słodkowska, Janina; Markiewicz, Tomasz; Grala, Bartłomiej; Kozłowski, Wojciech; Papierz, Wielisław; Pleskacz, Katarzyna; Murawski, Piotr

    2011-03-30

The rationale for choosing a remote quantitative method to support a diagnostic decision requires empirical studies and knowledge of scenarios, including valid telepathology standards. Tumours of the central nervous system (CNS) are graded on the basis of morphological features and the Ki-67 labelling index (Ki-67 LI). Various methods have been applied for Ki-67 LI estimation. We recently introduced the Computerized Analysis of Medical Images (CAMI) software for automated Ki-67 LI counting in digital images. The aim of our study was to explore the accuracy and reliability of remote assessment of Ki-67 LI with the CAMI software applied to whole slide images (WSI). The WSIs, representing CNS tumours (18 meningiomas and 10 oligodendrogliomas), were stored on a server at the Warsaw University of Technology. The digital copies of entire glass slides were created automatically by the Aperio ScanScope CS with a 20× or 40× objective. Aperio's ImageScope software provided functionality for remote viewing of WSIs. The Ki-67 LI assessment was carried out within 2 of 20 selected fields of view (40× objective) representing the highest labelling areas in each WSI. The Ki-67 LI counting was performed by three methods: (1) manual reading in the light microscope (LM); (2) automated counting with the CAMI software on digital images (DI); and (3) remote quantitation on the WSIs (WSI method). The quality of the WSIs and the technical efficiency of the on-line system were analysed. A comparative statistical analysis was performed for the results obtained by the three counting methods. The preliminary analysis showed that in 18% of WSIs the Ki-67 LI results differed from those obtained by the other two counting methods when the quality of the glass slides was below the standard range. The results of our investigations indicate that remote automated Ki-67 LI analysis performed with the CAMI algorithm on whole slide images of meningiomas and oligodendrogliomas could be successfully used as an alternative to manual reading as well as to digital image quantitation with the CAMI software. In our experience, remote supervision/consultation and training are necessary for the effective use of remote quantitative analysis of WSIs.

  7. Performance Equivalence and Validation of the Soleris Automated System for Quantitative Microbial Content Testing Using Pure Suspension Cultures.

    PubMed

    Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl

    2016-09-01

Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris® automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated to CFU/sample (R² = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT, 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.
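
    For illustration, a minimal sketch of the kind of linear DT-vs-log CFU calibration such a correlation implies; all numbers below are invented and do not come from the study:

    ```python
    import numpy as np

    # invented calibration points: detection time (h) vs log10 CFU/sample
    dt_hours = np.array([18.0, 15.5, 13.2, 10.9, 8.6])
    log_cfu = np.array([2.0, 3.0, 4.0, 5.0, 6.0])

    slope, intercept = np.polyfit(dt_hours, log_cfu, 1)  # fit log CFU as a line in DT

    def estimate_log_cfu(dt):
        """Estimate microbial load from an observed detection time."""
        return slope * dt + intercept

    print(round(estimate_log_cfu(12.0), 2))
    ```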

  8. In vivo automated quantification of quality of apples during storage using optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna

    2018-06-01

Moisture content is an important feature of fruits and vegetables. As about 80% of an apple's content is water, decreasing moisture content degrades the quality of apples (Golden Delicious). The computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel model was used to perform automated classification. Our proposed method opens up the possibility of fully automated quantitative analysis, based on morphological features, for evaluating the quality of wax-coated apples during storage in vivo. Our results demonstrate that the analysis of the computational and texture features of OCT images may be a good non-destructive method for the assessment of the quality of apples.
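
    A minimal sketch of a Gaussian-kernel (RBF) SVM classifier of the sort described, with standardized features and cross-validation; the feature matrix here is randomly generated, so the labels, shapes, and scores are purely illustrative:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # stand-in for texture/computational features per OCT image: 0 = fresh, 1 = degraded
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8))
    y = rng.integers(0, 2, size=60)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean())  # mean cross-validated accuracy
    ```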

  9. Recommendations for Quantitative Analysis of Small Molecules by Matrix-assisted laser desorption ionization mass spectrometry

    PubMed Central

    Wang, Poguang; Giese, Roger W.

    2017-01-01

Matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) has been used for quantitative analysis of small molecules for many years. It is usually preceded by an LC separation step when complex samples are tested. With the development several years ago of “modern MALDI” (automation, high repetition-rate lasers, high-resolution peaks), the ease of use and performance of MALDI as a quantitative technique greatly increased. This review focuses on practical aspects of modern MALDI for quantitation of small molecules conducted in an ordinary way (no special reagents, devices or techniques for the spotting step of MALDI), and includes our ordinary, preferred methods. The review is organized as 18 recommendations with accompanying explanations, criticisms and exceptions. PMID:28118972

  10. Astronomical algorithms for automated analysis of tissue protein expression in breast cancer

    PubMed Central

    Ali, H R; Irwin, M; Morris, L; Dawson, S-J; Blows, F M; Provenzano, E; Mahler-Araujo, B; Pharoah, P D; Walton, N A; Brenton, J D; Caldas, C

    2013-01-01

Background: High-throughput evaluation of tissue biomarkers in oncology has been greatly accelerated by the widespread use of tissue microarrays (TMAs) and immunohistochemistry. Although TMAs have the potential to facilitate protein expression profiling on a scale to rival experiments of tumour transcriptomes, the bottleneck and imprecision of manually scoring TMAs has impeded progress. Methods: We report image analysis algorithms adapted from astronomy for the precise automated analysis of IHC in all subcellular compartments. The power of this technique is demonstrated using over 2000 breast tumours and comparing quantitative automated scores against manual assessment by pathologists. Results: All continuous automated scores showed good correlation with their corresponding ordinal manual scores. For oestrogen receptor (ER), the correlation was 0.82, P<0.0001, for BCL2 0.72, P<0.0001 and for HER2 0.62, P<0.0001. Automated scores showed excellent concordance with manual scores for the unsupervised assignment of cases to ‘positive’ or ‘negative’ categories with agreement rates of up to 96%. Conclusion: The adaptation of astronomical algorithms coupled with their application to large annotated study cohorts, constitutes a powerful tool for the realisation of the enormous potential of digital pathology. PMID:23329232

  11. Automated Slide Scanning and Segmentation in Fluorescently-labeled Tissues Using a Widefield High-content Analysis System.

    PubMed

    Poon, Candice C; Ebacher, Vincent; Liu, Katherine; Yong, Voon Wee; Kelly, John James Patrick

    2018-05-03

Automated slide scanning and segmentation of fluorescently-labeled tissues is the most efficient way to analyze whole slides or large tissue sections. Unfortunately, many researchers spend large amounts of time and resources developing and optimizing workflows that are only relevant to their own experiments. In this article, we describe a protocol that can be used by those with access to a widefield high-content analysis system (WHCAS) to image any slide-mounted tissue, with options for customization within pre-built modules found in the associated software. Although the WHCAS was not originally intended for slide scanning, the steps detailed in this article make it possible to acquire slide-scanning images in the WHCAS and import them into the associated software. In this example, the automated segmentation of brain tumor slides is demonstrated, but the automated segmentation of any fluorescently-labeled nuclear or cytoplasmic marker is possible. Furthermore, a variety of other quantitative software modules, including assays for protein localization/translocation, cellular proliferation/viability/apoptosis, and angiogenesis, can be run. This technique will save researchers time and effort and create an automated protocol for slide analysis.

  12. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The model was trained on a subset of the dataset and then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
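
    A minimal sketch of the train-on-a-subset, apply-to-the-rest strategy with a Random Forest regressor; the descriptor matrices are randomly generated stand-ins, and the original work used its own descriptor extraction pipeline:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    # stand-ins: image descriptors (X) and one semi-automatically measured trait (y)
    X_annotated = rng.normal(size=(200, 16))
    y_annotated = rng.normal(size=200)
    X_remaining = rng.normal(size=(1800, 16))

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X_annotated, y_annotated)           # train on the annotated subset
    predicted_trait = rf.predict(X_remaining)  # infer the trait for the rest of the dataset
    ```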

  13. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The model was trained on a subset of the dataset and then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  14. Temporal Lobe Epilepsy: Quantitative MR Volumetry in Detection of Hippocampal Atrophy

    PubMed Central

    Farid, Nikdokht; Girard, Holly M.; Kemmotsu, Nobuko; Smith, Michael E.; Magda, Sebastian W.; Lim, Wei Y.; Lee, Roland R.

    2012-01-01

    Purpose: To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). Materials and Methods: This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration–cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Results: Quantitative MR imaging–derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%–89.5%) and specificity (92.2%–94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Conclusion: Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112638/-/DC1 PMID:22723496
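
    A minimal sketch of the asymmetry-based linear classification described above; the volumes, labels, and normalization below are hypothetical illustrations, not the study's actual pipeline:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def asymmetry_index(left, right):
        """Signed hippocampal volume asymmetry, normalized to the mean volume."""
        return (left - right) / ((left + right) / 2.0)

    # invented volumes in mm^3 (left, right) and lateralization (0 = left, 1 = right TLE)
    vols = np.array([[3100.0, 3600.0], [3550.0, 3050.0], [2900.0, 3500.0],
                     [3400.0, 2800.0], [3000.0, 3450.0], [3500.0, 2950.0]])
    y = np.array([0, 1, 0, 1, 0, 1])

    X = asymmetry_index(vols[:, 0], vols[:, 1]).reshape(-1, 1)
    print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=3).mean())
    ```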

  15. Automated quantitative histology reveals vascular morphodynamics during Arabidopsis hypocotyl secondary growth.

    PubMed

    Sankar, Martial; Nieminen, Kaisa; Ragni, Laura; Xenarios, Ioannis; Hardtke, Christian S

    2014-02-11

Among various advantages, their small size makes model organisms preferred subjects of investigation. Yet, even in model systems detailed analysis of numerous developmental processes at the cellular level is severely hampered by their scale. For instance, secondary growth of Arabidopsis hypocotyls creates a radial pattern of highly specialized tissues that comprises several thousand cells starting from a few dozen. This dynamic process is difficult to follow because of its scale and because it can only be investigated invasively, precluding comprehensive understanding of the cell proliferation, differentiation, and patterning events involved. To overcome this limitation, we established an automated quantitative histology approach. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with automated cell type recognition through machine learning, we could establish a cellular resolution atlas that reveals vascular morphodynamics during secondary growth, for example equidistant phloem pole formation. DOI: http://dx.doi.org/10.7554/eLife.01567.001.

  16. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas.

    PubMed

    Alexander, Nathan S; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-08-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE.

  17. Automated quantitative histology reveals vascular morphodynamics during Arabidopsis hypocotyl secondary growth

    PubMed Central

    Sankar, Martial; Nieminen, Kaisa; Ragni, Laura; Xenarios, Ioannis; Hardtke, Christian S

    2014-01-01

Among various advantages, their small size makes model organisms preferred subjects of investigation. Yet, even in model systems detailed analysis of numerous developmental processes at the cellular level is severely hampered by their scale. For instance, secondary growth of Arabidopsis hypocotyls creates a radial pattern of highly specialized tissues that comprises several thousand cells starting from a few dozen. This dynamic process is difficult to follow because of its scale and because it can only be investigated invasively, precluding comprehensive understanding of the cell proliferation, differentiation, and patterning events involved. To overcome this limitation, we established an automated quantitative histology approach. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with automated cell type recognition through machine learning, we could establish a cellular resolution atlas that reveals vascular morphodynamics during secondary growth, for example equidistant phloem pole formation. DOI: http://dx.doi.org/10.7554/eLife.01567.001 PMID:24520159

  18. Quantitative analyses for elucidating mechanisms of cell fate commitment in the mouse blastocyst

    NASA Astrophysics Data System (ADS)

    Saiz, Néstor; Kang, Minjung; Puliafito, Alberto; Schrode, Nadine; Xenopoulos, Panagiotis; Lou, Xinghua; Di Talia, Stefano; Hadjantonakis, Anna-Katerina

    2015-03-01

    In recent years we have witnessed a shift from qualitative image analysis towards higher resolution, quantitative analyses of imaging data in developmental biology. This shift has been fueled by technological advances in both imaging and analysis software. We have recently developed a tool for accurate, semi-automated nuclear segmentation of imaging data from early mouse embryos and embryonic stem cells. We have applied this software to the study of the first lineage decisions that take place during mouse development and established analysis pipelines for both static and time-lapse imaging experiments. In this paper we summarize the conclusions from these studies to illustrate how quantitative, single-cell level analysis of imaging data can unveil biological processes that cannot be revealed by traditional qualitative studies.

  19. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment.

    PubMed

    Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja

    2016-11-01

To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (κ). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (κ = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
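
    For illustration, a minimal sketch of computing Cohen's kappa between two readers' ordinal FGT categories with scikit-learn; the ratings below are invented:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # invented four-level FGT categories (a < b < c < d) from two readers
    reader1 = ["a", "b", "b", "c", "d", "b", "c", "c"]
    reader2 = ["a", "b", "c", "c", "d", "b", "b", "c"]

    print(cohen_kappa_score(reader1, reader2))                     # unweighted kappa
    print(cohen_kappa_score(reader1, reader2, weights="linear"))   # partial credit for near-misses
    ```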

  20. Automated reagent-dispensing system for microfluidic cell biology assays.

    PubMed

    Ly, Jimmy; Masterman-Smith, Michael; Ramakrishnan, Ravichandran; Sun, Jing; Kokubun, Brent; van Dam, R Michael

    2013-12-01

    Microscale systems that enable measurements of oncological phenomena at the single-cell level have a great capacity to improve therapeutic strategies and diagnostics. Such measurements can reveal unprecedented insights into cellular heterogeneity and its implications into the progression and treatment of complicated cellular disease processes such as those found in cancer. We describe a novel fluid-delivery platform to interface with low-cost microfluidic chips containing arrays of microchambers. Using multiple pairs of needles to aspirate and dispense reagents, the platform enables automated coating of chambers, loading of cells, and treatment with growth media or other agents (e.g., drugs, fixatives, membrane permeabilizers, washes, stains, etc.). The chips can be quantitatively assayed using standard fluorescence-based immunocytochemistry, microscopy, and image analysis tools, to determine, for example, drug response based on differences in protein expression and/or activation of cellular targets on an individual-cell level. In general, automation of fluid and cell handling increases repeatability, eliminates human error, and enables increased throughput, especially for sophisticated, multistep assays such as multiparameter quantitative immunocytochemistry. We report the design of the automated platform and compare several aspects of its performance to manually-loaded microfluidic chips.

  1. Comparison of different approaches to quantitative adenovirus detection in stool specimens of hematopoietic stem cell transplant recipients.

    PubMed

    Kosulin, K; Dworzak, S; Lawitschka, A; Matthes-Leodolter, S; Lion, T

    2016-12-01

Adenoviruses almost invariably proliferate in the gastrointestinal tract prior to dissemination, and critical threshold concentrations in stool correlate with the risk of viremia. Monitoring of adenovirus loads in stool may therefore be important for timely initiation of treatment in order to prevent invasive infection. Comparison of a manual DNA extraction kit in combination with a validated in-house PCR assay with automated extraction on the NucliSENS-EasyMAG device coupled with the Adenovirus R-gene kit (bioMérieux) for quantitative adenovirus analysis in stool samples. Stool specimens spiked with adenovirus concentrations in a range from 10²-10¹¹ copies/g and 32 adenovirus-positive clinical stool specimens from pediatric stem cell transplant recipients were tested along with appropriate negative controls. Quantitative analysis of viral load in adenovirus-positive stool specimens revealed a median difference of 0.5 logs (range 0.1-2.2) between the detection systems tested and a difference of 0.3 logs (range 0.0-1.7) when the comparison was restricted to the PCR assays only. Spiking experiments showed a detection limit of 10²-10³ adenovirus copies/g stool, revealing a somewhat higher sensitivity offered by the automated extraction. The dynamic range of accurate quantitative analysis by both systems investigated was between 10³ and 10⁸ virus copies/g. The differences in quantitative analysis of adenovirus copy numbers between the systems tested were primarily attributable to the DNA extraction method used, while the qPCR assays revealed a high level of concordance. Both systems showed adequate performance for detection and monitoring of adenoviral load in stool specimens. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Web-based automation of green building rating index and life cycle cost analysis

    NASA Astrophysics Data System (ADS)

    Shahzaib Khan, Jam; Zakaria, Rozana; Aminuddin, Eeydzah; IzieAdiana Abidin, Nur; Sahamir, Shaza Rina; Ahmad, Rosli; Nafis Abas, Darul

    2018-04-01

A sudden decline in financial markets and the economic meltdown have slowed adoption and lowered investor interest in green-certified buildings because of their higher initial costs. It is therefore essential to attract investors to further development of green buildings through automated tools for construction projects. However, there is a historical dearth of work on the automation of green building rating tools, an essential gap that motivates the development of an automated computerized programming tool. This paper presents proposed research that aims to develop an integrated, web-based automated program applying a green building rating assessment tool, green technology, and life cycle cost analysis. It also aims to identify the variables of MyCrest and LCC to be integrated, developed into a framework, and then transformed into an automated program. A mixed qualitative and quantitative survey methodology is planned to carry the MyCrest-LCC integration to an automated level. In this study, a preliminary literature review enriches the understanding of the integration of Green Building Rating Tools (GBRT) with LCC. The outcome of this research paves the way for future researchers to integrate other efficient tools and parameters that contribute towards green buildings and future agendas.

  3. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.

    PubMed

    Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike

    2010-01-01

An increasingly common component of studies in synthetic and systems biology is analysis of dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images (segmentation and lineage reconstruction) to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
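
    A minimal sketch of frame-to-frame cell linking; note that the paper describes a neighborhood-based scoring method, whereas this stand-in simply matches centroids by distance with an optimal one-to-one assignment:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def link_frames(prev_centroids, next_centroids, max_dist=15.0):
        """One-to-one matching of cell centroids between consecutive frames."""
        cost = cdist(prev_centroids, next_centroids)  # pairwise distances
        rows, cols = linear_sum_assignment(cost)      # minimal total displacement
        # reject links farther than a cell plausibly moves in one frame
        return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

    prev_c = np.array([[10.0, 10.0], [40.0, 42.0]])
    next_c = np.array([[12.0, 11.0], [41.0, 45.0]])
    print(link_frames(prev_c, next_c))  # [(0, 0), (1, 1)]
    ```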

  4. Automated image analysis of placental villi and syncytial knots in histological sections.

    PubMed

    Kidron, Debora; Vainer, Ifat; Fisher, Yael; Sharony, Reuven

    2017-05-01

    Delayed villous maturation and accelerated villous maturation diagnosed in histologic sections are morphologic manifestations of pathophysiological conditions. The inter-observer agreement among pathologists in assessing these conditions is moderate at best. We investigated whether automated image analysis of placental villi and syncytial knots could improve standardization in diagnosing these conditions. Placentas of antepartum fetal death at or near term were diagnosed as normal, delayed or accelerated villous maturation. Histologic sections of 5 cases per group were photographed at ×10 magnification. Automated image analysis of villi and syncytial knots was performed, using ImageJ public domain software. Analysis of hundreds of histologic images was carried out within minutes on a personal computer, using macro commands. Compared to normal placentas, villi from delayed maturation were larger and fewer, with fewer and smaller syncytial knots. Villi from accelerated maturation were smaller. The data were further analyzed according to horizontal placental zones and groups of villous size. Normal placentas can be discriminated from placentas of delayed or accelerated villous maturation using automated image analysis. Automated image analysis of villi and syncytial knots is not equivalent to interpretation by the human eye. Each method has advantages and disadvantages in assessing the 2-dimensional histologic sections representing the complex, 3-dimensional villous tree. Image analysis of placentas provides quantitative data that might help in standardizing and grading of placentas for diagnostic and research purposes. Copyright © 2017 Elsevier Ltd. All rights reserved.
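
    The original analysis used ImageJ macros; as a rough Python equivalent, here is a minimal sketch that thresholds, cleans, and measures villus-like objects with scikit-image, assuming tissue stains darker than background in the grayscale photomicrograph (function name and parameters are hypothetical):

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    def villi_stats(gray_image, min_area=200):
        """Count villus cross-sections and report their mean area in pixels."""
        mask = gray_image < filters.threshold_otsu(gray_image)  # tissue darker than background
        mask = morphology.remove_small_objects(mask, min_size=min_area)
        labels = measure.label(mask)
        areas = np.array([r.area for r in measure.regionprops(labels)])
        count = len(areas)
        return count, (areas.mean() if count else 0.0)
    ```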

  5. A Data-Processing System for Quantitative Analysis in Speech Production. CLCS Occasional Paper No. 17.

    ERIC Educational Resources Information Center

    Chasaide, Ailbhe Ni; Davis, Eugene

    The data processing system used at Trinity College's Centre for Language and Communication Studies (Ireland) enables computer-automated collection and analysis of phonetic data and has many advantages for research on speech production. The system allows accurate handling of large quantities of data, eliminates many of the limitations of manual…

  6. The Focinator v2-0 - Graphical Interface, Four Channels, Colocalization Analysis and Cell Phase Identification.

    PubMed

    Oeck, Sebastian; Malewicz, Nathalie M; Hurst, Sebastian; Al-Refae, Klaudia; Krysztofiak, Adam; Jendrossek, Verena

    2017-07-01

    The quantitative analysis of foci plays an important role in various cell biological methods. In the fields of radiation biology and experimental oncology, the effect of ionizing radiation, chemotherapy or molecularly targeted drugs on DNA damage induction and repair is frequently performed by the analysis of protein clusters or phosphorylated proteins recruited to so called repair foci at DNA damage sites, involving for example γ-H2A.X, 53BP1 or RAD51. We recently developed "The Focinator" as a reliable and fast tool for automated quantitative and qualitative analysis of nuclei and DNA damage foci. The refined software is now even more user-friendly due to a graphical interface and further features. Thus, we included an R-script-based mode for automated image opening, file naming, progress monitoring and an error report. Consequently, the evaluation no longer required the attendance of the operator after initial parameter definition. Moreover, the Focinator v2-0 is now able to perform multi-channel analysis of four channels and evaluation of protein-protein colocalization by comparison of up to three foci channels. This enables for example the quantification of foci in cells of a specific cell cycle phase.

  7. Automated microscopy for high-content RNAi screening

    PubMed Central

    2010-01-01

    Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920

  8. Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis

    PubMed Central

    Stephen, Renu M.; Jha, Abhinav K.; Roe, Denise J.; Trouard, Theodore P.; Galons, Jean-Philippe; Kupinski, Matthew A.; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D.; Rodriguez, Jeffrey J.; Gillies, Robert J.; Stopeck, Alison T.

    2015-01-01

    Purpose: To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Methods: Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm2 at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. Results: A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2-5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm2/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2-5 cm liver lesions. Conclusion: Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. PMID:26284600

  9. Diffusion MRI with Semi-Automated Segmentation Can Serve as a Restricted Predictive Biomarker of the Therapeutic Response of Liver Metastasis.

    PubMed

    Stephen, Renu M; Jha, Abhinav K; Roe, Denise J; Trouard, Theodore P; Galons, Jean-Philippe; Kupinski, Matthew A; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D; Rodriguez, Jeffrey J; Gillies, Robert J; Stopeck, Alison T

    2015-12-01

    To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm2 at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2-5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm2/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2-5 cm liver lesions. Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered to be a restricted biomarker. Copyright © 2015 Elsevier Inc. All rights reserved.
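
    The reported cut-off translates into a simple decision rule; the arrays below are synthetic placeholders (Python), not the study's data:

        # Classify lesions as predicted non-responders when baseline ADC
        # falls below the ROC-derived threshold of 1.33 um2/ms.
        import numpy as np

        adc = np.array([1.05, 1.20, 1.41, 1.62, 1.28, 1.75])      # um2/ms
        non_responder = np.array([1, 1, 0, 0, 1, 0], dtype=bool)  # synthetic truth

        pred = adc < 1.33
        sens = (pred & non_responder).sum() / non_responder.sum()
        spec = (~pred & ~non_responder).sum() / (~non_responder).sum()
        print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")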

  10. Automated whole-slide analysis of multiplex-brightfield IHC images for cancer cells and carcinoma-associated fibroblasts

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg

    2017-03-01

    Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm reported the distance to the nearest neighbor between the populations of tumor cells and activated stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R2 = 0.98) with ground-truth counts. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R2 = 0.93, whereas the algorithm reproduced the pathologists' areas with R2 = 0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show that an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
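
    The nearest-neighbor readout is straightforward to sketch once cell centroids are available; the coordinates below are random stand-ins (Python), not output of the published pipeline:

        # Distance from each CK+ tumor cell to its nearest FAP+ fibroblast.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        tumor_xy = rng.random((500, 2)) * 1000   # CK+ centroids, microns
        caf_xy = rng.random((80, 2)) * 1000      # FAP+ centroids, microns

        dist, _ = cKDTree(caf_xy).query(tumor_xy)
        print(f"median tumor-to-CAF distance: {np.median(dist):.1f} um")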

  11. Experimental design and data-analysis in label-free quantitative LC/MS proteomics: A tutorial with MSqRob.

    PubMed

    Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven

    2018-01-16

    Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation for the analysis of quantitative proteomics data. We present key statistical concepts that help researchers to design proteomics experiments, and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts on GitHub for running analyses in batch mode, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to wider adoption of advanced peptide-based models, resulting in higher-quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users who aim to automate MSqRob on cluster environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Groping for quantitative digital 3-D image analysis: an approach to quantitative fluorescence in situ hybridization in thick tissue sections of prostate carcinoma.

    PubMed

    Rodenacker, K; Aubele, M; Hutzler, P; Adiga, P S

    1997-01-01

    In molecular pathology, numerical chromosome aberrations have been found to be decisive for the prognosis of malignancy in tumours. The existence of such aberrations can be detected by interphase fluorescence in situ hybridization (FISH). The gain or loss of certain base sequences in the deoxyribonucleic acid (DNA) can be estimated by counting the number of FISH signals per cell nucleus. The quantitative evaluation of such events is a necessary condition for prospective use in diagnostic pathology. To avoid occlusions of signals, the cell nucleus has to be analyzed in three dimensions. Confocal laser scanning microscopy is the means to obtain series of optical thin sections from fluorescence-stained or -marked material to fulfill the conditions mentioned above. A graphical user interface (GUI) to a software package for the display, inspection, counting and (semi-)automatic analysis of 3-D images by pathologists is outlined, including the underlying methods developed for 3-D image interaction and segmentation. The preparative methods are briefly described. Main emphasis is given to the methodical questions of computer-aided analysis of large 3-D image data sets for pathologists. Several automated analysis steps can be performed for segmentation and subsequent quantification. However, tumour material is, in contrast to isolated or cultured cells, difficult material even for visual inspection. At present, fully automated digital image analysis of 3-D data is not in sight; a semi-automatic segmentation method is therefore presented here.

  13. Proteomics wants cRacker: automated standardized data analysis of LC-MS derived proteomic data.

    PubMed

    Zauber, Henrik; Schulze, Waltraud X

    2012-11-02

    The large-scale analysis of thousands of proteins under various experimental conditions or in mutant lines has gained more and more importance in hypothesis-driven scientific research and systems biology in recent years. Quantitative analysis by large-scale proteomics using modern mass spectrometry usually results in long lists of peptide ion intensities. The main interest for most researchers, however, is to draw conclusions at the protein level. Postprocessing and combining the peptide intensities of a proteomic data set require expert knowledge, and the often repetitive and standardized manual calculations can be time-consuming. The analysis of complex samples can result in very large data sets (lists with several thousand to 100,000 entries of different peptides) that cannot easily be analyzed using standard spreadsheet programs. To improve the speed and consistency of the data analysis of LC-MS derived proteomic data, we developed cRacker. cRacker is an R-based program for automated downstream proteomic data analysis, including data normalization strategies for metabolic labeling and label-free quantitation. In addition, cRacker includes basic statistical analysis, such as clustering of data, or ANOVA and t-tests for comparisons between treatments. Results are presented in editable graphic formats and in list files.
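
    The kind of repetitive calculation cRacker automates can be sketched in Python with pandas; the column names, normalization and median summarization below are illustrative, not the package's actual strategy set:

        # Toy label-free workflow: normalize samples, then summarize
        # peptide ion intensities to the protein level.
        import pandas as pd

        df = pd.DataFrame({
            "protein":  ["P1", "P1", "P2", "P2"],
            "sample_a": [1.0e6, 2.0e6, 4.0e5, 6.0e5],
            "sample_b": [1.5e6, 2.4e6, 3.0e5, 5.0e5],
        })

        # Scale each sample to equal total intensity (label-free normalization).
        totals = df[["sample_a", "sample_b"]].sum()
        df[["sample_a", "sample_b"]] /= totals / totals.mean()

        # Protein-level estimate: median of peptide intensities per protein.
        print(df.groupby("protein").median())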

  14. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is also provided as an attachment. [Figure: the main window of the program during dynamic analysis of the foot thermal image.] © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Extracting microtubule networks from superresolution single-molecule localization microscopy data

    PubMed Central

    Zhang, Zhen; Nishimura, Yukako; Kanchanawong, Pakorn

    2017-01-01

    Microtubule filaments form ubiquitous networks that specify spatial organization in cells. However, quantitative analysis of microtubule networks is hampered by their complex architecture, limiting insights into the interplay between their organization and cellular functions. Although superresolution microscopy has greatly facilitated high-resolution imaging of microtubule filaments, extraction of complete filament networks from such data sets is challenging. Here we describe a computational tool for automated retrieval of microtubule filaments from single-molecule-localization–based superresolution microscopy images. We present a user-friendly, graphically interfaced implementation and a quantitative analysis of microtubule network architecture phenotypes in fibroblasts. PMID:27852898

  16. Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate

    DTIC Science & Technology

    2008-05-01

    Only fragments of the DTIC report form and abstract survive. Subject terms: breast cancer, cell signaling, cell proliferation, histology, image analysis. The recoverable abstract excerpts describe DAPI-stained nuclei that are often not counted during subsequent image analysis, the study of two analytes in the same tumor section, and staining for the analytes p-ERK, p-AKT and Ki67 and for epithelial cytokeratin (CK) so that tumor cells may be identified during subsequent automated image analysis.

  17. Automatic structured grid generation using Gridgen (some restrictions apply)

    NASA Technical Reports Server (NTRS)

    Chawner, John R.; Steinbrenner, John P.

    1995-01-01

    The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.

  18. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  19. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics.

    PubMed

    Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-09-01

    Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.

  20. Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images.

    PubMed

    Rangel-Fonseca, Piero; Gómez-Vieyra, Armando; Malacara-Hernández, Daniel; Wilson, Mario C; Williams, David R; Rossi, Ethan A

    2013-12-01

    Adaptive optics (AO) imaging methods allow the histological characteristics of retinal cell mosaics, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. The high-resolution images obtained with ophthalmic AO imaging devices are rich with information that is difficult and/or tedious to quantify using manual methods. Thus, robust, automated analysis tools that can provide reproducible quantitative information about the cellular mosaics under examination are required. Automated algorithms have been developed to detect the position of individual photoreceptor cells; however, most of these methods are not well suited for characterizing the RPE mosaic. We have developed an algorithm for RPE cell segmentation and show its performance here on simulated and real fluorescence AO images of the RPE mosaic. Algorithm performance was compared to manual cell identification and yielded better than 91% correspondence. This method can be used to segment RPE cells for morphometric analysis of the RPE mosaic and speed the analysis of both healthy and diseased RPE mosaics.

  1. Quantitative determination of low-Z elements in single atmospheric particles on boron substrates by automated scanning electron microscopy-energy-dispersive X-ray spectrometry.

    PubMed

    Choël, Marie; Deboudt, Karine; Osán, János; Flament, Pascal; Van Grieken, René

    2005-09-01

    Atmospheric aerosols consist of a complex heterogeneous mixture of particles. Single-particle analysis techniques are known to provide unique information on the size-resolved chemical composition of aerosols. A scanning electron microscope (SEM) combined with a thin-window energy-dispersive X-ray (EDX) detector enables the morphological and elemental analysis of single particles down to 0.1 μm with a detection limit of 1-10 wt %, low-Z elements included. To obtain data statistically representative of the air masses sampled, a computer-controlled procedure can be implemented to run hundreds of single-particle analyses (typically 1000-2000) automatically in a relatively short period of time (generally 4-8 h, depending on the setup and on the particle loading). However, automated particle analysis by SEM-EDX raises two practical challenges: the accuracy of the particle recognition and the reliability of the quantitative analysis, especially for micrometer-sized particles with low-atomic-number contents. Since low-Z analysis is hampered by the use of traditional polycarbonate membranes, an alternative choice of substrate is a prerequisite. In this work, boron is being studied as a promising material for particle microanalysis. As EDX is generally said to probe a volume of approximately 1 μm3, geometry effects arise from the finite size of microparticles. These particle geometry effects must be corrected by means of a robust concentration calculation procedure; conventional quantitative methods developed for bulk samples generate elemental concentrations considerably in error when applied to microparticles. A new methodology for particle microanalysis, combining the use of boron as the substrate material with a reverse Monte Carlo quantitative program, was tested on standard particles ranging from 0.25 to 10 μm. We demonstrate that the quantitative determination of low-Z elements in microparticles is achievable and that the automatic data processing described here yields markedly more accurate results than conventional methods.

  2. Automated extraction for the analysis of 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid (THCCOOH) in urine using a six-head probe Hamilton Microlab 2200 system and gas chromatography-mass spectrometry.

    PubMed

    Whitter, P D; Cary, P L; Leaton, J I; Johnson, J E

    1999-01-01

    An automated extraction scheme for the analysis of 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid using the Hamilton Microlab 2200, which was modified for gravity-flow solid-phase extraction, has been evaluated. The Hamilton was fitted with a six-head probe, a modular valve positioner, and a peristaltic pump. The automated method significantly increased sample throughput, improved assay consistency, and reduced the time spent performing the extraction. Extraction recovery for the automated method was > 90%. The limit of detection, limit of quantitation, and upper limit of linearity were equivalent to the manual method: 1.5, 3.0, and 300 ng/mL, respectively. Precision at the 15-ng/mL cut-off was as follows: mean = 14.4, standard deviation = 0.5, coefficient of variation = 3.5%. Comparison of 38 patient samples, extracted by the manual and automated extraction methods, demonstrated the following correlation statistics: r = .991, slope 1.029, and y-intercept -2.895. Carryover was < 0.3% at 1000 ng/mL. Aliquoting/extraction time for the automated method (48 urine samples) was 50 min, and the manual procedure required approximately 2.5 h. The automated aliquoting/extraction method on the Hamilton Microlab 2200 and its use in forensic applications are reviewed.

  3. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.

    PubMed

    Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P

    2017-10-01

    In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.
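
    For readers unfamiliar with the shape of such models, a deliberately tiny convolutional classifier in Python/PyTorch is sketched below; the paper's networks are far deeper and trained on large annotated sets, so this is an illustration of the idea only:

        # Minimal CNN for classifying small image patches (toy example).
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),      # e.g. feature-present vs. background patch
        )

        patches = torch.randn(8, 3, 64, 64)   # batch of RGB patches
        print(model(patches).shape)           # -> torch.Size([8, 2])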

  4. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires the use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and the R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, time-saving yet accurate and powerful tool for analyzing large RNA-seq datasets, and that it will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  5. Ranking Quantitative Resistance to Septoria tritici Blotch in Elite Wheat Cultivars Using Automated Image Analysis.

    PubMed

    Karisto, Petteri; Hund, Andreas; Yu, Kang; Anderegg, Jonas; Walter, Achim; Mascher, Fabio; McDonald, Bruce A; Mikaberidze, Alexey

    2018-05-01

    Quantitative resistance is likely to be more durable than major gene resistance for controlling Septoria tritici blotch (STB) on wheat. Earlier studies hypothesized that resistance affecting the degree of host damage, as measured by the percentage of leaf area covered by STB lesions, is distinct from resistance that affects pathogen reproduction, as measured by the density of pycnidia produced within lesions. We tested this hypothesis using a collection of 335 elite European winter wheat cultivars that was naturally infected by a diverse population of Zymoseptoria tritici in a replicated field experiment. We used automated image analysis of 21,420 scanned wheat leaves to obtain quantitative measures of conditional STB intensity that were precise, objective, and reproducible. These measures allowed us to explicitly separate resistance affecting host damage from resistance affecting pathogen reproduction, enabling us to confirm that these resistance traits are largely independent. The cultivar rankings based on host damage were different from the rankings based on pathogen reproduction, indicating that the two forms of resistance should be considered separately in breeding programs aiming to increase STB resistance. We hypothesize that these different forms of resistance are under separate genetic control, enabling them to be recombined to form new cultivars that are highly resistant to STB. We found a significant correlation between rankings based on automated image analysis and rankings based on traditional visual scoring, suggesting that image analysis can complement conventional measurements of STB resistance, based largely on host damage, while enabling a much more precise measure of pathogen reproduction. We showed that measures of pathogen reproduction early in the growing season were the best predictors of host damage late in the growing season, illustrating the importance of breeding for resistance that reduces pathogen reproduction in order to minimize yield losses caused by STB. These data can already be used by breeding programs to choose wheat cultivars that are broadly resistant to naturally diverse Z. tritici populations according to the different classes of resistance.
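
    The two conditional readouts described above reduce to simple ratios once leaf, lesion and pycnidia detections exist; a sketch in Python with hypothetical masks and counts:

        # Host damage vs. pathogen reproduction from segmentation outputs.
        import numpy as np

        leaf = np.zeros((400, 1200), dtype=bool); leaf[50:350, :] = True
        lesion = np.zeros_like(leaf); lesion[100:200, 300:700] = True
        n_pycnidia = 180                    # detected pycnidia within lesions

        placl = 100 * lesion.sum() / leaf.sum()     # % leaf area covered
        density = n_pycnidia / lesion.sum()         # pycnidia per lesion pixel
        print(f"host damage: {placl:.1f}%; reproduction: {density:.5f}/px")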

  6. Quantitative Indicators for Behaviour Drift Detection from Home Automation Data.

    PubMed

    Veronese, Fabio; Masciadri, Andrea; Comai, Sara; Matteucci, Matteo; Salice, Fabio

    2017-01-01

    The diffusion of smart homes provides an opportunity to implement elderly monitoring, extending seniors' independence and avoiding unnecessary assistance costs. Information concerning the inhabitant's behaviour is contained in home automation data and can be extracted by means of quantitative indicators. The application of such an approach shows that it can evidence behaviour changes.
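
    One plausible indicator of this kind, sketched in Python: daily event counts compared against a rolling baseline (the paper's indicators differ in detail; the data here are simulated):

        # Flag behaviour drift as a sustained deviation from a 30-day baseline.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        days = pd.date_range("2017-01-01", periods=120, freq="D")
        counts = pd.Series(rng.poisson(40, 120), index=days)  # events/day
        counts.iloc[90:] -= 15        # simulated change in daily routine

        base = counts.rolling(30).mean().shift(1)
        z = (counts - base) / counts.rolling(30).std().shift(1)
        print("first flagged day:", z[z < -2].index.min())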

  7. Quantitative assessment of neurite outgrowth in human embryonic stem cell derived hN2 cells using automated high-content image analysis

    EPA Science Inventory

    Throughout development neurons undergo a number of morphological changes including neurite outgrowth from the cell body. Exposure to neurotoxic chemicals that interfere with this process may result in permanent deficits in nervous system function. Traditionally, rodent primary ne...

  8. Quantitative assessment of neurite outgrowth in human embryonic stem-cell derived neurons using automated high-content image analysis

    EPA Science Inventory

    During development neurons undergo a number of morphological changes including neurite outgrowth from the cell body. Exposure to neurotoxicants that interfere with this process may result in permanent deficits in nervous system function. While many studies have used rodent primary...

  9. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation

    PubMed Central

    2013-01-01

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab-based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image-based screening. PMID:23938087

  10. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation.

    PubMed

    Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid

    2013-08-09

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab-based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image-based screening.
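
    The four algorithmic steps can be sketched on a 2D synthetic image with Python's scikit-image (CellSegm itself is MATLAB and works on 3D stacks; all parameters below are illustrative):

        # (i) smoothing, (ii) ridge enhancement, (iii) marker-controlled
        # watershed, (iv) a size-based stand-in for candidate classification.
        import numpy as np
        from skimage import draw, filters, measure, segmentation

        img = np.zeros((200, 200))            # synthetic "surface staining"
        for cy, cx in [(70, 70), (130, 140)]:
            rr, cc = draw.circle_perimeter(cy, cx, 40, shape=img.shape)
            img[rr, cc] = 1.0

        smooth = filters.gaussian(img, sigma=3)
        ridges = filters.sato(smooth, black_ridges=False)

        markers = measure.label(ridges < 0.05 * ridges.max())
        cells = segmentation.watershed(ridges, markers)

        kept = [r.label for r in measure.regionprops(cells)
                if 1000 < r.area < 8000]
        print(f"{len(kept)} cell candidates retained")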

  11. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy.

    PubMed

    Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine

    2015-10-27

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data (1). This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.

  12. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy

    PubMed Central

    Daskalakis, Constantine

    2015-01-01

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data (1). This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique. PMID:26554744

  13. Quantitative IR microscopy and spectromics open the way to 3D digital pathology.

    PubMed

    Bobroff, Vladimir; Chen, Hsiang-Hsin; Delugin, Maylis; Javerzat, Sophie; Petibois, Cyril

    2017-04-01

    Currently, only mass-spectrometry (MS) microscopy provides a quantitative analysis of the chemical contents of tissue samples in 3D. Here, the reconstruction of a 3D quantitative chemical image of a biological tissue by FTIR spectro-microscopy is reported. An automated curve-fitting method is developed to extract all intense absorption bands constituting the IR spectra. This innovation benefits from three critical features: (1) the correction of raw IR spectra to make them quantitatively comparable; (2) an automated and iterative data treatment that transforms the IR-absorption spectrum into an IR-band spectrum; and (3) the reconstruction of a 3D IR-band matrix (x, y, z for voxel position and a 4th dimension holding all IR-band parameters). Spectromics, a new method for exploiting spectral data for tissue metadata reconstruction, is proposed to further translate the related chemical information in 3D as biochemical and anatomical tissue parameters. An example is given with oxidative stress distribution and the reconstruction of blood vessels in tissues. The requirements of IR microscopy instrumentation for offering 3D digital histology as a routine clinical technology are briefly discussed. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
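
    The core of such band extraction is iterated non-linear fitting; a reduced Python sketch with two synthetic Gaussian bands (the published procedure fits many bands with additional constraints):

        # Fit an IR spectrum as a sum of Gaussian absorption bands.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, a, mu, s):
            return a * np.exp(-((x - mu) ** 2) / (2 * s ** 2))

        def two_bands(x, a1, m1, s1, a2, m2, s2):
            return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

        wn = np.linspace(1500, 1800, 300)                 # wavenumber, 1/cm
        spec = two_bands(wn, 0.9, 1655, 15, 0.4, 1740, 12)
        spec += np.random.default_rng(1).normal(0, 0.01, wn.size)

        p0 = [1.0, 1650, 10, 0.5, 1735, 10]               # initial band guesses
        popt, _ = curve_fit(two_bands, wn, spec, p0=p0)
        print("fitted band centres:", popt[1], popt[4])   # ~1655 and ~1740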

  14. Quantitative Determination of NTA and Other Chelating Agents in Detergents by Potentiometric Titration with Copper Ion Selective Electrode.

    PubMed

    Ito, Sana; Morita, Masaki

    2016-01-01

    Quantitative analysis of nitrilotriacetate (NTA) in detergents by titration with Cu2+ solution using a copper ion selective electrode was achieved. This method tolerates a wide range of pH values and detergent ingredients. In addition to NTA, other chelating agents having relatively lower stability constants toward Cu2+ were also quantified with sufficient accuracy by this analytical method for model detergent formulations. The titration process was automated with commercially available automatic titrating systems.
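
    Automatic endpoint detection in such titrations typically reduces to locating the inflection of the potential-volume curve; a sketch with synthetic data in Python:

        # Endpoint = maximum of dE/dV along the titration curve.
        import numpy as np

        vol = np.linspace(0, 10, 200)                  # mL of Cu2+ titrant
        emf = 150 + 120 * np.tanh((vol - 6.2) / 0.15)  # mV, synthetic curve

        dEdV = np.gradient(emf, vol)
        v_end = vol[np.argmax(dEdV)]
        print(f"endpoint at {v_end:.2f} mL")
        # For a 1:1 Cu-NTA complex, moles NTA = v_end * titrant molarity.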

  15. Automated sample preparation using membrane microtiter extraction for bioanalytical mass spectrometry.

    PubMed

    Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H

    1997-01-01

    The development and application of membrane solid-phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision, a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than manual solvent extraction or single-cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid-phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.

  16. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours by computing accumulated disparities in the disc and cup regions from stereo fundus image pairs has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation among computer-generated and manually segmented cup to disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detection of ONH structural damage for early detection of glaucoma.

  17. An Automated Solar Synoptic Analysis Software System

    NASA Astrophysics Data System (ADS)

    Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.

    2012-12-01

    We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, the three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. As an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygrams and magnetograms as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images of the global H-alpha network and SDO AIA 193 are used for morphological identification, and SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are presented using available outputs. ASSA will be deployed at the Korean Space Weather Center and serve its customers in operational status by the end of 2012.

  18. sFIDA automation yields sub-femtomolar limit of detection for Aβ aggregates in body fluids.

    PubMed

    Herrmann, Yvonne; Kulawik, Andreas; Kühbach, Katja; Hülsemann, Maren; Peters, Luriano; Bujnicki, Tuyen; Kravchenko, Kateryna; Linnartz, Christina; Willbold, Johannes; Zafiu, Christian; Bannach, Oliver; Willbold, Dieter

    2017-03-01

    Alzheimer's disease (AD) is a neurodegenerative disorder for which therapeutic options are still non-existent and diagnostic options limited. Reliable biomarker-based AD diagnostics are of utmost importance for the development and application of therapeutic substances. We have previously introduced a platform technology designated 'sFIDA' for the quantitation of amyloid β peptide (Aβ) aggregates as an AD biomarker. In this study we implemented the sFIDA assay on an automated platform to enhance the robustness and performance of the assay. In sFIDA (surface-based fluorescence intensity distribution analysis), Aβ species are immobilized by a capture antibody on a glass surface. Aβ aggregates are then multiply loaded with fluorescent antibodies and quantitated by high-resolution fluorescence microscopy. As a model system for Aβ aggregates, we used Aβ-conjugated silica nanoparticles (Aβ-SiNaPs) diluted in PBS buffer and cerebrospinal fluid, respectively. Automation of the assay was realized on a liquid handling system in combination with a microplate washer. The automation of the sFIDA assay results in improved intra-assay precision, linearity and sensitivity in comparison to the manual application, and achieved a limit of detection in the sub-femtomolar range. Automation improves the precision and sensitivity of the sFIDA assay, which is a prerequisite for high-throughput measurements and future application of the technology in routine AD diagnostics. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  19. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis.

    PubMed

    Gilhodes, Jean-Claude; Julé, Yvon; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz

    2017-01-01

    Current literature on pulmonary fibrosis induced in animal models highlights the need for accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis, we developed automated software for histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Fibrosis was expressed by two indices: the mean pulmonary tissue density and the frequency of high pulmonary tissue density. We showed that the tissue density indices gave access to a very accurate and reliable quantification of morphological changes induced by BLM, even at the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p < 0.0001) was found between the automated analysis and the standard evaluation methods above. This correlation establishes automated analysis as a novel end-point measure of BLM-induced lung fibrosis in mice, which will be very valuable for future preclinical drug explorations.

  20. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis

    PubMed Central

    Gilhodes, Jean-Claude; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz

    2017-01-01

    Current literature on pulmonary fibrosis induced in animal models highlights the need for accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis, we developed automated software for histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Fibrosis was expressed by two indices: the mean pulmonary tissue density and the frequency of high pulmonary tissue density. We showed that the tissue density indices gave access to a very accurate and reliable quantification of morphological changes induced by BLM, even at the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p < 0.0001) was found between the automated analysis and the standard evaluation methods above. This correlation establishes automated analysis as a novel end-point measure of BLM-induced lung fibrosis in mice, which will be very valuable for future preclinical drug explorations. PMID:28107543
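
    The two tissue-density indices lend themselves to a compact sketch; the tile size and the "high density" cut-off below are hypothetical (Python):

        # Mean tissue density and high-density frequency over micro-tiles.
        import numpy as np

        tissue = np.random.default_rng(2).random((4096, 4096)) < 0.3  # toy mask
        t = 128                                        # tile edge, pixels
        h, w = tissue.shape[0] // t, tissue.shape[1] // t
        tiles = tissue[:h * t, :w * t].reshape(h, t, w, t)

        density = tiles.mean(axis=(1, 3))              # tissue fraction per tile
        print("mean tissue density:", density.mean())
        print("high-density frequency:", (density > 0.5).mean())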

  1. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions.

    PubMed

    Shenoy, Shailesh M

    2016-07-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software's support, training and documentation. One must also consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute one's methods to the community using the software, and the potential for achieving automation to improve productivity.

  2. Semi-automated 96-well liquid-liquid extraction for quantitation of drugs in biological fluids.

    PubMed

    Zhang, N; Hoffman, K L; Li, W; Rossi, D T

    2000-02-01

    A semi-automated liquid-liquid extraction (LLE) technique for biological-fluid sample preparation was introduced for the quantitation of four drugs in rat plasma. All liquid transfers during sample preparation were automated using a Tomtec Quadra 96 Model 320 liquid handling robot, which processed up to 96 samples in parallel. The samples were either in 96-deep-well plate or tube-rack format. One plate of samples can be prepared in approximately 1.5 h, and the 96-well plate is directly compatible with the autosampler of an LC/MS system. The selection of organic solvents and recoveries are discussed. In addition, the precision, relative error, linearity and quantitation of the semi-automated LLE method are estimated for four example drugs using LC/MS/MS with a multiple reaction monitoring (MRM) approach. The applicability of this method and future directions are evaluated.

  3. Tool development in threat assessment: syntax regularization and correlative analysis. Final report Task I and Task II, November 21, 1977-May 21, 1978. [Linguistic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miron, M.S.; Christopher, C.; Hirshfield, S.

    1978-05-01

    Psycholinguistics provides crisis managers in nuclear threat incidents with a quantitative methodology which can aid in the determination of threat credibility, authorship identification and perpetrator apprehension. The objective of this contract is to improve and enhance present psycholinguistic software systems by means of newly-developed, computer-automated techniques which significantly extend the technology of automated content and stylistic analysis of nuclear threat. In accordance with this overall objective, the first two contract Tasks have been completed and are reported on in this document. The first Task specifies the development of software support for the purpose of syntax regularization of vocabulary to root form. The second calls for the exploration and development of alternative approaches to correlative analysis of vocabulary usage.

  4. Oufti: An integrated software package for high-accuracy, high-throughput quantitative microscopy analysis

    PubMed Central

    Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine

    2016-01-01

    With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today’s single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction- and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279

  5. Automated fluorescent microscopic image analysis of PTBP1 expression in glioma

    PubMed Central

    Becker, Aline; Elder, Brad; Puduvalli, Vinay; Winter, Jessica; Gurcan, Metin

    2017-01-01

    Multiplexed immunofluorescent testing has not entered diagnostic neuropathology due to several technical barriers, among which is autofluorescence. This study presents the implementation of a methodology capable of overcoming the visual challenges of fluorescent microscopy for diagnostic neuropathology by using automated digital image analysis, with the long-term goal of providing unbiased quantitative analyses of multiplexed biomarkers for solid tissue neuropathology. In this study, we validated PTBP1, a putative biomarker for glioma, and tested the extent to which immunofluorescent microscopy combined with automated and unbiased image analysis would permit the use of PTBP1 as a biomarker to distinguish diagnostically challenging surgical biopsies. As a paradigm, we utilized second resections from patients diagnosed either with reactive brain changes (pseudoprogression) or with recurrent glioblastoma (true progression). Our image analysis workflow was capable of removing background autofluorescence and permitted quantification of DAPI-PTBP1 positive cells, PTBP1-positive nuclei, and the mean intensity value of the PTBP1 signal in cells. Traditional pathological interpretation was unable to distinguish between groups due to unacceptably high discordance rates amongst expert neuropathologists. Our data demonstrated that recurrent glioblastoma showed more DAPI-PTBP1 positive cells and a higher mean intensity value of PTBP1 signal compared to resections from second surgeries that showed only reactive gliosis. Our work demonstrates the potential of utilizing automated image analysis to overcome the challenges of implementing fluorescent microscopy in diagnostic neuropathology. PMID:28282372
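
    A minimal sketch of the kind of per-nucleus quantification described above (not the authors' pipeline): nuclei are segmented on the DAPI channel and counted as PTBP1-positive when their mean PTBP1 intensity exceeds a threshold. Both thresholds are hypothetical inputs.

      import numpy as np
      from scipy import ndimage

      def ptbp1_positive_nuclei(dapi, ptbp1, dapi_thresh, ptbp1_thresh):
          # Label connected nuclei on the thresholded DAPI channel.
          labels, n = ndimage.label(dapi > dapi_thresh)
          # Mean PTBP1 intensity within each labeled nucleus.
          means = np.asarray(ndimage.mean(ptbp1, labels, index=range(1, n + 1)))
          positive = means > ptbp1_thresh
          mean_signal = float(means[positive].mean()) if positive.any() else 0.0
          return int(positive.sum()), mean_signal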

  6. Quantitative mass spectrometry methods for pharmaceutical analysis

    PubMed Central

    Loos, Glenn; Van Schepdael, Ann

    2016-01-01

    Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance ionization efficiencies using modified source geometries and improved ion guidance are provided. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With the focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods providing automated multi-residue analysis are generally the aim, (less sensitive) miniaturized set-ups have great potential owing to their suitability for in-field use. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982

  7. Building quantitative, three-dimensional atlases of gene expression and morphology at cellular resolution.

    PubMed

    Knowles, David W; Biggin, Mark D

    2013-01-01

    Animals comprise dynamic three-dimensional arrays of cells that express gene products in intricate spatial and temporal patterns that determine cellular differentiation and morphogenesis. A rigorous understanding of these developmental processes requires automated methods that quantitatively record and analyze complex morphologies and their associated patterns of gene expression at cellular resolution. Here we summarize light microscopy-based approaches to establish permanent, quantitative datasets (atlases) that record this information. We focus on experiments that capture data for whole embryos or large areas of tissue in three dimensions, often at multiple time points. We compare and contrast the advantages and limitations of different methods and highlight some of the discoveries made. We emphasize the need for interdisciplinary collaborations and integrated experimental pipelines that link sample preparation, image acquisition, image analysis, database design, visualization, and quantitative analysis. Copyright © 2013 Wiley Periodicals, Inc.

  8. Spotsizer: High-throughput quantitative analysis of microbial growth.

    PubMed

    Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg

    2016-10-01

    Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.
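
    In the same spirit as the colony-measurement step described above (a sketch, not Spotsizer's actual code), a grayscale plate image with bright colonies can be thresholded and labeled to obtain per-colony sizes:

      from skimage import filters, measure

      def colony_areas(plate_image):
          # Global Otsu threshold separates bright colonies from the background.
          binary = plate_image > filters.threshold_otsu(plate_image)
          # Each connected component is treated as one colony.
          labels = measure.label(binary)
          return [region.area for region in measure.regionprops(labels)]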

  9. MetaFluxNet: the management of metabolic reaction information and quantitative metabolic flux analysis.

    PubMed

    Lee, Dong-Yup; Yun, Hongsoek; Park, Sunwon; Lee, Sang Yup

    2003-11-01

    MetaFluxNet is a program package for managing information on metabolic reaction networks and for quantitatively analyzing metabolic fluxes in an interactive and customized way. It allows users to interpret and examine metabolic behavior in response to genetic and/or environmental modifications. As a result, quantitative in silico simulations of metabolic pathways can be carried out to understand the metabolic status and to design metabolic engineering strategies. The main features of the program include a well-developed model construction environment, a user-friendly interface for metabolic flux analysis (MFA), comparative MFA of strains having different genotypes under various environmental conditions, and automated pathway layout creation. The software is available at http://mbel.kaist.ac.kr/ and a manual for MetaFluxNet is available as a PDF file.
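
    At its core, the flux analysis such a tool performs rests on the steady-state balance S·v = 0, where S is the stoichiometric matrix and v the flux vector; with some fluxes measured, the rest can be estimated by least squares. A toy sketch (the matrix and measurement are made-up values):

      import numpy as np

      # Toy stoichiometric matrix (metabolites x reactions) and one measured flux.
      S = np.array([[1.0, -1.0,  0.0],
                    [0.0,  1.0, -1.0]])
      measured = {0: 10.0}   # flux of reaction 0, from experiment

      unknown = [j for j in range(S.shape[1]) if j not in measured]
      # Move measured contributions to the right-hand side: S_u @ v_u = -S_m @ v_m
      b = -S[:, list(measured)] @ np.array(list(measured.values()))
      v_unknown, *_ = np.linalg.lstsq(S[:, unknown], b, rcond=None)
      print(dict(zip(unknown, v_unknown)))   # both remaining fluxes come out as 10.0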

  10. Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.

    PubMed

    Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R

    2012-06-01

    The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
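
    For reference, the body-weight standardized uptake value averaged over a segmented region reduces to a simple normalization (a sketch; decay correction and unit conversions are deliberately glossed over):

      import numpy as np

      def region_mean_suv(pet_kbq_per_ml, region_mask, injected_dose_kbq, body_weight_g):
          # SUV = tissue activity concentration / (injected dose / body weight)
          mean_activity = np.mean(pet_kbq_per_ml[region_mask])
          return mean_activity / (injected_dose_kbq / body_weight_g)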

  11. GLS-Finder: An Automated Data-Mining System for Fast Profiling Glucosinolates and its Application in Brassica Vegetables

    USDA-ARS?s Scientific Manuscript database

    A rapid computer-aided program for profiling glucosinolates, "GLS-Finder", was developed. GLS-Finder is a Matlab script-based expert system capable of qualitative and semi-quantitative analysis of glucosinolates in samples using data generated by ultra-high performance liquid chromatograph...

  12. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
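
    Dice's coefficient, the agreement metric quoted above, measures the overlap between two binary detections as 2|A∩B|/(|A|+|B|); a one-function sketch:

      import numpy as np

      def dice(a, b):
          # a, b: boolean masks (or cone-detection maps) of identical shape.
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())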

  13. Autoreject: Automated artifact rejection for MEG and EEG data.

    PubMed

    Jas, Mainak; Engemann, Denis A; Bekhti, Yousra; Raimondo, Federico; Gramfort, Alexandre

    2017-10-01

    We present an automated algorithm for unified rejection and repair of bad trials in magnetoencephalography (MEG) and electroencephalography (EEG) signals. Our method capitalizes on cross-validation in conjunction with a robust evaluation metric to estimate the optimal peak-to-peak threshold, a quantity commonly used for identifying bad trials in M/EEG. This approach is then extended to a more sophisticated algorithm which estimates this threshold for each sensor, yielding trial-wise bad sensors. Depending on the number of bad sensors, the trial is then repaired by interpolation or excluded from subsequent analysis. All steps of the algorithm are fully automated, thus lending itself to the name Autoreject. In order to assess the practical significance of the algorithm, we conducted extensive validation and comparisons with state-of-the-art methods on four public datasets containing MEG and EEG recordings from more than 200 subjects. The comparisons include purely qualitative efforts as well as quantitative benchmarking against human-supervised and semi-automated preprocessing pipelines. The algorithm allowed us to automate the preprocessing of MEG data from the Human Connectome Project (HCP) up to the computation of the evoked responses. The automated nature of our method minimizes the burden of human inspection, hence supporting the scalability and reliability demanded by data analysis in modern neuroscience. Copyright © 2017 Elsevier Inc. All rights reserved.
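
    A bare-bones illustration of the peak-to-peak criterion the algorithm tunes (here with a fixed, user-supplied threshold rather than the cross-validated one):

      import numpy as np

      def reject_trials(epochs, threshold):
          # epochs: array of shape (n_trials, n_channels, n_times).
          # A trial is kept only if no channel's peak-to-peak amplitude
          # exceeds the threshold.
          ptp = epochs.max(axis=2) - epochs.min(axis=2)
          keep = ptp.max(axis=1) <= threshold
          return epochs[keep]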

  14. Automated cell disruption is a reliable and effective method of isolating RNA from fresh snap-frozen normal and malignant oral mucosa samples.

    PubMed

    Van der Vorst, Sébastien; Dekairelle, Anne-France; Irenge, Léonid; Hamoir, Marc; Robert, Annie; Gala, Jean-Luc

    2009-01-01

    This study compared automated vs. manual tissue grinding in terms of RNA yield obtained from oral mucosa biopsies. A total of 20 patients undergoing uvulectomy for sleep-related disorders and 10 patients undergoing biopsy for head and neck squamous cell carcinoma were enrolled in the study. Samples were collected, snap-frozen in liquid nitrogen, and divided into two parts of similar weight. Sample grinding was performed on one sample from each pair, either manually or using an automated cell disruptor. The performance and efficacy of each homogenization approach were compared in terms of total RNA yield (spectrophotometry, fluorometry), mRNA quantity [densitometry of specific TP53 amplicons and TP53 quantitative reverse-transcribed real-time PCR (qRT-PCR)], and mRNA quality (functional analysis of separated alleles in yeast). Although spectrophotometry and fluorometry results were comparable for both homogenization methods, TP53 expression values obtained by amplicon densitometry and qRT-PCR were significantly and consistently better after automated homogenization (p<0.005) for both uvula and tumor samples. Results of functional analysis of separated alleles in yeast were also better with the automated technique for tumor samples. Automated tissue homogenization appears to be a versatile, quick, and reliable method of cell disruption and is especially useful in the case of small malignant samples, which show unreliable results when processed by manual homogenization.

  15. Quantitative Medical Image Analysis for Clinical Development of Therapeutics

    NASA Astrophysics Data System (ADS)

    Analoui, Mostafa

    There has been significant progress in the development of therapeutics for the prevention and management of several disease areas in recent years, leading to increased average life expectancy, as well as quality of life, globally. However, due to the complexity of addressing a number of medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validation of the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing biomarkers being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of reviewer bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast-enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.

  16. Dexterity: A MATLAB-based analysis software suite for processing and visualizing data from tasks that measure arm or forelimb function.

    PubMed

    Butensky, Samuel D; Sloan, Andrew P; Meyers, Eric; Carmel, Jason B

    2017-07-15

    Hand function is critical for independence, and neurological injury often impairs dexterity. To measure hand function in people or forelimb function in animals, sensors are employed to quantify manipulation. These sensors make assessment easier and more quantitative and allow automation of these tasks. While automated tasks improve objectivity and throughput, they also produce large amounts of data that can be burdensome to analyze. We created software called Dexterity that simplifies data analysis of automated reaching tasks. Dexterity is MATLAB software that enables quick analysis of data from forelimb tasks. Through a graphical user interface, files are loaded and data are identified and analyzed. These data can be annotated or graphed directly. Analysis is saved, and the graph and corresponding data can be exported. For additional analysis, Dexterity provides access to custom scripts created by other users. To determine the utility of Dexterity, we performed a study to evaluate the effects of task difficulty on the degree of impairment after injury. Dexterity analyzed two months of data and allowed new users to annotate the experiment, visualize results, and save and export data easily. Previously, analysis of such tasks required custom data analysis and expertise with analysis software. Dexterity made the tools required to analyze, visualize and annotate data easy to use by investigators without data science experience. Dexterity increases the accessibility of automated tasks that measure dexterity by making the analysis of large datasets intuitive, robust, and efficient. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Automation of fluorescent differential display with digital readout.

    PubMed

    Meade, Jonathan D; Cho, Yong-Jig; Fisher, Jeffrey S; Walden, Jamie C; Guo, Zhen; Liang, Peng

    2006-01-01

    Since its invention in 1992, differential display (DD) has become the most commonly used technique for identifying differentially expressed genes because of its many advantages over competing technologies such as DNA microarrays, serial analysis of gene expression (SAGE), and subtractive hybridization. Despite the great impact of the method on biomedical research, there has been a lack of automation of DD technology to increase its throughput and accuracy for systematic gene expression analysis. Most previous DD work has taken a "shot-gun" approach of identifying one gene at a time, with a limited number of polymerase chain reaction (PCR) reactions set up manually, giving DD a low-tech and low-throughput image. We have optimized the DD process with a new platform that incorporates fluorescent digital readout, automated liquid handling, and large-format gels capable of running entire 96-well plates. The resulting streamlined fluorescent DD (FDD) technology offers unprecedented accuracy, sensitivity, and throughput for comprehensive and quantitative analysis of gene expression. These major improvements will allow researchers to find differentially expressed genes of interest, both known and novel, quickly and easily.

  18. Automated Software Analysis of Fetal Movement Recorded during a Pregnant Woman's Sleep at Home.

    PubMed

    Nishihara, Kyoko; Ohki, Noboru; Kamata, Hideo; Ryo, Eiji; Horiuchi, Shigeko

    2015-01-01

    Fetal movement is an important biological index of fetal well-being. Since 2008, we have been developing an original capacitive acceleration sensor and device that a pregnant woman can easily use to record fetal movement by herself at home during sleep. In this study, we report a newly developed automated software system for analyzing recorded fetal movement. This study will introduce the system and compare its results to those of a manual analysis of the same fetal movement signals (Experiment I). We will also demonstrate an appropriate way to use the system (Experiment II). In Experiment I, fetal movement data reported previously for six pregnant women at 28-38 gestational weeks were used. We evaluated the agreement of the manual and automated analyses for the same 10-sec epochs using prevalence-adjusted bias-adjusted kappa (PABAK) including quantitative indicators for prevalence and bias. The mean PABAK value was 0.83, which can be considered almost perfect. In Experiment II, twelve pregnant women at 24-36 gestational weeks recorded fetal movement at night once every four weeks. Overall, mean fetal movement counts per hour during maternal sleep significantly decreased along with gestational weeks, though individual differences in fetal development were noted. This newly developed automated analysis system can provide important data throughout late pregnancy.
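
    The PABAK statistic used above is simple to compute: with observed agreement p_o over N paired epochs, PABAK = 2*p_o - 1. A sketch with made-up epoch labels:

      def pabak(ratings_a, ratings_b):
          # ratings_a, ratings_b: per-epoch labels from the two analyses.
          agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
          p_o = agree / len(ratings_a)
          return 2 * p_o - 1

      # e.g. 11 agreements out of 12 epochs -> PABAK of about 0.83
      print(pabak([1,0,1,1,0,1,1,1,0,1,1,1], [1,0,1,1,0,1,1,1,0,1,1,0]))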

  19. Automated Software Analysis of Fetal Movement Recorded during a Pregnant Woman’s Sleep at Home

    PubMed Central

    Nishihara, Kyoko; Ohki, Noboru; Kamata, Hideo; Ryo, Eiji; Horiuchi, Shigeko

    2015-01-01

    Fetal movement is an important biological index of fetal well-being. Since 2008, we have been developing an original capacitive acceleration sensor and device that a pregnant woman can easily use to record fetal movement by herself at home during sleep. In this study, we report a newly developed automated software system for analyzing recorded fetal movement. This study will introduce the system and compare its results to those of a manual analysis of the same fetal movement signals (Experiment I). We will also demonstrate an appropriate way to use the system (Experiment II). In Experiment I, fetal movement data reported previously for six pregnant women at 28-38 gestational weeks were used. We evaluated the agreement of the manual and automated analyses for the same 10-sec epochs using prevalence-adjusted bias-adjusted kappa (PABAK) including quantitative indicators for prevalence and bias. The mean PABAK value was 0.83, which can be considered almost perfect. In Experiment II, twelve pregnant women at 24-36 gestational weeks recorded fetal movement at night once every four weeks. Overall, mean fetal movement counts per hour during maternal sleep significantly decreased along with gestational weeks, though individual differences in fetal development were noted. This newly developed automated analysis system can provide important data throughout late pregnancy. PMID:26083422

  20. A high-throughput urinalysis of abused drugs based on a SPE-LC-MS/MS method coupled with an in-house developed post-analysis data treatment system.

    PubMed

    Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen

    2006-10-16

    A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single-point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve appropriate data from the analytical data files, compare results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports that flag all positive and negative results batch-wise for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and data-transfer errors, and is suitable for high-throughput laboratories operating batch-wise.

  1. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect, the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step towards a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.

  2. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    PubMed

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
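
    The quality-gating step can be pictured as a standard supervised classification problem; a schematic sketch with toy random features standing in for the paper's vessel-map derived feature set:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.random((800, 3))                 # toy 3-feature vectors, one per image
      y = rng.integers(0, 2, 800)              # toy labels: 1 = inadequate image
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # train the support vector machine
      print("held-out accuracy:", clf.score(X_te, y_te))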

  3. A sensitive and rapid assay for 4-aminophenol in paracetamol drug and tablet formulation, by flow injection analysis with spectrophotometric detection.

    PubMed

    Bloomfield, M S

    2002-12-06

    4-Aminophenol (4AP) is the primary degradation product of paracetamol, which is limited to a low level (50 ppm or 0.005% w/w) in the drug substance by the European, United States, British and German Pharmacopoeias, employing a manual colourimetric limit test. The 4AP limit is widened to 1000 ppm or 0.1% w/w in the tablet product monographs, which quote the use of a less sensitive automated HPLC method. The lower drug substance specification limit is applied to our products (50 ppm, equivalent to 25 μg 4AP in a tablet containing 500 mg paracetamol), and the pharmacopoeial HPLC assay was not suitable at this low level due to matrix interference. For routine analysis a rapid, automated assay was required. This paper presents a highly sensitive, precise and automated method employing the technique of Flow Injection (FI) analysis to quantitatively assay low levels of this degradant. A solution of the drug substance, or an extract of the tablets, containing 4AP and paracetamol is injected into a solvent carrier stream and merged on-line with alkaline sodium nitroprusside reagent to form a specific blue derivative, which is detected spectrophotometrically at 710 nm. Standard HPLC equipment is used throughout. The procedure is fully quantitative and has been optimised for sensitivity and robustness using a multivariate experimental design (multi-level 'Central Composite' response surface) model. The method has been fully validated and is linear down to 0.01 μg ml(-1). The approach should be applicable to a range of paracetamol products.

  4. A comparative study of quantitative immunohistochemistry and quantum dot immunohistochemistry for mutation carrier identification in Lynch syndrome.

    PubMed

    Barrow, Emma; Evans, D Gareth; McMahon, Ray; Hill, James; Byers, Richard

    2011-03-01

    Lynch Syndrome is caused by mutations in DNA mismatch repair (MMR) genes. Mutation carrier identification is facilitated by immunohistochemical detection of the MMR proteins MLH1 and MSH2 in tumour tissue and is desirable as colonoscopic screening reduces mortality. However, protein detection by conventional immunohistochemistry (IHC) is subjective, and quantitative techniques are required. Quantum dots (QDs) are novel fluorescent labels that enable quantitative multiplex staining. This study compared their use with quantitative 3,3'-diaminobenzidine (DAB) IHC for the diagnosis of Lynch Syndrome. Tumour sections from 36 mutation carriers and six controls were obtained. These were stained with DAB on an automated platform using antibodies against MLH1 and MSH2. Multiplex QD immunofluorescent staining of the sections was performed using antibodies against MLH1, MSH2 and smooth muscle actin (SMA). Multispectral analysis of the slides was performed. The staining intensity of DAB and QDs was measured in multiple colonic crypts, and the mean intensity scores were calculated. Receiver operating characteristic (ROC) curves of staining performance for the identification of mutation carriers were evaluated. For quantitative DAB IHC, the area under the MLH1 ROC curve was 0.872 (95% CI 0.763 to 0.981), and the area under the MSH2 ROC curve was 0.832 (95% CI 0.704 to 0.960). For quantitative QD IHC, the area under the MLH1 ROC curve was 0.812 (95% CI 0.681 to 0.943), and the area under the MSH2 ROC curve was 0.598 (95% CI 0.418 to 0.777). Despite the advantage of QD staining in enabling several markers to be measured simultaneously, it is of lower utility than DAB IHC for the identification of MMR mutation carriers. Automated DAB IHC staining and quantitative slide analysis may enable high-throughput IHC.
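
    ROC analysis of this kind ranks a continuous staining score against carrier status; a toy sketch (values are hypothetical, and scores are negated because loss of MMR staining marks carriers):

      from sklearn.metrics import roc_auc_score

      carrier = [1, 1, 1, 1, 0, 0, 0, 0]                    # 1 = mutation carrier
      intensity = [0.2, 0.3, 0.1, 0.4, 0.8, 0.7, 0.9, 0.6]  # mean staining score
      auc = roc_auc_score(carrier, [-x for x in intensity])
      print("AUC:", auc)   # 1.0 for this perfectly separated toy data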

  5. Automated analysis of timber access road alternatives.

    Treesearch

    Doyle Burke

    1974-01-01

    The evaluation of timber access road alternatives is one of the primary tasks in timber harvest planning and design. During the planning stages, it is also one of the most difficult to accomplish quantitatively because a basis for comparison is related to such values as grade, length, horizontal and vertical curvature, and volumes of excavation and embankment. Within...

  6. An Analytical System Designed to Measure Multiple Malodorous Compounds Related to Kraft Mill Activities.

    ERIC Educational Resources Information Center

    Mulik, J. D.; And Others

    Reported upon in this research study is the development of two automated chromatographs equipped with flame photometric detectors for the qualitative and quantitative analysis of both low- and high-molecular weight sulfur compounds in kraft mill effluents. In addition the study sought to determine the relationship between total gaseous sulfur and…

  7. Sulfonium Ion Derivatization, Isobaric Stable Isotope Labeling and Data Dependent CID- and ETD-MS/MS for Enhanced Phosphopeptide Quantitation, Identification and Phosphorylation Site Characterization

    PubMed Central

    Lu, Yali; Zhou, Xiao; Stemmer, Paul M.; Reid, Gavin E.

    2014-01-01

    An amine-specific peptide derivatization strategy involving the use of novel isobaric stable isotope encoded ‘fixed charge’ sulfonium ion reagents, coupled with an analysis strategy employing capillary HPLC, ESI-MS, and automated data dependent ion trap CID-MS/MS, -MS3, and/or ETD-MS/MS, has been developed for the improved quantitative analysis of protein phosphorylation and for identification and characterization of the site(s) of modification. Derivatization of 50 synthetic phosphopeptides with S,S′-dimethylthiobutanoylhydroxysuccinimide ester iodide (DMBNHS), followed by analysis using capillary HPLC-ESI-MS, yielded an average 2.5-fold increase in ionization efficiencies and a significant increase in the presence and/or abundance of higher charge state precursor ions compared to the non-derivatized phosphopeptides. Notably, 44% of the phosphopeptides (22 of 50) in their underivatized states yielded precursor ions whose maximum charge states corresponded to +2, while only 8% (4 of 50) remained at this maximum charge state following DMBNHS derivatization. Quantitative analysis was achieved by measuring the abundances of the diagnostic product ions corresponding to the neutral losses of ‘light’ (S(CH3)2) and ‘heavy’ (S(CD3)2) dimethylsulfide, exclusively formed upon CID-MS/MS of the isobaric stable isotope labeled forms of the DMBNHS-derivatized phosphopeptides. Under these conditions, the phosphate group stayed intact. Enhanced phosphopeptide sequence identification and phosphorylation site characterization were achieved for a greater number of peptides via automated data-dependent CID-MS3 or ETD-MS/MS analysis, owing to the formation of the higher charge state precursor ions. Importantly, improved sequence coverage was observed using ETD-MS/MS following introduction of the sulfonium ion fixed charge, with no detrimental effects on ETD fragmentation efficiency. PMID:21952753

  8. Designing Domain-Specific HUMS Architectures: An Automated Approach

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Agarwal, Neha; Kumar, Pramod; Sundaram, Parthiban

    2004-01-01

    The HUMS automation system automates the design of HUMS architectures. The automated design process involves selection of solutions from a large space of designs as well as pure synthesis of designs. Hence the whole objective is to efficiently search for or synthesize designs or parts of designs in the database and to integrate them to form the entire system design. The automation system adopts two approaches to produce the designs: (a) a bottom-up approach and (b) a top-down approach. Both approaches are endowed with a suite of qualitative and quantitative techniques that enable a) the selection of matching component instances, b) the determination of design parameters, c) the evaluation of candidate designs at component level and at system level, d) the performance of cost-benefit analyses, e) the performance of trade-off analyses, etc. In short, the automation system attempts to capitalize on the knowledge developed from years of experience in engineering, system design and operation of HUMS systems in order to economically produce the most optimal and domain-specific designs.

  9. A high-throughput screening system for barley/powdery mildew interactions based on automated analysis of light micrographs.

    PubMed

    Ihlow, Alexander; Schweizer, Patrick; Seiffert, Udo

    2008-01-23

    To find candidate genes that potentially influence the susceptibility or resistance of crop plants to powdery mildew fungi, an assay system based on transient-induced gene silencing (TIGS) as well as transient over-expression in single epidermal cells of barley has been developed. However, this system relies on quantitative microscopic analysis of the barley/powdery mildew interaction and will only become a high-throughput tool of phenomics upon automation of the most time-consuming steps. We have developed a high-throughput screening system based on a motorized microscope which evaluates the specimens fully automatically. A large-scale double-blind verification of the system showed an excellent agreement of manual and automated analysis and proved the system to work dependably. Furthermore, in a series of bombardment experiments an RNAi construct targeting the Mlo gene was included, which is expected to phenocopy resistance mediated by recessive loss-of-function alleles such as mlo5. In most cases, the automated analysis system recorded a shift towards resistance upon RNAi of Mlo, thus providing proof of concept for its usefulness in detecting gene-target effects. Besides saving labor and enabling a screening of thousands of candidate genes, this system offers continuous operation of expensive laboratory equipment and provides a less subjective analysis as well as a complete and enduring documentation of the experimental raw data in terms of digital images. In general, it proves the concept of enabling available microscope hardware to handle challenging screening tasks fully automatically.

  10. Flexible automated approach for quantitative liquid handling of complex biological samples.

    PubMed

    Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H

    2007-11-01

    A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.

  11. The surveillance state of behavioral automation

    PubMed Central

    Schaefer, Andreas T; Claridge-Chang, Adam

    2012-01-01

    Genetics’ demand for increased throughput is driving automatization of behavior analysis far beyond experimental workhorses like circadian monitors and the operant conditioning box. However, the new automation is not just faster: it is also allowing new kinds of experiments, many of which erase the boundaries of the traditional neuroscience disciplines (psychology, ethology and physiology) while producing insight into problems that were otherwise opaque. Ironically, a central theme of current automatization is to improve observation of animals in increasingly naturalistic environments. This is not just a return to 19th century priorities: the new observational methods provide unprecedented quantitation of actions and ever-closer integration with experimentation. PMID:22119142

  12. Automated quantitative muscle biopsy analysis system

    NASA Technical Reports Server (NTRS)

    Castleman, Kenneth R. (Inventor)

    1980-01-01

    An automated system to aid the diagnosis of neuromuscular diseases by producing fiber size histograms utilizing histochemically stained muscle biopsy tissue. Televised images of the microscopic fibers are processed electronically by a multi-microprocessor computer, which isolates, measures, and classifies the fibers and displays the fiber size distribution. The architecture of the multi-microprocessor computer, which is iterated to any required degree of complexity, features a series of individual microprocessors P_n, each receiving data from a shared memory M_(n-1) and outputting processed data to a separate shared memory M_(n+1) under control of a program stored in dedicated memory M_n.

  13. Automated solid-phase extraction of herbicides from water for gas chromatographic-mass spectrometric analysis

    USGS Publications Warehouse

    Meyer, M.T.; Mills, M.S.; Thurman, E.M.

    1993-01-01

    An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites, from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing the flow-rate decreases their retention. Standard curve r² values of 0.998-1.000 for each compound were consistently obtained, and a quantitation level of 0.05 μg/l was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.

  14. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background: Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection per pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods: Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results: We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion: The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  15. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection per pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  16. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video-based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693

  17. Automated Detection of Electroencephalography Artifacts in Human, Rodent and Canine Subjects using Machine Learning.

    PubMed

    Levitt, Joshua; Nitenson, Adam; Koyama, Suguru; Heijmans, Lonne; Curry, James; Ross, Jason T; Kamerling, Steven; Saab, Carl Y

    2018-06-23

    Electroencephalography (EEG) invariably contains extra-cranial artifacts that are commonly dealt with based on qualitative and subjective criteria. Failure to account for EEG artifacts compromises data interpretation. We have developed a quantitative and automated support vector machine (SVM)-based algorithm to accurately classify artifactual EEG epochs in awake rodent, canine and human subjects. An embodiment of this method also enables the determination of 'eyes open/closed' states in human subjects. The levels of SVM accuracy for artifact classification in humans, Sprague Dawley rats and beagle dogs were 94.17%, 83.68%, and 85.37%, respectively, whereas 'eyes open/closed' states in humans were labeled with 88.60% accuracy. Each of these results was significantly higher than chance. In comparison with existing methods: approaches dependent on Independent Component Analysis have not been tested in non-human subjects and require full EEG montages, whereas this method needs only single channels. We conclude that our EEG artifact detection algorithm provides a valid and practical solution to a common problem in the quantitative analysis and assessment of EEG in pre-clinical research settings across evolutionary spectra. Copyright © 2018. Published by Elsevier B.V.

  18. Automated classification and quantitative analysis of arterial and venous vessels in fundus images

    NASA Astrophysics Data System (ADS)

    Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng

    2018-02-01

    It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. 50 color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis upon A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).
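
    The segmentation front end described here follows a common recipe; a compact sketch under the assumption of a single grayscale channel with dark vessels (the structuring-element radius is an illustrative choice):

      from skimage import filters, morphology

      def segment_and_skeletonize(gray):
          # Bottom-hat (black top-hat) enhances dark, thin vessels.
          enhanced = morphology.black_tophat(gray, morphology.disk(7))
          # Global threshold turns the enhanced map into a vessel mask.
          mask = enhanced > filters.threshold_otsu(enhanced)
          # One-pixel-wide centerlines for subsequent tracking.
          return mask, morphology.skeletonize(mask)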

  19. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging.

    PubMed

    Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina

    2016-04-01

    To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.

  20. Automated Selection of Hotspots (ASH): enhanced automated segmentation and adaptive step finding for Ki67 hotspot detection in adrenal cortical cancer.

    PubMed

    Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P

    2014-11-25

    In the prognosis and treatment of adrenal cortical carcinoma (ACC), selection of the most proliferatively active areas (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 labelling index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e., levels of Ki67 expression within a given ACC, lack of uniformity and reproducibility in the method of quantifying the Ki67 LI may confound its accurate assessment. We have implemented an open-source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of the Ki67 LI. ASH uses the NanoZoomer Digital Pathology Image (NDPI) splitter to convert NDPI-format digital slides scanned on the Hamamatsu instrument into conventional tiff or jpeg images for the automated segmentation and adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by functionality from the open-source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole-slide ranking. We have thus implemented an open-source tool for automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community with easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH, which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.
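
    A minimal sketch of the hotspot-ranking idea, not the ASH code itself: slide a fixed window across nuclei masks and rank windows by their Ki67 labelling index. The mask contents, window size, and step are illustrative assumptions.

    ```python
    # Illustrative hotspot ranking: score each window by Ki67 LI (%) and
    # report the hottest windows, as in ASH's ranked output concept.
    import numpy as np

    rng = np.random.default_rng(1)
    pos = rng.random((200, 200)) < 0.05    # hypothetical Ki67-positive nuclei mask
    total = rng.random((200, 200)) < 0.30  # hypothetical all-nuclei mask
    win, step = 50, 25

    scores = []
    for y in range(0, 200 - win + 1, step):
        for x in range(0, 200 - win + 1, step):
            n_pos = pos[y:y + win, x:x + win].sum()
            n_all = total[y:y + win, x:x + win].sum()
            li = 100.0 * n_pos / max(n_all, 1)   # Ki67 labelling index (%)
            scores.append((li, (y, x)))

    # Ranked set of hotspot candidates, hottest first.
    for li, origin in sorted(scores, reverse=True)[:3]:
        print(f"window at {origin}: LI = {li:.1f}%")
    ```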

  1. Practical considerations of image analysis and quantification of signal transduction IHC staining.

    PubMed

    Grunkin, Michael; Raundahl, Jakob; Foged, Niels T

    2011-01-01

    The dramatic increase in computer processing power, in combination with the availability of high-quality digital cameras during the last 10 years, has fertilized the ground for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole-slide imaging in both research and routine practice, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure with numerous technical pitfalls that lead to intra- and interlaboratory variability in its outcome. The resulting variations in staining intensity and disruption of original morphology are an extra challenge for image analysis software, which should therefore preferably be dedicated to the detection and quantification of histomorphometrical end points.

  2. Immunohistochemical Expression of Matrix Metalloproteinase-7 in Human Colorectal Adenomas Using Specified Automated Cellular Image Analysis System: A Clinicopathological Study

    PubMed Central

    Qasim, Ban J.; Ali, Hussam H.; Hussein, Alaa G.

    2013-01-01

    Background/Aim: To evaluate the immunohistochemical expression of matrix metalloproteinase-7 (MMP-7) in colorectal adenomas, and to correlate this expression with different clinicopathological parameters. Patients and Methods: The study was retrospectively designed. Thirty-three paraffin blocks from patients with colorectal adenoma and 20 samples of non-tumorous colonic tissue taken as a control group were included in the study. MMP-7 expression was assessed by immunohistochemistry. The scoring of immunohistochemical staining was conducted using a specified automated cellular image analysis system (Digimizer). Results: The frequency of positive immunohistochemical expression of MMP-7 was significantly higher in the adenoma group than in the control group (45.45% versus 10%; P < 0.001). Strong MMP-7 staining was seen mainly in adenoma cases (30.30%) in comparison with controls (0%); the difference was significant (P < 0.001). The three digital parameters of MMP-7 immunohistochemical expression, area (A), number of objects (N), and intensity (I), were significantly higher in adenoma than in control tissue. Mean A and I of MMP-7 showed a significant correlation with large adenoma size (≥1 cm) (P < 0.05), and a significant positive correlation of the three digital parameters (A, N, and I) of MMP-7 expression with villous configuration and severe dysplasia in colorectal adenoma was identified (P < 0.05). Conclusion: MMP-7 plays an important role in the growth and malignant conversion of colorectal adenomas, as it is more likely to be expressed in advanced colorectal adenomatous polyps with large size, severe dysplasia, and villous histology. The use of an automated cellular image analysis system (Digimizer) to quantify immunohistochemical staining yields more consistent assay results, converts a semi-quantitative assay into a truly quantitative one, and improves assay objectivity and reproducibility. PMID:23319034
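
    The three digital parameters reported above can be approximated from a DAB-stained image by color deconvolution, as in the hedged sketch below; the input file, positivity threshold, and channel handling are assumptions rather than the Digimizer implementation.

    ```python
    # A minimal sketch, under stated assumptions, of extracting area (A),
    # number of objects (N), and intensity (I) from an IHC image.
    import numpy as np
    from skimage import io, color, measure

    rgb = io.imread("mmp7_ihc.png")[..., :3]   # hypothetical IHC image
    dab = color.rgb2hed(rgb)[..., 2]           # DAB channel via color deconvolution

    mask = dab > 0.05                          # assumed positivity threshold
    labels = measure.label(mask)

    A = int(mask.sum())                        # stained area in pixels
    N = int(labels.max())                      # number of stained objects
    I = float(dab[mask].mean()) if A else 0.0  # mean staining intensity
    print(f"A = {A} px, N = {N}, I = {I:.3f}")
    ```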

  3. An Energy-Based Three-Dimensional Segmentation Approach for the Quantitative Interpretation of Electron Tomograms

    PubMed Central

    Bartesaghi, Alberto; Sapiro, Guillermo; Subramaniam, Sriram

    2006-01-01

    Electron tomography allows for the determination of the three-dimensional structures of cells and tissues at resolutions significantly higher than is possible with optical microscopy. Electron tomograms contain, in principle, vast amounts of information on the locations and architectures of large numbers of subcellular assemblies and organelles. The development of reliable quantitative approaches for the analysis of features in tomograms is an important problem, and a challenging prospect due to the low signal-to-noise ratios that are inherent to biological electron microscopic images. This is, in part, a consequence of the tremendous complexity of biological specimens. We report on a new method for the automated segmentation of HIV particles and selected cellular compartments in electron tomograms recorded from fixed, plastic-embedded sections derived from HIV-infected human macrophages. Individual features in the tomogram are segmented using a novel robust algorithm that finds their boundaries as global minimal surfaces in a metric space defined by image features. The optimization is carried out in a transformed spherical domain centered on an interior point of the particle of interest, providing a proper setting for fast and accurate minimization of the segmentation energy. This method provides tools for the semi-automated detection and statistical evaluation of HIV particles at different stages of assembly in the cells and presents opportunities for correlation with biochemical markers of HIV infection. The segmentation algorithm developed here forms the basis of the automated analysis of electron tomograms and will be especially useful given the rapid increases in the rate of data acquisition. It could also enable studies of much larger data sets, such as those which might be obtained from the tomographic analysis of HIV-infected cells from studies of large populations. PMID:16190467

  4. An Automated Method for High-Throughput Screening of Arabidopsis Rosette Growth in Multi-Well Plates and Its Validation in Stress Conditions.

    PubMed

    De Diego, Nuria; Fürst, Tomáš; Humplík, Jan F; Ugena, Lydia; Podlešáková, Kateřina; Spíchal, Lukáš

    2017-01-01

    High-throughput plant phenotyping platforms provide new possibilities for automated, fast scoring of several plant growth and development traits, followed over time using non-invasive sensors. Using Arabidopsis as a model offers important advantages for high-throughput screening, with the opportunity to extrapolate the results obtained to other crops of commercial interest. In this study we describe the development of a highly reproducible high-throughput Arabidopsis in vitro bioassay established on our OloPhen platform, suitable for analysis of rosette growth in multi-well plates. The method was successfully validated through a multivariate analysis of Arabidopsis rosette growth at different salt concentrations and its interaction with varying nutritional composition of the growth medium. Several traits, such as change in rosette area, relative growth rate, survival rate, and homogeneity of the population, are scored using fully automated RGB imaging and subsequent image analysis. The assay can be used for fast screening of the biological activity of chemical libraries and the phenotypes of transgenic or recombinant inbred lines, or to search for potential quantitative trait loci. It is especially valuable for selecting genotypes or growth conditions that improve plant stress tolerance.
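
    A minimal sketch of the image-analysis step, under stated assumptions: rosette area is taken as the count of green pixels per well image, and relative growth rate follows from the log-ratio of areas between two imaging days. The greenness criterion and file names are illustrative, not the OloPhen pipeline.

    ```python
    # Hedged sketch of scoring rosette growth from top-view RGB images.
    import numpy as np
    from skimage import io

    def rosette_area(path):
        rgb = io.imread(path)[..., :3].astype(float)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        green = (g > 1.1 * r) & (g > 1.1 * b)   # simple greenness criterion
        return green.sum()                       # rosette area in pixels

    a0 = rosette_area("well_day3.png")           # hypothetical day-3 image
    a1 = rosette_area("well_day7.png")           # hypothetical day-7 image

    # Relative growth rate assuming exponential growth between time points.
    rgr = (np.log(a1) - np.log(a0)) / (7 - 3)
    print(f"RGR = {rgr:.3f} per day")
    ```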

  5. [Establishment of Automation System for Detection of Alcohol in Blood].

    PubMed

    Tian, L L; Shen, Lei; Xue, J F; Liu, M M; Liang, L J

    2017-02-01

    To establish an automation system for the detection of alcohol content in blood. The determination was performed by an automated extraction workstation coupled with headspace gas chromatography (HS-GC). Blood collection under negative pressure, the sealing time of the headspace vial, and the sample needle were checked and optimized during setup of the automation system, and automatic sampling was compared with manual sampling. The quantitative data obtained by the automated extraction HS-GC workstation for alcohol were stable: the relative differences between two parallel samples were less than 5%, and the automated extraction was superior to manual extraction. A good linear relationship was obtained over the alcohol concentration range of 0.1-3.0 mg/mL (r ≥ 0.999) with good repeatability. The method is simple and quick, with a more standardized experimental process and more accurate data. It eliminates experimenter error and has good repeatability, and it can be applied to the qualitative and quantitative detection of alcohol in blood. Copyright© by the Editorial Department of Journal of Forensic Medicine

  6. Platform for Automated Real-Time High Performance Analytics on Medical Image Data.

    PubMed

    Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A

    2018-03-01

    Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.

  7. The effects of total laboratory automation on the management of a clinical chemistry laboratory. Retrospective analysis of 36 years.

    PubMed

    Sarkozi, Laszlo; Simson, Elkin; Ramanathan, Lakshmi

    2003-03-01

    Thirty-six years of data and laboratory practice at our institution have enabled us to follow the effects of analytical automation, and more recently of pre-analytical and post-analytical automation, on productivity, cost reduction, and quality of service. In 1998, we began operating a pre- and post-analytical automation system (robotics), together with an advanced laboratory information system, to process specimens prior to analysis and deliver them to various automated analytical instruments, specimen outlet racks, and finally to refrigerated stockyards. After 3 years of continuous operation, we compared the chemistry part of the system with the prior 33 years and quantitated the financial impact of the various stages of automation. Between 1965 and 2000, the Consumer Price Index increased by a factor of 5.5 in the United States. During the same 36 years, productivity in our institution's Chemistry Department (the number of reported test results/employee/year) increased from 10,600 to 104,558 (nearly 10-fold). When expressed in constant 1965 dollars, the total cost per test decreased from 0.79 dollars to 0.15 dollars. Turnaround time for availability of results on patient units decreased to the extent that Stat specimens requiring a turnaround time of <1 h no longer need to be separately prepared or prioritized on the system. Our experience shows that the introduction of a robotics system for perianalytical automation has brought a large improvement in productivity together with decreased operational cost. It enabled us to significantly increase our workload while reducing personnel. In addition, stats are handled easily, and there are benefits such as safer working conditions and improved sample identification, which are difficult to quantify at this stage.

  8. Quantifying Vocal Mimicry in the Greater Racket-Tailed Drongo: A Comparison of Automated Methods and Human Assessment

    PubMed Central

    Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini

    2014-01-01

    Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of 95% only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
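
    The feature-plus-distance idea can be sketched with librosa: MFCCs summarize each call's spectral envelope, and a distance metric scores similarity between a mimicked call and a putative model call. The file names and the time-averaging step are assumptions, not the authors' pipeline.

    ```python
    # Illustrative sketch: compare two calls via time-averaged MFCC vectors.
    import numpy as np
    import librosa

    def mfcc_profile(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=None)              # keep native sample rate
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return m.mean(axis=1)                            # time-averaged MFCC vector

    drongo = mfcc_profile("drongo_mimic.wav")            # hypothetical recordings
    model = mfcc_profile("model_species.wav")
    print("Euclidean distance:", np.linalg.norm(drongo - model))
    ```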

  9. Quantitative phase-digital holographic microscopy: a new imaging modality to identify original cellular biomarkers of diseases

    NASA Astrophysics Data System (ADS)

    Marquet, P.; Rothenfusser, K.; Rappaz, B.; Depeursinge, C.; Jourdain, P.; Magistretti, P. J.

    2016-03-01

    Quantitative phase microscopy (QPM) has recently emerged as a powerful label-free technique in the field of living cell imaging, allowing cell structure and dynamics to be measured non-invasively with nanometric axial sensitivity. Since the phase retardation of a light wave transmitted through the observed cells, namely the quantitative phase signal (QPS), is sensitive to both cellular thickness and the intracellular refractive index related to the cellular content, its accurate analysis allows various cell parameters to be derived and specific cell processes to be monitored, and is very likely to identify new cell biomarkers. Specifically, quantitative phase-digital holographic microscopy (QP-DHM), thanks to its numerical flexibility facilitating parallelization and automation, represents an appealing imaging modality both to identify original cellular biomarkers of diseases and to explore the underlying pathophysiological processes.

  10. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  11. Automated analysis of high-content microscopy data with deep learning.

    PubMed

    Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J

    2017-04-18

    Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, which requires extensive effort. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
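
    For orientation, a toy convolutional classifier in PyTorch of the general kind used for such localization calls is sketched below; the layer sizes, input crop size, and class count are assumptions and do not reproduce the published DeepLoc architecture.

    ```python
    # A minimal sketch, assuming 64x64 single-channel cell crops and an
    # arbitrary class count; not the published DeepLoc network.
    import torch
    import torch.nn as nn

    class TinyLocNet(nn.Module):
        def __init__(self, n_classes=15):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):               # x: (batch, 1, 64, 64) crops
            h = self.features(x)
            return self.head(h.flatten(1))  # per-class logits

    logits = TinyLocNet()(torch.randn(8, 1, 64, 64))
    print(logits.shape)                     # torch.Size([8, 15])
    ```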

  12. An automated gas chromatography time-of-flight mass spectrometry instrument for the quantitative analysis of halocarbons in air

    NASA Astrophysics Data System (ADS)

    Obersteiner, F.; Bönisch, H.; Engel, A.

    2016-01-01

    We present the characterization and application of a new gas chromatography time-of-flight mass spectrometry instrument (GC-TOFMS) for the quantitative analysis of halocarbons in air samples. The setup comprises three fundamental enhancements compared to our earlier work (Hoker et al., 2015): (1) full automation, (2) a mass resolving power R = m/Δm of the TOFMS (Tofwerk AG, Switzerland) increased up to 4000 and (3) a fully accessible data format for the mass spectrometric data. Automation in combination with the accessible data allowed an in-depth characterization of the instrument. Mass accuracy was found to be approximately 5 ppm on average after automatic recalibration of the mass axis in each measurement. A TOFMS configuration giving R = 3500 was chosen to provide an R-to-sensitivity ratio suitable for our purpose. Calculated detection limits are as low as a few femtograms thanks to the accurate mass information. The precision for substance quantification was at best 0.15% for an individual measurement and was in general mainly determined by the signal-to-noise ratio of the chromatographic peak. Detector non-linearity was found to be insignificant up to a mixing ratio of roughly 150 ppt at 0.5 L sampled volume. At higher concentrations, non-linearities of a few percent were observed (precision level: 0.2%) but could be attributed to a potential source within the detection system. A straightforward correction for those non-linearities was applied in data processing, again by exploiting the accurate mass information. Based on the overall characterization results, the GC-TOFMS instrument was found to be very well suited for quantitative halocarbon trace gas observation and a considerable step forward compared with scanning quadrupole MS with low mass resolving power and with a TOFMS technique previously reported to be non-linear and restricted to a small dynamic range.

  13. Computerized image analysis for quantitative neuronal phenotyping in zebrafish.

    PubMed

    Liu, Tianming; Lu, Jianfeng; Wang, Ye; Campbell, William A; Huang, Ling; Zhu, Jinmin; Xia, Weiming; Wong, Stephen T C

    2006-06-15

    An integrated microscope image analysis pipeline is developed for automatic analysis and quantification of phenotypes in zebrafish with altered expression of Alzheimer's disease (AD)-linked genes. We hypothesize that a slight impairment of neuronal integrity in a large number of zebrafish carrying the mutant genotype can be detected through the computerized image analysis method. Key functionalities of our zebrafish image processing pipeline include quantification of neuron loss in zebrafish embryos due to knockdown of AD-linked genes, automatic detection of defective somites, and quantitative measurement of gene expression levels in zebrafish with altered expression of AD-linked genes or treatment with a chemical compound. These quantitative measurements enable the archival of analyzed results and relevant metadata. The structured database is organized for statistical analysis and data modeling to better understand neuronal integrity and phenotypic changes of zebrafish under different perturbations. Our results show that the computerized analysis is comparable to manual counting, with equivalent accuracy and improved efficiency and consistency. Development of such an automated data analysis pipeline represents a significant step forward in achieving accurate and reproducible quantification of neuronal phenotypes in large-scale or high-throughput zebrafish imaging studies.
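
    One common way to implement the neuron-counting step is Laplacian-of-Gaussian blob detection, sketched below with scikit-image under assumed parameters; this stands in for, and is not, the pipeline's actual detector.

    ```python
    # Hedged sketch of automated neuron counting on a fluorescence image;
    # the input file and detection parameters are illustrative assumptions.
    from skimage import io
    from skimage.feature import blob_log

    img = io.imread("zebrafish_neurons.png", as_gray=True)  # hypothetical image
    blobs = blob_log(img, min_sigma=2, max_sigma=6, threshold=0.1)
    print("neurons detected:", len(blobs))                  # one row per blob
    ```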

  14. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    PubMed

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT datasets of 9 severely resorbed extraction sockets were analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. The techniques were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P < 0.0001). Automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method in Mimics and the automated method in ImageJ, respectively. The proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity, and increased user friendliness. The clinical application of this method enables three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.

  15. An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images.

    PubMed

    Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-08-01

    Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician, which is subjective and not quantitative. A method to automatically quantify visibility in capsule endoscopic images has not previously been reported. Generally, when designing automated image recognition programs, physicians must provide training images; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas in capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas was compared between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without physician intervention. The rate of detection of visible areas was equivalent for the supervised learning program and for our automated self-learning program, and the visible areas automatically identified by the self-learning program correlated with the areas identified by an experienced physician. We thus developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.

  16. Automated segmentation and reconstruction of patient-specific cardiac anatomy and pathology from in vivo MRI*

    NASA Astrophysics Data System (ADS)

    Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey

    2012-12-01

    This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify pathological tissue in patient data sets. The results were validated by two expert cardiologists, and in one data set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on each patient's entire MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used to effectively tag regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
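
    The DoG filtering stage can be written compactly, as in this sketch; the sigma values are assumptions chosen for illustration rather than the paper's settings.

    ```python
    # Minimal difference-of-Gaussians (DoG) sketch for boundary enhancement.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog(image, sigma_low=1.0, sigma_high=4.0):
        # Subtracting a coarse blur from a fine blur acts as a band-pass
        # filter that highlights boundaries at the scale of interest.
        return gaussian_filter(image, sigma_low) - gaussian_filter(image, sigma_high)

    mri_slice = np.random.rand(256, 256)   # placeholder for an MRI slice
    edges = dog(mri_slice)
    print(edges.shape, edges.dtype)
    ```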

  17. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  18. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells.

    PubMed

    Park, Han Sang; Rinehart, Matthew T; Walzer, Katelyn A; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that relies heavily on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at the trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, no individual descriptor separates the populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy, up to 99.7%, in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for the late trophozoite or schizont stage, and the late trophozoite and schizont stages most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis.
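
    The classifier comparison can be reproduced in outline with scikit-learn stand-ins for LDC, LR, and NNC, as sketched below on synthetic data in place of the 23 phase-derived descriptors.

    ```python
    # Hedged sketch of the three-classifier comparison on placeholder data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 23))   # synthetic stand-in for 23 descriptors
    y = rng.integers(0, 2, 400)      # 0 = uninfected, 1 = infected (synthetic)

    for name, clf in [("LDC", LinearDiscriminantAnalysis()),
                      ("LR", LogisticRegression(max_iter=1000)),
                      ("NNC", KNeighborsClassifier(n_neighbors=5))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: cross-validated accuracy = {acc:.3f}")
    ```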

  19. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells

    PubMed Central

    Park, Han Sang; Rinehart, Matthew T.; Walzer, Katelyn A.; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that relies heavily on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at the trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, no individual descriptor separates the populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy, up to 99.7%, in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity, with early trophozoites most often mistaken for the late trophozoite or schizont stage, and the late trophozoite and schizont stages most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis. PMID:27636719

  20. Avoiding hard chromatographic segmentation: A moving window approach for the automated resolution of gas chromatography-mass spectrometry-based metabolomics signals by multivariate methods.

    PubMed

    Domingo-Almenara, Xavier; Perera, Alexandre; Brezmes, Jesus

    2016-11-25

    Gas chromatography-mass spectrometry (GC-MS) produces large and complex datasets characterized by co-eluting compounds at trace levels and by distinct compound ion redundancy resulting from the extensive fragmentation caused by electron impact ionization. Compounds in GC-MS data can be resolved by taking advantage of its multivariate nature and applying multivariate resolution methods. However, multivariate methods have to be applied to small regions of the chromatogram, and chromatograms are therefore segmented prior to the application of the algorithms. Automating this segmentation process is a challenging task, as it implies separating informative data from noise in the chromatogram. This study demonstrates the capabilities of independent component analysis-orthogonal signal deconvolution (ICA-OSD) and multivariate curve resolution-alternating least squares (MCR-ALS) with an overlapping moving-window implementation that avoids the typical hard chromatographic segmentation. Also, after being resolved, compounds are aligned across samples by an automated alignment algorithm. We evaluated the proposed methods through a quantitative analysis of GC-qTOF MS data from 25 serum samples, comparing the quantitative performance of both the moving-window ICA-OSD and MCR-ALS-based implementations with the quantification of 33 compounds by the XCMS package. Most of the coefficients of determination indicated a high correlation (R² > 0.90) for both the ICA-OSD and MCR-ALS moving-window approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
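
    A hedged sketch of the moving-window idea, using FastICA as a generic stand-in for the ICA-OSD/MCR-ALS implementations: overlapping retention-time windows are decomposed independently so that no hard segmentation boundary is imposed. The window sizes and the synthetic data matrix are assumptions.

    ```python
    # Illustrative moving-window multivariate decomposition; not the
    # authors' ICA-OSD or MCR-ALS code.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    X = rng.random((2000, 300))        # scans x m/z channels (synthetic)
    win, step = 200, 100               # overlapping retention-time windows

    for start in range(0, X.shape[0] - win + 1, step):
        segment = X[start:start + win]
        ica = FastICA(n_components=3, random_state=0)
        elution = ica.fit_transform(segment)  # component elution profiles
        spectra = ica.mixing_                 # component spectra (channels x k)
        # ...resolved components from adjacent windows would then be merged
        # and aligned across samples, as described above.
    ```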

  1. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large spread of the Hounsfield unit distribution in the scan and the wide spread of tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
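
    The volume-based similarity measures used for validation reduce to a few lines of NumPy; the sketch below computes Dice and Jaccard coefficients for two toy masks.

    ```python
    # Dice and Jaccard overlap between an automated and a gold-standard mask.
    import numpy as np

    def dice(a, b):
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def jaccard(a, b):
        inter = np.logical_and(a, b).sum()
        return inter / np.logical_or(a, b).sum()

    auto = np.zeros((64, 64, 64), bool); auto[10:40, 10:40, 10:40] = True
    gold = np.zeros((64, 64, 64), bool); gold[12:42, 10:40, 10:40] = True
    print(f"Dice = {dice(auto, gold):.3f}, Jaccard = {jaccard(auto, gold):.3f}")
    ```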

  2. Automated segmentation of blood-flow regions in large thoracic arteries using 3D-cine PC-MRI measurements.

    PubMed

    van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna

    2012-03-01

    Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task that requires automation. We performed automated segmentation of large thoracic arteries based solely on 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, using an active surface model that is fast and topologically stable. The active surface model requires an initial surface approximating the desired segmentation, and a method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces based on the surface structure and on image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry, which improves the convergence time of the active surface and facilitates parametrization. In summary, an active surface approach for vessel lumen segmentation was developed that is suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated.
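
    The initial-surface construction described above, a voxel-wise temporal maximum of flow speed followed by thresholding, can be sketched as follows; the array shapes and threshold fraction are assumptions.

    ```python
    # Hedged sketch of seeding the lumen from the temporal maximum of speed.
    import numpy as np

    # velocity: (time, z, y, x, 3) phase-contrast velocity components (synthetic)
    velocity = np.random.rand(20, 32, 64, 64, 3)
    speed = np.linalg.norm(velocity, axis=-1)   # voxel-wise flow speed
    temporal_max = speed.max(axis=0)            # maximum over the cardiac cycle

    # Threshold the temporal-maximum map to approximate the vessel lumen;
    # the fraction is an assumed value, not the paper's setting.
    lumen_seed = temporal_max > 0.7 * temporal_max.max()
    print("seed voxels:", int(lumen_seed.sum()))
    ```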

  3. Automated measurement of estrogen receptor in breast cancer: a comparison of fluorescent and chromogenic methods of measurement

    PubMed Central

    Zarrella, Elizabeth; Coulter, Madeline; Welsh, Allison; Carvajal, Daniel; Schalper, Kurt; Harigopal, Malini; Rimm, David; Neumeister, Veronique

    2016-01-01

    While FDA-approved methods of assessing estrogen receptor (ER) are “fit for purpose”, they represent a 30-year-old technology. New quantitative methods, both chromogenic and fluorescent, have been developed, and studies have shown that these methods increase the accuracy of ER assessment. Here, we compare three methods of ER detection and assessment on two retrospective tissue microarray cohorts of breast cancer patients: estimates of percent positive nuclei by pathologists and by Aperio’s nuclear algorithm (standard chromogenic immunostaining), and immunofluorescence as quantified with the AQUA® method of quantitative immunofluorescence (QIF). Reproducibility was excellent (R² > 0.95) between users for both automated analysis methods, and the Aperio and QIF scoring results were also highly correlated, despite the different detection systems. The subjective readings show lower reproducibility and a discontinuous, bimodal distribution of scores not seen with either mechanized method. Kaplan-Meier analysis of 10-year disease-free survival was significant for each method (pathologist, P = 0.0019; Aperio, P = 0.0053; AQUA, P = 0.0026), but there were discrepancies in patient classification in 19 out of 233 cases analyzed. Of these, 11 were visually positive by both chromogenic and fluorescent detection: in 10 cases the Aperio nuclear algorithm labeled the nuclei as negative, and in 1 case the AQUA score was just under the cutoff for positivity (determined by an Index TMA). In contrast, 8 of the 19 discrepant cases had clear nuclear positivity by fluorescence that could not be visualized by chromogenic detection, perhaps due to low positivity masked by the hematoxylin counterstain. These results demonstrate that automated systems enable objective, precise quantification of ER. Furthermore, immunofluorescence detection offers the additional advantage of a signal that cannot be masked by a counterstaining agent. These data support the use of automated methods for measurement of this and other biomarkers that may be used in companion diagnostic tests. PMID:27348626

  4. Meeting Report: Tissue-based Image Analysis.

    PubMed

    Saravanan, Chandra; Schumacher, Vanessa; Brown, Danielle; Dunstan, Robert; Galarneau, Jean-Rene; Odin, Marielle; Mishra, Sasmita

    2017-10-01

    Quantitative image analysis (IA) is a rapidly evolving area of digital pathology. Although not a new concept, the quantification of histological features on photomicrographs used to be cumbersome, resource-intensive, and limited to specialists and specialized laboratories. Recent technological advances like highly efficient automated whole slide digitizer (scanner) systems, innovative IA platforms, and the emergence of pathologist-friendly image annotation and analysis systems mean that quantification of features on histological digital images will become increasingly prominent in pathologists' daily professional lives. The added value of quantitative IA in pathology includes confirmation of equivocal findings noted by a pathologist, increasing the sensitivity of feature detection, quantification of signal intensity, and improving efficiency. There is no denying that quantitative IA is part of the future of pathology; however, there are also several potential pitfalls when trying to estimate volumetric features from limited 2-dimensional sections. This continuing education session on quantitative IA offered a broad overview of the field; a hands-on toxicologic pathologist experience with IA principles, tools, and workflows; a discussion on how to apply basic stereology principles in order to minimize bias in IA; and finally, a reflection on the future of IA in the toxicologic pathology field.

  5. New Methods for Analysis of Spatial Distribution and Coaggregation of Microbial Populations in Complex Biofilms

    PubMed Central

    Almstrand, Robert; Daims, Holger; Persson, Frank; Sörensson, Fred

    2013-01-01

    In biofilms, microbial activities form gradients of substrates and electron acceptors, creating a complex landscape of microhabitats, often resulting in structured localization of the microbial populations present. To understand the dynamic interplay between and within these populations, quantitative measurements and statistical analysis of their localization patterns within the biofilms are necessary, and adequate automated tools for such analyses are needed. We have designed and applied new methods for fluorescence in situ hybridization (FISH) and digital image analysis of directionally dependent (anisotropic) multispecies biofilms. A sequential-FISH approach allowed multiple populations to be detected in a biofilm sample. This was combined with an automated tool for vertical-distribution analysis by generating in silico biofilm slices and the recently developed Inflate algorithm for coaggregation analysis of microbial populations in anisotropic biofilms. As a proof of principle, we show distinct stratification patterns of the ammonia oxidizers Nitrosomonas oligotropha subclusters I and II and the nitrite oxidizer Nitrospira sublineage I in three different types of wastewater biofilms, suggesting niche differentiation between the N. oligotropha subclusters, which could explain their coexistence in the same biofilms. Coaggregation analysis showed that N. oligotropha subcluster II aggregated closer to Nitrospira than did N. oligotropha subcluster I in a pilot plant nitrifying trickling filter (NTF) and a moving-bed biofilm reactor (MBBR), but not in a full-scale NTF, indicating important ecophysiological differences between these phylogenetically closely related subclusters. By using high-resolution quantitative methods applicable to any multispecies biofilm in general, the ecological interactions of these complex ecosystems can be understood in more detail. PMID:23892743

  6. Textural Maturity Analysis and Sedimentary Environment Discrimination Based on Grain Shape Data

    NASA Astrophysics Data System (ADS)

    Tunwal, M.; Mulchrone, K. F.; Meere, P. A.

    2017-12-01

    Morphological analysis of clastic sedimentary grains is an important source of information on the processes involved in their formation, transportation, and deposition. However, a standardised approach to quantitative grain shape analysis is generally lacking. In this contribution we report on a study in which fully automated image analysis techniques were applied to loose sediment samples collected from glacial, aeolian, beach, and fluvial environments. A range of shape parameters is evaluated for usefulness in the textural characterisation of populations of grains. The utility of grain shape data in ranking the textural maturity of samples within a given sedimentary environment is evaluated, and discrimination of sedimentary environment on the basis of grain shape information is explored. The data gathered demonstrate a clear progression in textural maturity in terms of roundness, angularity, irregularity, fractal dimension, convexity, solidity, and rectangularity. Textural maturity can be readily categorised using automated grain shape parameter analysis. However, absolute discrimination between different depositional environments on the basis of shape parameters alone is less certain. For example, the aeolian environment is quite distinct, whereas fluvial, glacial, and beach samples are inherently variable and tend to overlap in terms of textural maturity, most likely because a collection of similar processes and sources operates within these environments. This study strongly demonstrates the merit of quantitative population-based shape parameter analysis of texture and indicates that it can play a key role in characterising both loose and consolidated sediments. This project is funded by the Irish Petroleum Infrastructure Programme (www.pip.ie).
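
    Per-grain shape parameters of the kind analysed above can be extracted with scikit-image region properties, as in this illustrative sketch; the segmentation, size filter, and rectangularity proxy (axis-aligned bounding box) are assumptions, not the study's pipeline.

    ```python
    # Sketch of fully automated per-grain shape parameters; the input
    # image and thresholding are placeholders.
    import numpy as np
    from skimage import io, filters, measure

    img = io.imread("grains.png", as_gray=True)   # hypothetical sample image
    labels = measure.label(img > filters.threshold_otsu(img))

    for region in measure.regionprops(labels):
        if region.area < 50:                      # skip noise specks
            continue
        circ = 4 * np.pi * region.area / region.perimeter ** 2  # circularity
        rect = region.area / region.bbox_area     # rectangularity proxy
        print(f"grain {region.label}: circularity={circ:.2f}, "
              f"solidity={region.solidity:.2f}, rectangularity={rect:.2f}")
    ```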

  7. Clinical Severity Classification using Automated Conjunctival Hyperemia Analysis Software in Patients with Superior Limbic Keratoconjunctivitis.

    PubMed

    Kurita, Junki; Shoji, Jun; Inada, Noriko; Yoneda, Tsuyoshi; Sumi, Tamaki; Kobayashi, Masahiko; Hoshikawa, Yasuhiro; Fukushima, Atsuki; Yamagami, Satoru

    2018-06-01

    Digitization of clinical observations is necessary for assessing the severity of superior limbic keratoconjunctivitis (SLK). This study aimed to examine hyperemia in patients with SLK using a novel quantitative marker. We included six eyes of six patients with both dry eye disease and SLK (SLK group) and eight eyes of eight patients with Sjögren syndrome (SS group). We obtained objective finding scores by slit-lamp examination and simultaneously calculated the superior hyperemia index (SHI) with automated conjunctival hyperemia analysis software using photographs of the anterior segment. Three objective finding scores were determined: papillary formation of the superior palpebral conjunctiva, superior limbal hyperemia and swelling, and superior corneal epitheliopathy. The SHI was calculated as the superior/temporal ratio of bulbar conjunctival hyperemia using the software. Fisher's exact test was used to compare the proportion of patients with a high SHI (≥1.07) between the SLK and SS groups; P values < 0.05 were considered statistically significant. The SHI (mean ± standard deviation) in the SLK and SS groups was 1.19 ± 0.50 and 0.69 ± 0.24, respectively. The number of patients with a high SHI (≥1.07) was significantly greater in the SLK group than in the SS group (p < 0.05). The sensitivity and specificity of the SHI in the differential diagnosis between SS and SLK were 66.7% and 87.5%, respectively. An analysis of the association between the objective finding scores and the SHI showed that the SHI tended to track the severity of the superior limbal hyperemia and swelling score in the SLK group. The SHI calculated using the automated conjunctival hyperemia analysis software successfully quantified superior bulbar conjunctival hyperemia and may be a useful tool for the differential diagnosis between SS and SLK and for the quantitative follow-up of patients with SLK.

  8. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  9. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data

    PubMed Central

    Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.

    2016-01-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal-appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893

  10. The Microphenotron: a robotic miniaturized plant phenotyping platform with diverse applications in chemical biology.

    PubMed

    Burrell, Thomas; Fozard, Susan; Holroyd, Geoff H; French, Andrew P; Pound, Michael P; Bigley, Christopher J; James Taylor, C; Forde, Brian G

    2017-01-01

    Chemical genetics provides a powerful alternative to conventional genetics for understanding gene function. However, its application to plants has been limited by the lack of a technology that allows detailed phenotyping of whole-seedling development in the context of a high-throughput chemical screen. We have therefore sought to develop an automated micro-phenotyping platform that would allow both root and shoot development to be monitored under conditions where the phenotypic effects of large numbers of small molecules can be assessed. The 'Microphenotron' platform uses 96-well microtitre plates to deliver chemical treatments to seedlings of Arabidopsis thaliana L. and is based around four components: (a) the 'Phytostrip', a novel seedling growth device that enables chemical treatments to be combined with the automated capture of images of developing roots and shoots; (b) an illuminated robotic platform that uses a commercially available robotic manipulator to capture images of developing shoots and roots; (c) software to control the sequence of robotic movements and integrate these with the image capture process; (d) purpose-made image analysis software for automated extraction of quantitative phenotypic data. Imaging of each plate (representing 80 separate assays) takes 4 min and can easily be performed daily for time-course studies. As currently configured, the Microphenotron has a capacity of 54 microtitre plates in a growth room footprint of 2.1 m², giving a potential throughput of up to 4320 chemical treatments in a typical 10-day experiment. The Microphenotron has been validated by using it to screen a collection of 800 natural compounds for qualitative effects on root development and to perform a quantitative analysis of the effects of a range of concentrations of nitrate and ammonium on seedling development. The Microphenotron is an automated screening platform that for the first time is able to combine large numbers of individual chemical treatments with a detailed analysis of whole-seedling development, and particularly root system development. The Microphenotron should provide a powerful new tool for chemical genetics and for wider chemical biology applications, including the development of natural and synthetic chemical products for improved agricultural sustainability.

  11. Automated DNA extraction platforms offer solutions to challenges of assessing microbial biofouling in oil production facilities.

    PubMed

    Oldham, Athenia L; Drilling, Heather S; Stamps, Blake W; Stevenson, Bradley S; Duncan, Kathleen E

    2012-11-20

    The analysis of microbial assemblages in industrial, marine, and medical systems can inform decisions regarding quality control or mitigation. Modern molecular approaches to detect, characterize, and quantify microorganisms provide rapid and thorough measures unbiased by the need for cultivation. The requirement of timely extraction of high-quality nucleic acids for molecular analysis is faced with specific challenges when used to study the influence of microorganisms on oil production. Production facilities are often ill equipped for nucleic acid extraction techniques, making the preservation and transportation of samples off-site a priority. As a potential solution, the possibility of extracting nucleic acids on-site using automated platforms was tested. The performance of two such platforms, the Fujifilm QuickGene-Mini80™ and the Promega Maxwell®16, was compared to a widely used manual extraction kit, the MOBIO PowerBiofilm™ DNA Isolation Kit, in terms of ease of operation, DNA quality, and microbial community composition. Three pipeline biofilm samples were chosen for these comparisons; two contained crude oil and corrosion products, and the third transported seawater. Overall, the two more automated extraction platforms produced higher DNA yields than the manual approach. DNA quality was evaluated for amplification by quantitative PCR (qPCR) and end-point PCR to generate 454 pyrosequencing libraries for 16S rRNA microbial community analysis. Microbial community structure, as assessed by DGGE analysis and pyrosequencing, was comparable among the three extraction methods. Therefore, the use of automated extraction platforms should enhance the feasibility of rapidly evaluating microbial biofouling at remote locations or those with limited resources.

  12. The iFly Tracking System for an Automated Locomotor and Behavioural Analysis of Drosophila melanogaster

    PubMed Central

    Kohlhoff, Kai J.; Jahn, Thomas R.; Lomas, David A.; Dobson, Christopher M.; Crowther, Damian C.; Vendruscolo, Michele

    2016-01-01

    The use of animal models in medical research provides insights into molecular and cellular mechanisms of human disease, and helps identify and test novel therapeutic strategies. Drosophila melanogaster – the common fruit fly – is one of the most established model organisms, as its study can be performed more readily and with far less expense than for other model animal systems, such as mice, fish, or indeed primates. In the case of fruit flies, standard assays are based on the analysis of longevity and basic locomotor functions. Here we present the iFly tracking system, which increases the amount of quantitative information that can be extracted from these studies while significantly reducing their duration and costs. The iFly system uses a single camera to simultaneously track the trajectories of up to 20 individual flies with about 100 μm spatial and 33 ms temporal resolution. The statistical analysis of fly movements recorded with such accuracy makes it possible to perform a rapid and fully automated quantitative analysis of locomotor changes in response to a range of different stimuli. We anticipate that the iFly method will very considerably reduce the costs and duration of testing genetic and pharmacological interventions in Drosophila models, including an earlier detection of behavioural changes and a large increase in throughput compared to current longevity and locomotor assays. PMID:21698336
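
    A tracker of this kind reduces, at its core, to detecting fly centroids in each frame and linking detections across consecutive frames by minimum-displacement assignment. The sketch below illustrates only that linking step, assuming SciPy is available and that inter-frame motion is small compared with fly spacing; it is a generic illustration, not the iFly code.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_frames(prev_xy, curr_xy):
            """Link fly centroids between consecutive frames by minimizing
            total squared displacement (Hungarian assignment)."""
            # Pairwise squared distances between previous and current centroids
            cost = ((prev_xy[:, None, :] - curr_xy[None, :, :]) ** 2).sum(-1)
            _, cols = linear_sum_assignment(cost)
            return cols  # curr_xy[cols[i]] continues trajectory i

        # Toy example: three flies, small displacements between frames
        prev = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
        curr = np.array([[5.1, 5.2], [0.2, -0.1], [9.8, 0.3]])
        print(link_frames(prev, curr))  # -> [1 0 2]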

  13. Confirmatory and quantitative analysis of fatty acid esters of hydroxy fatty acids in serum by solid phase extraction coupled to liquid chromatography tandem mass spectrometry.

    PubMed

    López-Bascón, María Asunción; Calderón-Santiago, Mónica; Priego-Capote, Feliciano

    2016-11-02

    A novel class of endogenous mammalian lipids endowed with antidiabetic and anti-inflammatory properties has been recently discovered. These are fatty acid esters of hydroxy fatty acids (FAHFAs), formed by condensation between a hydroxy fatty acid and a fatty acid. FAHFAs are present in human serum and tissues at low nanomolar concentrations; profiling these compounds in clinical samples therefore demands high sensitivity and selectivity. An automated qualitative and quantitative method based on on-line coupling between solid phase extraction and liquid chromatography-tandem mass spectrometry has been developed for the determination of FAHFAs in serum with the required sensitivity and selectivity. Matrix effects were evaluated by preparation of calibration models in serum and methanol. Recovery factors ranged between 73.8 and 100% in serum. The within-day variability ranged from 7.1 to 13.8%, and the between-days variability varied from 9.3 to 21.6%, which are quite acceptable values taking into account the low concentration levels at which the target analytes are found. The method has been applied to a cohort of human serum samples to estimate concentration profiles as a function of glycaemic state and obesity. Statistical analysis revealed three FAHFAs with levels significantly different depending on the glycaemic state or the body mass index. This automated method could be implemented in high-throughput analysis with minimum user assistance. Copyright © 2016 Elsevier B.V. All rights reserved.
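
    Matrix effects evaluated with paired calibration models, as above, are commonly summarized by the ratio of the calibration slope in matrix (serum) to that in neat solvent (methanol); a ratio near 100% indicates little ion suppression or enhancement. A minimal sketch of that calculation, with invented numbers rather than the authors' data:

        import numpy as np

        conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0])            # spiked level (nM)
        resp_methanol = np.array([0.9, 2.4, 5.2, 10.3, 24.8])   # peak-area ratios
        resp_serum    = np.array([0.8, 2.1, 4.4,  8.9, 21.0])

        slope_solvent = np.polyfit(conc, resp_methanol, 1)[0]
        slope_matrix  = np.polyfit(conc, resp_serum, 1)[0]
        matrix_effect = 100.0 * slope_matrix / slope_solvent
        print(f"matrix effect: {matrix_effect:.1f}% (<100% suggests suppression)")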

  14. What computational non-targeted mass spectrometry-based metabolomics can gain from shotgun proteomics.

    PubMed

    Hamzeiy, Hamid; Cox, Jürgen

    2017-02-01

    Computational workflows for mass spectrometry-based shotgun proteomics and untargeted metabolomics share many steps. Despite the similarities, untargeted metabolomics is lagging behind in terms of reliable, fully automated quantitative data analysis. We argue that metabolomics would benefit strongly from the adaptation of successful automated proteomics workflows. MaxQuant is a popular platform for proteomics data analysis and is widely considered to be superior in achieving high precursor mass accuracies through advanced nonlinear recalibration, usually leading to five- to ten-fold better accuracy in complex LC-MS/MS runs. This translates to a sharp decrease in the number of peptide candidates per measured feature, thereby strongly improving the coverage of identified peptides. We argue that similar strategies can be applied to untargeted metabolomics, leading to equivalent improvements in metabolite identification. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
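
    The nonlinear recalibration credited here with MaxQuant's precursor mass accuracy can be pictured as fitting a smooth mass-error model to confidently identified species and subtracting it from every measurement. A toy sketch under that assumption (a production implementation would also condition on retention time and intensity):

        import numpy as np

        # Observed mass errors (ppm) of confidently identified calibrants
        mz      = np.array([300.0, 500.0, 800.0, 1200.0, 1800.0, 2400.0])
        err_ppm = np.array([4.2, 3.1, 1.8, 0.2, -1.5, -3.0])

        # Fit a low-order polynomial error model err(m/z) and invert it
        model = np.polyfit(mz, err_ppm, deg=2)

        def recalibrate(mz_obs):
            correction_ppm = np.polyval(model, mz_obs)
            return mz_obs * (1.0 - correction_ppm * 1e-6)

        print(recalibrate(np.array([1000.0, 2000.0])))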

  15. Automated fiber tracking and tissue characterization of the anterior cruciate ligament with optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Priya S.; Guo, Jiaqi; Yao, Xinwen; Qu, Dovina; Lu, Helen H.; Hendon, Christine P.

    2017-02-01

    The directionality of collagen fibers across the anterior cruciate ligament (ACL), as well as the insertion of this key ligament into bone, is important for understanding the mechanical integrity and functionality of this complex tissue. Quantitative analysis of three-dimensional fiber directionality is of particular interest due to the physiological, mechanical, and biological heterogeneity inherent across the ACL-to-bone junction, the behavior of the ligament under mechanical stress, and the usefulness of this information in designing tissue-engineered grafts. We have developed an algorithm to characterize optical coherence tomography (OCT) image volumes of the ACL. We present an automated algorithm for measuring ligamentous fiber angles and extracting attenuation and backscattering coefficients of ligament, interface, and bone regions within mature and immature bovine ACL insertion samples. Future directions include translating this algorithm for real-time processing to allow three-dimensional volumetric analysis within dynamically moving samples.
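
    Attenuation coefficients of the sort extracted here are often estimated by fitting a single-scattering decay model, I(z) ≈ I0·exp(−2µz), to each OCT depth profile; a log-linear least-squares sketch on synthetic data (an illustration of the principle, not the authors' algorithm):

        import numpy as np

        z = np.linspace(0.0, 1.0, 200)        # depth (mm)
        mu_true, i0 = 2.5, 1000.0             # attenuation (1/mm), surface intensity
        intensity = i0 * np.exp(-2 * mu_true * z)
        intensity *= np.random.lognormal(0.0, 0.05, z.size)  # speckle-like noise

        # ln I = ln I0 - 2*mu*z, so the slope of a linear fit gives mu
        slope, _ = np.polyfit(z, np.log(intensity), 1)
        print(f"estimated attenuation: {-slope / 2:.2f} 1/mm")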

  16. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection, with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and thus simplifies the analysis. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting geometric deformations. The accuracy of the algorithm is high, with reliability better than 1 µm when evaluated on OCT images of the same gauge-block step-height reference. The method may be suitable for industrial application to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  17. Printing 2-dimensional droplet array for single-cell reverse transcription quantitative PCR assay with a microfluidic robot.

    PubMed

    Zhu, Ying; Zhang, Yun-Xia; Liu, Wen-Wen; Ma, Yan; Fang, Qun; Yao, Bo

    2015-04-01

    This paper describes a nanoliter droplet array-based single-cell reverse transcription quantitative PCR (RT-qPCR) assay method for quantifying gene expression in individual cells. By sequentially printing nanoliter-scale droplets on a microchip using a microfluidic robot, all liquid-handling operations, including cell encapsulation, lysis, reverse transcription, and quantitative PCR with real-time fluorescence detection, can be achieved automatically. The inhibition effect of the cell suspension buffer on the RT-PCR assay was comprehensively studied to achieve high-sensitivity gene quantification. The present system was applied to the quantitative measurement of the expression level of mir-122 in single Huh-7 cells. A wide distribution of mir-122 expression in single cells, from 3,061 to 79,998 copies/cell, was observed, showing a high level of cell heterogeneity. With the advantages of full automation in liquid handling, a simple system structure, and flexibility in achieving multi-step operations, the present method provides a novel liquid-handling mode for single-cell gene expression analysis and has significant potential in transcriptional identification and rare cell analysis.
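
    Copy numbers per cell in RT-qPCR are conventionally read off a standard curve of quantification cycle (Cq) against log10 input copies, with the curve slope also giving the PCR efficiency. The short example below illustrates that standard calculation with invented numbers; it is not the authors' pipeline:

        import numpy as np

        # Standard curve from a dilution series: Cq = m*log10(copies) + b
        log_copies = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
        cq         = np.array([33.1, 29.8, 26.4, 23.1, 19.7])
        m, b = np.polyfit(log_copies, cq, 1)

        efficiency = 10 ** (-1.0 / m) - 1.0      # ~1.0 means perfect doubling
        copies = lambda cq_obs: 10 ** ((cq_obs - b) / m)

        print(f"PCR efficiency: {efficiency:.0%}")
        print(f"single-cell Cq of 27.5 -> {copies(27.5):,.0f} copies")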

  18. A rapid, automated approach to optimisation of multiple reaction monitoring conditions for quantitative bioanalytical mass spectrometry.

    PubMed

    Higton, D M

    2001-01-01

    An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential and the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound, compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds, and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters for MRM methods was found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.

  1. HyphArea--automated analysis of spatiotemporal fungal patterns.

    PubMed

    Baum, Tobias; Navarro-Quezada, Aura; Knogge, Wolfgang; Douchkov, Dimitar; Schweizer, Patrick; Seiffert, Udo

    2011-01-01

    In phytopathology, quantitative measurements are rarely used to assess crop plant disease symptoms; instead, a qualitative valuation by eye is often the method of choice. In order to close the gap between subjective human inspection and objective quantitative results, an automated analysis system capable of recognizing and characterizing the growth patterns of fungal hyphae in micrograph images was developed. This system should enable the efficient screening of different host-pathogen combinations (e.g., barley-Blumeria graminis, barley-Rhynchosporium secalis) using different microscopy technologies (e.g., bright field, fluorescence). An image segmentation algorithm was developed for gray-scale image data that achieved good results with several microscope imaging protocols. Furthermore, adaptability towards different host-pathogen systems was obtained by using a classification that is based on a genetic algorithm. The developed software system was named HyphArea, since the quantification of the area covered by a hyphal colony is the basic task and prerequisite for all further morphological and statistical analyses in this context. A typical use case demonstrates the utilization and basic properties of HyphArea. It was possible to detect statistically significant differences between the growth of an R. secalis wild-type strain and a virulence mutant. Copyright © 2010 Elsevier GmbH. All rights reserved.
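
    The basic quantity HyphArea reports, the area covered by a hyphal colony, amounts to segmenting the micrograph and counting foreground pixels. A minimal gray-scale sketch using Otsu thresholding (a stand-in for HyphArea's genetic-algorithm-based classification):

        import numpy as np
        from skimage.filters import threshold_otsu

        def colony_area(gray_img, px_area_um2=1.0):
            """Area and covered fraction for an image with dark hyphae
            on a bright background."""
            mask = gray_img < threshold_otsu(gray_img)
            return mask.sum() * px_area_um2, mask.mean()

        rng = np.random.default_rng(0)
        img = rng.normal(200.0, 10.0, (512, 512))            # bright background
        img[100:140, :] = rng.normal(80.0, 10.0, (40, 512))  # dark hypha-like band
        area_um2, fraction = colony_area(img)
        print(f"covered fraction: {fraction:.3f}")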

  2. Automation and validation of the Transfluor technology: a universal screening assay for G protein-coupled receptors

    NASA Astrophysics Data System (ADS)

    Hudson, Christine C.; Oakley, Robert H.; Cruickshank, Rachael D.; Rhem, Shay M.; Loomis, Carson R.

    2002-06-01

    G protein-coupled receptors (GPCRs) are historically the richest targets for drug discovery, accounting for nearly 60 percent of prescription drugs. The ligands and functions of only 200 out of possibly 1000 GPCRs are known. Screening methods that directly and accurately measure GPCR activation and inhibition are required to identify ligands for orphan receptors and to develop superior drugs for known GPCRs. Norak Biosciences utilizes the redistribution of a fluorescently labeled protein, arrestin, as a novel screen for monitoring GPCR activation. In contrast to present methods of analyzing GPCR function, the power of the Transfluor technology lies in its simplicity, large signal-to-noise ratio, and applicability to all GPCRs. Here, we demonstrate that the Transfluor technology can be automated and quantitated on high-throughput image analysis systems. Cells transfected with an arrestin-green fluorescent protein conjugate and the neurokinin-1 GPCR were seeded on 96-well plates. Activation of the NK-1 receptor with Substance P induced translocation of arrestin-GFP from the cytosol to the receptor. Image quantitation of the arrestin-GFP translocation was used to generate dose-dependent curves. These results reveal that the Transfluor technology combined with an image analysis system forms a universal platform capable of measuring ligand-receptor interactions for all GPCRs.
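
    Dose-dependent translocation curves like these are conventionally fitted with a four-parameter logistic (Hill) model to extract an EC50. A generic fitting sketch with synthetic data (not Norak's analysis code):

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(conc, bottom, top, ec50, n):
            return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

        conc = np.logspace(-10, -6, 9)                  # agonist concentration (M)
        resp = hill(conc, 100.0, 900.0, 1e-8, 1.0)      # e.g., spots per cell
        resp += np.random.normal(0.0, 20.0, conc.size)  # imaging noise

        p0 = [resp.min(), resp.max(), 1e-8, 1.0]
        params, _ = curve_fit(hill, conc, resp, p0=p0)
        print(f"EC50 ~ {params[2]:.2e} M")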

  3. Astronomical data analysis software and systems I; Proceedings of the 1st Annual Conference, Tucson, AZ, Nov. 6-8, 1991

    NASA Technical Reports Server (NTRS)

    Worrall, Diana M. (Editor); Biemesderfer, Chris (Editor); Barnes, Jeannette (Editor)

    1992-01-01

    Consideration is given to a definition of a distribution format for X-ray data, the Einstein on-line system, the NASA/IPAC extragalactic database, the Cosmic Background Explorer (COBE) astronomical databases, the ADAM software environment, the Groningen Image Processing System, the search for a common data model for astronomical data analysis systems, deconvolution for real and synthetic apertures, pitfalls in image reconstruction, a direct method for spectral and image restoration, and a description of a Poisson imagery super-resolution algorithm. Also discussed are multivariate statistics on HI and IRAS images, faint object classification using neural networks, a matched filter for improving the SNR of radio maps, automated aperture photometry of CCD images, an interactive graphics interpreter, the ROSAT extreme ultraviolet sky survey, a quantitative study of optimal extraction, automated analysis of spectra, applications of synthetic photometry, an algorithm for extra-solar planet system detection, and data reduction facilities for the William Herschel telescope.

  4. Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.

    PubMed

    Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong

    2008-04-01

    The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.

  5. Glaciated valleys in Europe and western Asia

    PubMed Central

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar

    2015-01-01

    In recent years, remote sensing, morphometric analysis, and other computational concepts and tools have invigorated the field of geomorphological mapping. Automated interpretation of digital terrain data based on impartial rules holds substantial promise for large dataset processing and objective landscape classification. However, the geomorphological realm presents tremendous complexity and challenges in the translation of qualitative descriptions into geomorphometric semantics. Here, the simple, conventional distinction of V-shaped fluvial and U-shaped glacial valleys was analyzed quantitatively using multi-scale curvature and a novel morphometric variable termed Difference of Minimum Curvature (DMC). We used this automated terrain analysis approach to produce a raster map at a scale of 1:6,000,000 showing the distribution of glaciated valleys across Europe and western Asia. The data set has a cell size of 3 arc seconds and consists of more than 40 billion grid cells. Glaciated U-shaped valleys, commonly associated with erosion by warm-based glaciers, are abundant in the alpine regions of central Europe and western Asia but also occur at the margins of mountain ice sheets in Scandinavia. The high-level correspondence with field mapping and the fully transferable semantics validate this approach for automated analysis of yet unexplored terrain around the globe and qualify it for potential applications on other planetary bodies such as Mars. PMID:27019665
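
    The DMC variable is built from principal curvatures of the smoothed terrain evaluated at two scales. One plausible reading (a sketch only; the published definition should be consulted for the exact construction) is the difference of the minimum principal curvature between a fine and a coarse smoothing scale, which a small-slope Hessian approximation makes easy to compute:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def min_curvature(dem, sigma, cell=1.0):
            """Smaller eigenvalue of the elevation Hessian after Gaussian
            smoothing (small-slope approximation of minimum curvature)."""
            z = gaussian_filter(dem, sigma)
            zxx = np.gradient(np.gradient(z, cell, axis=1), cell, axis=1)
            zyy = np.gradient(np.gradient(z, cell, axis=0), cell, axis=0)
            zxy = np.gradient(np.gradient(z, cell, axis=1), cell, axis=0)
            tr, det = zxx + zyy, zxx * zyy - zxy ** 2
            return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))

        def dmc(dem, fine_sigma=2.0, coarse_sigma=8.0):
            # Difference of minimum curvature across two smoothing scales
            return min_curvature(dem, fine_sigma) - min_curvature(dem, coarse_sigma)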

  6. Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound

    NASA Astrophysics Data System (ADS)

    Galperin, Michael

    2003-05-01

    A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0 to 5 based on a qualitative lexicon describing the ultrasound appearance of breast lesions. The purpose of this research was to assess and select one of the automated quantitative LOS scoring methods developed during preliminary studies on the reduction of benign biopsies. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies of masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers (which will require biopsy anyway). On complex cysts and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods serves CAIS accuracy best. This paper presents current results of applying statistical analysis to automated LOS scoring quantification for breast masses with known biopsy results. It was found that the First Order Ranking method yielded the most accurate results. The CAIS system (Image Companion, Data Companion software) was developed by Almen Laboratories and was used to achieve these results.

  7. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from 55 mice from three different mouse strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634

  8. GiA Roots: software for the high throughput analysis of plant root system architecture.

    PubMed

    Galkovskyi, Taras; Mileyko, Yuriy; Bucksch, Alexander; Moore, Brad; Symonova, Olga; Price, Charles A; Topp, Christopher N; Iyer-Pascuzzi, Anjali S; Zurek, Paul R; Fang, Suqin; Harer, John; Benfey, Philip N; Weitz, Joshua S

    2012-07-26

    Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis.

  9. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  10. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  11. An implementation and analysis of the Abstract Syntax Notation One and the basic encoding rules

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The details of the Abstract Syntax Notation One (ASN.1) standard and the Basic Encoding Rules (BER) standard, which collectively solve the problem of data transfer across incompatible host environments, are presented, and a compiler that was built to automate their use is described. Experiences with this compiler are also discussed, providing a quantitative analysis of the performance costs associated with the application of these standards. An evaluation is offered as to how well suited ASN.1 and BER are to solving the common data representation problem.
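
    BER's central idea is type-length-value (TLV) encoding: the INTEGER 5, for instance, becomes the three bytes 02 01 05 (tag, length, value). A toy encoder for small non-negative integers, illustrative only and unrelated to the compiler described above:

        def ber_encode_integer(value: int) -> bytes:
            """BER-encode a small non-negative INTEGER as tag-length-value."""
            assert 0 <= value < 2 ** 63
            # The extra bit keeps the sign bit clear (prepends 0x00 when needed)
            body = value.to_bytes(max(1, (value.bit_length() + 8) // 8), "big")
            return bytes([0x02, len(body)]) + body

        print(ber_encode_integer(5).hex())    # 020105
        print(ber_encode_integer(300).hex())  # 0202012c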

  12. Automated genomic DNA purification options in agricultural applications using MagneSil paramagnetic particles

    NASA Astrophysics Data System (ADS)

    Bitner, Rex M.; Koller, Susan C.

    2002-06-01

    The automated high-throughput purification of genomic DNA from plant materials can be performed using MagneSil paramagnetic particles on the Beckman-Coulter FX, BioMek 2000, and Tecan Genesis robots. Similar automated methods are available for DNA purification from animal blood. These methods eliminate organic extractions, lengthy incubations and cumbersome filter plates. The DNA is suitable for applications such as PCR and RAPD analysis. Methods are described for processing traditionally difficult samples, such as those containing large amounts of polyphenolics or oils, while still maintaining a high level of DNA purity. The robotic protocols have been optimized for agricultural applications such as marker-assisted breeding, seed-quality testing, and SNP discovery and scoring. In addition to high-yield purification of DNA from plant samples or animal blood, the use of Promega's DNA-IQ purification system is also described. This method allows for the purification of a narrow range of DNA regardless of the amount of additional DNA present in the initial sample. This simultaneous isolation and quantification of DNA allows the DNA to be used directly in applications such as PCR, SNP analysis, and RAPD, without the need for separate quantitation of the DNA.

  13. Automated Spatial Brain Normalization and Hindbrain White Matter Reference Tissue Give Improved [(18)F]-Florbetaben PET Quantitation in Alzheimer's Model Mice.

    PubMed

    Overhoff, Felix; Brendel, Matthias; Jaworska, Anna; Korzhova, Viktoria; Delker, Andreas; Probst, Federico; Focke, Carola; Gildehaus, Franz-Josef; Carlsen, Janette; Baumann, Karlheinz; Haass, Christian; Bartenstein, Peter; Herms, Jochen; Rominger, Axel

    2016-01-01

    Preclinical PET studies of β-amyloid (Aβ) accumulation are of growing importance, but comparisons between research sites require standardized and optimized methods for quantitation. Therefore, we aimed to evaluate systematically (1) the impact of an automated algorithm for spatial brain normalization, and (2) intensity scaling methods based on different reference regions for Aβ-PET in a large dataset of transgenic mice. PS2APP mice in a 6-week longitudinal setting (N = 37) and another set of PS2APP mice at a histologically assessed narrow range of Aβ burden (N = 40) were investigated by [(18)F]-florbetaben PET. Manual spatial normalization by three readers at different training levels was performed prior to application of an automated brain spatial normalization, and inter-reader agreement was assessed by Fleiss Kappa (κ). For this method, the impact of templates at different pathology stages was investigated. Four different reference regions for brain uptake normalization were used to calculate frontal cortical standardized uptake value ratios (SUVR(CTX/REF)), relative to the raw SUV(CTX). Results were compared on the basis of longitudinal stability (Cohen's d) and in reference to gold standard histopathological quantitation (Pearson's R). Application of an automated brain spatial normalization resulted in nearly perfect agreement (all κ≥0.99) between different readers, with constant or improved correlation with histology. Templates based on an inappropriate pathology stage resulted in up to 2.9% systematic bias for SUVR(CTX/REF). All SUVR(CTX/REF) methods performed better than SUV(CTX) both with regard to longitudinal stability (d≥1.21 vs. d = 0.23) and histological gold standard agreement (R≥0.66 vs. R≥0.31). Voxel-wise analysis suggested a physiologically implausible longitudinal decrease when global mean scaling was used. The hindbrain white matter reference (mean R = 0.75) was slightly superior to the brainstem (mean R = 0.74) and the cerebellum (mean R = 0.73). Automated brain normalization with reference region templates presents an excellent method to avoid inter-reader variability in preclinical Aβ-PET scans. Intracerebral reference regions lacking Aβ pathology serve for precise longitudinal in vivo quantification of [(18)F]-florbetaben PET. The hindbrain white matter reference performed best when considering the composite of quality criteria.
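
    The SUVR endpoint used throughout is simply mean uptake in the target region divided by mean uptake in the reference region. A schematic computation, assuming the SUV image and region masks are available as numpy arrays:

        import numpy as np

        def suvr(suv_img, target_mask, reference_mask):
            """Standardized uptake value ratio: mean target SUV over mean SUV
            in a reference region (e.g., hindbrain white matter)."""
            return suv_img[target_mask].mean() / suv_img[reference_mask].mean()

        # Toy volume: uniform reference, slightly elevated 'amyloid' cortex
        img = np.ones((32, 32, 32)); img[8:16] = 1.4
        ctx = np.zeros(img.shape, bool); ctx[8:16] = True
        ref = np.zeros(img.shape, bool); ref[24:] = True
        print(f"SUVR = {suvr(img, ctx, ref):.2f}")  # -> 1.40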

  14. Automated detection and quantitation of bacterial RNA by using electrical microarrays.

    PubMed

    Elsholz, B; Wörl, R; Blohm, L; Albers, J; Feucht, H; Grunwald, T; Jürgen, B; Schweder, T; Hintsche, Rainer

    2006-07-15

    Low-density electrical 16S rRNA-specific oligonucleotide microarrays and an automated analysis system have been developed for the identification and quantitation of pathogens. The pathogens are Escherichia coli, Pseudomonas aeruginosa, Enterococcus faecalis, Staphylococcus aureus, and Staphylococcus epidermidis, which are typically involved in urinary tract infections. Interdigitated gold array electrodes (IDA electrodes), which have structures in the nanometer range, have been used for very sensitive analysis. Thiol-modified oligonucleotides are immobilized on the gold IDA as capture probes. They mediate the specific recognition of the target 16S rRNA by hybridization. Additionally, three unlabeled oligonucleotides are hybridized in close proximity to the capturing site. They are supporting molecules, because they improve the RNA hybridization at the capturing site. A biotin-labeled detector oligonucleotide is also allowed to hybridize to the captured RNA sequence. The biotin labels enable the binding of avidin-alkaline phosphatase conjugates. The phosphatase liberates the electrochemical mediator p-aminophenol from its electrically inactive phosphate derivative. The electrical signals were generated by amperometric redox cycling and detected by a unique multipotentiostat. The read-out signals of the microarray are position-specific currents that change over time in proportion to the analyte concentration. If two additional biotins are introduced into the affinity binding complex via the supporting oligonucleotides, the sensitivity of the assay increases by more than 60%. The limit of detection of Escherichia coli total RNA has been determined to be 0.5 ng/µL. The control of fluidics for variable assay formats, as well as the multichannel electrical read-out and data handling, have all been fully automated. The fast and easy procedure does not require any amplification of the targeted nucleic acids by PCR.

  15. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging

    PubMed Central

    Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.

    2016-01-01

    Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909

  16. High-Throughput Method for Automated Colony and Cell Counting by Digital Image Analysis Based on Edge Detection

    PubMed Central

    Choudhry, Priya

    2016-01-01

    Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro, Cell Colony Edge, and a CellProfiler pipeline, Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony-forming assays, and cellular assays. PMID:26848849
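
    Edge-based counting of this kind can be approximated with standard open-source primitives: detect edges, close them into filled blobs, then label and count connected components. A generic scikit-image/SciPy sketch (not the published macro or pipeline):

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import feature, morphology

        def count_colonies(gray, min_area=20):
            edges = feature.canny(gray, sigma=2)     # edge detection
            blobs = ndi.binary_fill_holes(morphology.binary_closing(edges))
            blobs = morphology.remove_small_objects(blobs, min_area)
            _, n = ndi.label(blobs)
            return n

        # Synthetic plate: two disk-shaped 'colonies' on a flat background
        yy, xx = np.ogrid[:200, :200]
        img = np.zeros((200, 200))
        img[(yy - 60) ** 2 + (xx - 60) ** 2 < 15 ** 2] = 1.0
        img[(yy - 140) ** 2 + (xx - 140) ** 2 < 10 ** 2] = 1.0
        print(count_colonies(img))  # -> 2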

  17. Automated GC-MS analysis of free amino acids in biological fluids.

    PubMed

    Kaspar, Hannelore; Dettmer, Katja; Gronwald, Wolfram; Oefner, Peter J

    2008-07-15

    A gas chromatography-mass spectrometry (GC-MS) method was developed for the quantitative analysis of free amino acids as their propyl chloroformate derivatives in biological fluids. Derivatization with propyl chloroformate is carried out directly in the biological samples without prior protein precipitation or solid-phase extraction of the amino acids, thereby allowing automation of the entire procedure, including addition of reagents, extraction and injection into the GC-MS. The total analysis time was 30 min, and 30 amino acids could be reliably quantified using 19 stable isotope-labeled amino acids as internal standards. Limits of detection (LOD) and lower limits of quantification (LLOQ) were in the range of 0.03-12 µM and 0.3-30 µM, respectively. The method was validated using a certified amino acid standard and reference plasma, and its applicability to different biological fluids was shown. Intra-day precision for the analysis of human urine, blood plasma, and cell culture medium was 2.0-8.8%, 0.9-8.3%, and 2.0-14.3%, respectively, while the inter-day precision for human urine was 1.5-14.1%.

  18. AutoQSAR: an automated machine learning tool for best-practice quantitative structure-activity relationship modeling.

    PubMed

    Dixon, Steven L; Duan, Jianxin; Smith, Ethan; Von Bargen, Christopher D; Sherman, Woody; Repasky, Matthew P

    2016-10-01

    We introduce AutoQSAR, an automated machine-learning application to build, validate and deploy quantitative structure-activity relationship (QSAR) models. The process of descriptor generation, feature selection and the creation of a large number of QSAR models has been automated into a single workflow within AutoQSAR. The models are built using a variety of machine-learning methods, and each model is scored using a novel approach. Effectiveness of the method is demonstrated through comparison with literature QSAR models using identical datasets for six endpoints: protein-ligand binding affinity, solubility, blood-brain barrier permeability, carcinogenicity, mutagenicity and bioaccumulation in fish. AutoQSAR demonstrates similar or better predictive performance compared with published results for four of the six endpoints, while requiring minimal human time and expertise.
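
    The workflow it automates (descriptor generation, feature selection, model building, scoring) can be pictured in miniature with scikit-learn. The sketch below is a generic stand-in for the proprietary pipeline, using random data in place of real descriptors and activities:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import VarianceThreshold
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 50))                        # mock descriptors
        y = 2 * X[:, 0] - X[:, 3] + rng.normal(0, 0.5, 200)   # mock activity

        model = make_pipeline(VarianceThreshold(0.1),
                              RandomForestRegressor(n_estimators=200, random_state=0))
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"cross-validated R^2: {scores.mean():.2f}")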

  19. Automated High-Throughput Quantification of Mitotic Spindle Positioning from DIC Movies of Caenorhabditis Embryos

    PubMed Central

    Cluet, David; Spichty, Martin; Delattre, Marie

    2014-01-01

    The mitotic spindle is a microtubule-based structure that elongates to accurately segregate chromosomes during anaphase. Its position within the cell also dictates the future cell cleavage plan, thereby determining daughter cell orientation within a tissue or cell fate adoption for polarized cells. Therefore, the mitotic spindle ensures at the same time proper cell division and developmental precision. Consequently, spindle dynamics is the subject of intensive research. Among the different cellular models that have been explored, the one-cell stage C. elegans embryo has been an essential and powerful system to dissect the molecular and biophysical basis of spindle elongation and positioning. Indeed, in this large and transparent cell, spindle poles (or centrosomes) can be easily detected from simple DIC microscopy by human eyes. To perform quantitative and high-throughput analysis of spindle motion, we developed ACT, a computer program for Automated Centrosome Tracking from DIC movies of C. elegans embryos. We therefore offer an alternative to the image acquisition and processing of transgenic lines expressing fluorescent spindle markers. Consequently, experiments on large sets of cells can be performed with a simple setup using inexpensive microscopes. Moreover, analysis of any mutant or wild-type background is accessible because laborious rounds of crosses with transgenic lines become unnecessary. Last, our program allows spindle detection in other nematode species that offer the same quality of DIC images but for which transgenesis techniques are not accessible. Thus, our program also opens the way towards a quantitative evolutionary approach to spindle dynamics. Overall, our computer program is a unique macro for the image- and movie-processing platform ImageJ. It is user-friendly and freely available under an open-source licence. ACT allows batch-wise analysis of large sets of mitosis events. Within 2 minutes, a single movie is processed, and the accuracy of the automated tracking matches the precision of the human eye. PMID:24763198

  20. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and Williams index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  1. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    PubMed

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative as well as quantitative methods to relate brain structure with neuropsychological outcome and are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder and the timing of when image analysis methods are applied to assess brain structure and pathology. A basic overview is provided as to the anatomical and pathoanatomical relations of different MRI sequences in assessing normal and abnormal findings. Some interpretive guidelines are offered including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.

  2. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    PubMed

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm² and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, CD3 quantitation algorithms produced adequate accuracy. Additionally, CD3 quantitation algorithms correlated between each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
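
    The custom "CD3+%" metric (threshold the stain, report the positive-pixel fraction) is easy to reproduce generically. An illustrative sketch using Ruifrok-Johnston color deconvolution to isolate a DAB channel; it assumes scikit-image, is not the authors' ImageJ code, and the threshold is arbitrary:

        import numpy as np
        from skimage.color import rgb2hed

        def cd3_positive_percent(rgb_img, dab_threshold=0.05):
            """Percent of pixels whose DAB (brown) stain density exceeds a
            threshold after H-DAB color deconvolution."""
            dab = rgb2hed(rgb_img)[..., 2]  # channel 2 = DAB density
            return 100.0 * (dab > dab_threshold).mean()

        rng = np.random.default_rng(0)
        img = rng.random((64, 64, 3))  # stand-in for a scanned WSI tile
        print(f"{cd3_positive_percent(img):.1f}% CD3+ pixels")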

  3. [Morphometry of pulmonary tissue: From manual to high throughput automation].

    PubMed

    Sallon, C; Soulet, D; Tremblay, Y

    2017-12-01

    Weibel's research has shown that any alteration of the pulmonary structure has effects on function. This demonstration required a quantitative analysis of lung structures, called morphometry. This is possible thanks to stereology, a set of methods based on principles of geometry and statistics. His work has helped to better understand the morphological harmony of the lung, which is essential for its proper functioning. An imbalance leads to pathophysiology such as chronic obstructive pulmonary disease in adults and bronchopulmonary dysplasia in neonates. It is by studying this imbalance that new therapeutic approaches can be developed. These advances are achievable only through morphometric analytical methods, which are increasingly precise and focused, in particular thanks to the high-throughput automation of these methods. This review makes a comparison between an automated method that we developed in the laboratory and semi-manual methods of morphometric analysis. The automation of morphometric measurements is a fundamental asset in the study of pulmonary pathophysiology because it is an assurance of robustness, reproducibility and speed. This tool will thus contribute significantly to the acceleration of the race for the development of new drugs. Copyright © 2017 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  4. Using a normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used to replace the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from a well-defined standard 3D brain model, sparing the slice-by-slice manual drawing of ROIs from structural medical images required in the traditional procedure. The method can thus not only provide precise analysis results but also improve processing throughput for large volumes of medical images in clinical practice.
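
    The mutual information objective behind the registration step is compact to write down: it measures the statistical dependence of the joint intensity histogram of the two images, and registration searches for the transform that maximizes it. A sketch of the objective itself (not the full optimization, and not the authors' implementation):

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Mutual information (nats) between two equally shaped images."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0  # avoid log(0)
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        rng = np.random.default_rng(0)
        mri = rng.random((64, 64))
        spect = 0.7 * mri + 0.3 * rng.random((64, 64))  # correlated modality
        print(mutual_information(mri, spect))           # higher when aligned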

  5. Automated selected reaction monitoring data analysis workflow for large-scale targeted proteomic studies.

    PubMed

    Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi

    2013-08-01

    Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.

  6. Combined Falling Drop/Open Port Sampling Interface System for Automated Flow Injection Mass Spectrometry

    DOE PAGES

    Van Berkel, Gary J.; Kertesz, Vilmos; Orcutt, Matt; ...

    2017-11-07

    The aim of this work was to demonstrate and to evaluate the analytical performance of a combined falling drop/open port sampling interface (OPSI) system as a simple noncontact, no-carryover, automated system for flow injection analysis with mass spectrometry. The falling sample drops were introduced into the OPSI using a widely available autosampler platform utilizing low-cost disposable pipet tips and conventional disposable microtiter well plates. The volume of the drops that fell onto the OPSI was in the 7–15 μL range, with an injected sample volume of several hundred nanoliters. Sample drop height, positioning of the internal capillary on the sampling end of the probe, and carrier solvent flow rate were optimized for maximum signal. Sample throughput, signal reproducibility, matrix effects, and quantitative analysis capability of the system were established using the drug molecule propranolol and its isotope-labeled internal standard in water, unprocessed river water and two commercially available buffer matrices. A sample-to-sample throughput of ~45 s with a ~4.5 s base-to-base flow injection peak profile was obtained in these experiments. In addition, quantitation with minimally processed rat plasma samples was demonstrated with three different statin drugs (atorvastatin, rosuvastatin, and fluvastatin). Direct characterization capability for unprocessed samples was demonstrated by the analysis of neat vegetable oils. Employing the autosampler system for spatially resolved liquid extraction surface sampling, exemplified by the analysis of propranolol and its hydroxypropranolol glucuronide phase II metabolites from a rat thin tissue section, was also illustrated.

  7. Combined Falling Drop/Open Port Sampling Interface System for Automated Flow Injection Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Berkel, Gary J.; Kertesz, Vilmos; Orcutt, Matt

    The aim of this work was to demonstrate and evaluate the analytical performance of a combined falling drop/open port sampling interface (OPSI) system as a simple, noncontact, no-carryover, automated system for flow injection analysis with mass spectrometry. The falling sample drops were introduced into the OPSI using a widely available autosampler platform with low-cost disposable pipet tips and conventional disposable microtiter well plates. The volume of the drops that fell onto the OPSI was in the 7–15 μL range, with an injected sample volume of several hundred nanoliters. Sample drop height, positioning of the internal capillary on the sampling end of the probe, and carrier solvent flow rate were optimized for maximum signal. Sample throughput, signal reproducibility, matrix effects, and quantitative analysis capability of the system were established using the drug molecule propranolol and its isotope-labeled internal standard in water, unprocessed river water, and two commercially available buffer matrices. A sample-to-sample throughput of ~45 s with a ~4.5 s base-to-base flow injection peak profile was obtained in these experiments. In addition, quantitation with minimally processed rat plasma samples was demonstrated with three different statin drugs (atorvastatin, rosuvastatin, and fluvastatin). Direct characterization of unprocessed samples was demonstrated by the analysis of neat vegetable oils. Use of the autosampler system for spatially resolved liquid extraction surface sampling was also illustrated by the analysis of propranolol and its hydroxypropranolol glucuronide phase II metabolites from a rat thin tissue section.

  8. TH-AB-207A-05: A Fully-Automated Pipeline for Generating CT Images Across a Range of Doses and Reconstruction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: To evaluate the robustness of CAD or quantitative imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; these data are used to simulate different levels of dose reduction using a previously published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely available reconstruction software package for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional reconstruction methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture. Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; past recipient, research grant support, Siemens Healthcare; consultant, Toshiba America Medical Systems; consultant, Samsung Electronics.
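
    A minimal sketch of the kind of batch driver the abstract describes: enumerate dose/reconstruction combinations and dispatch each case in parallel. The `simulate_dose` and `reconstruct` functions are hypothetical placeholders; the actual FreeCT-wFBP invocation and the dose-reduction algorithm are not reproduced here.

```python
# Sketch of a config-driven batch driver in the spirit of the described
# pipeline: enumerate dose/reconstruction combinations and dispatch each
# as an independent job. simulate_dose() and reconstruct() are
# hypothetical stand-ins for the actual simulation and FreeCT-wFBP calls.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

DOSE_FRACTIONS = [1.0, 0.5, 0.25, 0.10]          # fraction of original dose
RECON_METHODS = ["wfbp_smooth", "wfbp_sharp", "iterative_icd"]

def simulate_dose(raw_projections: str, fraction: float) -> str:
    """Placeholder: add noise to raw projection data to emulate a lower dose."""
    return f"{raw_projections}.dose{int(fraction * 100)}"

def reconstruct(projections: str, method: str) -> str:
    """Placeholder: invoke the chosen reconstruction on the projection file."""
    return f"{projections}.{method}.img"

def run_case(args):
    fraction, method = args
    reduced = simulate_dose("scan_raw.ptr", fraction)  # hypothetical input file
    return reconstruct(reduced, method)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:  # parallel, as on a research cluster
        for image in pool.map(run_case, product(DOSE_FRACTIONS, RECON_METHODS)):
            print("generated", image)
```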

  9. A Novel ImageJ Macro for Automated Cell Death Quantitation in the Retina

    PubMed Central

    Maidana, Daniel E.; Tsoka, Pavlina; Tian, Bo; Dib, Bernard; Matsumoto, Hidetaka; Kataoka, Keiko; Lin, Haijiang; Miller, Joan W.; Vavvas, Demetrios G.

    2015-01-01

    Purpose: TUNEL assay is widely used to evaluate cell death. Quantification of TUNEL-positive (TUNEL+) cells in tissue sections is usually performed manually, ideally by two masked observers. This process is time consuming, prone to measurement errors, and not entirely reproducible. In this paper, we describe an automated quantification approach to address these difficulties. Methods: We developed an ImageJ macro to quantitate cell death by TUNEL assay in retinal cross-section images. The script was coded using the IJ1 programming language. To validate this tool, we selected a dataset of TUNEL assay digital images, calculated layer area and cell count manually (done by two observers), and compared measurements between observers and macro results. Results: The automated macro segmented the outer nuclear layer (ONL) and inner nuclear layer (INL) successfully. Automated TUNEL+ cell counts were in between the counts of inexperienced and experienced observers. The intraobserver coefficient of variation (COV) ranged from 13.09% to 25.20%. The COV between both observers was 51.11 ± 25.83% for the ONL and 56.07 ± 24.03% for the INL. Comparing observers' results with macro results, the COV was 23.37 ± 15.97% for the ONL and 23.44 ± 18.56% for the INL. Conclusions: We developed and validated an ImageJ macro that can be used as an accurate and precise quantitative tool for retina researchers to achieve repeatable, unbiased, fast, and accurate cell death quantitation. We believe that this standardized measurement tool could be advantageous for comparing results across different research groups, as it is freely available as open source. PMID:26469755
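
    The published macro is written in IJ1; the sketch below illustrates the same threshold-segment-count idea in Python with scikit-image, plus the coefficient-of-variation comparison used in the validation. The synthetic image, filter sizes, and counts are illustrative assumptions.

```python
# Threshold-segment-count sketch (scikit-image), NOT the published IJ1
# macro. A synthetic image stands in for a TUNEL-stained retinal section.
import numpy as np
from skimage import filters, measure, morphology

rng = np.random.default_rng(0)
image = rng.random((256, 256)) * 0.2
for y, x in rng.integers(20, 236, size=(40, 2)):   # 40 synthetic nuclei
    image[y - 4:y + 4, x - 4:x + 4] += 0.8

mask = image > filters.threshold_otsu(image)               # global Otsu threshold
mask = morphology.remove_small_objects(mask, min_size=20)  # drop small debris

labels = measure.label(mask)
print("TUNEL+ cell count:", labels.max())

# Between-observer coefficient of variation, as used in the validation
# (the two counts are hypothetical).
counts = np.array([102, 87])
print("COV: %.1f%%" % (counts.std(ddof=1) / counts.mean() * 100))
```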

  10. A novel scheme for the validation of an automated classification method for epileptic spikes by comparison with multiple observers.

    PubMed

    Sharma, Niraj K; Pedreira, Carlos; Centeno, Maria; Chaudhary, Umair J; Wehner, Tim; França, Lucas G S; Yadee, Tinonkorn; Murta, Teresa; Leite, Marco; Vos, Sjoerd B; Ourselin, Sebastien; Diehl, Beate; Lemieux, Louis

    2017-07-01

    To validate the application of an automated neuronal spike classification algorithm, Wave_clus (WC), to interictal epileptiform discharges (IEDs) obtained from human intracranial EEG (icEEG) data. Five 10-min segments of icEEG recorded in 5 patients were used. WC and three expert EEG reviewers independently classified one hundred IED events into IED classes or non-IEDs. First, we determined whether WC-human agreement variability falls within inter-reviewer agreement variability by calculating the variation of information for each classifier pair and quantifying the overlap between all WC-reviewer and all reviewer-reviewer pairs. Second, we compared WC and EEG reviewers' spike identification and individual spike class labels visually and quantitatively. The overlap between all WC-human pairs and all human pairs was >80% for 3/5 patients and >58% for the other 2 patients, demonstrating that WC falls within inter-human variation. The average sensitivity of spike marking was 91% for WC and >87% for all three EEG reviewers. Finally, there was a strong visual and quantitative similarity between WC and EEG reviewers. WC's performance is indistinguishable from that of the EEG reviewers, suggesting it could be a valid clinical tool for the assessment of IEDs. WC can be used to provide quantitative analysis of epileptic spikes. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
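
    A minimal sketch of the variation-of-information calculation used to compare classifier pairs, using the standard identity VI(X;Y) = H(X|Y) + H(Y|X) estimated from the joint label distribution. The example label vectors are hypothetical.

```python
# Sketch of the variation-of-information comparison between two labelings:
# VI(X;Y) = H(X|Y) + H(Y|X), estimated from the joint label distribution.
import numpy as np

def variation_of_information(x, y):
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    joint = {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0.0) + 1.0 / n
    px = {v: float(np.mean(x == v)) for v in set(x.tolist())}
    py = {v: float(np.mean(y == v)) for v in set(y.tolist())}
    vi = 0.0
    for (xi, yi), pxy in joint.items():
        # accumulates -p(x,y)*log p(x|y) and -p(x,y)*log p(y|x)
        vi -= pxy * (np.log(pxy / px[xi]) + np.log(pxy / py[yi]))
    return vi

wc       = [1, 1, 2, 2, 0, 1, 2, 0, 1, 2]   # algorithm's class labels (toy)
reviewer = [1, 1, 2, 0, 0, 1, 2, 0, 2, 2]   # one expert's class labels (toy)
print("VI =", round(variation_of_information(wc, reviewer), 3))
```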

  11. Automated characterization of normal and pathologic lung tissue by topological texture analysis of multidetector CT

    NASA Astrophysics Data System (ADS)

    Boehm, H. F.; Fink, C.; Becker, C.; Reiser, M.

    2007-03-01

    Reliable and accurate methods for objective quantitative assessment of parenchymal alterations in the lung are necessary for diagnosis, treatment and follow-up of pulmonary diseases. Two major types of alterations are pulmonary emphysema and fibrosis, emphysema being characterized by abnormal enlargement of the air spaces distal to the terminal, nonrespiratory bronchiole, accompanied by destructive changes of the alveolar walls. The main characteristic of fibrosis is coarsening of the interstitial fibers and compaction of the pulmonary tissue. With its ability to display anatomy free from superimposing structures and with greater visual clarity, multidetector CT has been shown to be more sensitive than the chest radiograph in identifying alterations of lung parenchyma. In automated evaluation of pulmonary CT scans, quantitative image processing techniques are applied for objective evaluation of the data. A number of methods have been proposed in the past, most of which utilize simple densitometric tissue features based on mean X-ray attenuation coefficients expressed in Hounsfield Units [HU]. Due to partial volume effects, most density-based methodologies tend to fail, notably in cases where emphysema and fibrosis occur within narrow spatial limits. In this study, we propose a methodology based upon the topological assessment of the gray-level distribution in 3D image data of lung tissue, which provides a way of improving quantitative CT evaluation. Results are compared to the more established density-based methods.
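
    For context, the sketch below implements the kind of simple density-based measure the authors aim to improve on: the relative area of lung voxels below an emphysema threshold. The -950 HU cutoff is a commonly used literature value, not taken from this abstract, and the data are synthetic.

```python
# Sketch of a conventional density-based measure that topological texture
# analysis seeks to improve on: the fraction of lung voxels below an
# emphysema threshold (-950 HU is a common literature value; it is not
# specified in this abstract). Volume and mask are synthetic.
import numpy as np

def emphysema_index(hu_volume: np.ndarray, lung_mask: np.ndarray,
                    threshold_hu: float = -950.0) -> float:
    """Percentage of lung voxels with attenuation below threshold_hu."""
    lung_voxels = hu_volume[lung_mask]
    return 100.0 * np.mean(lung_voxels < threshold_hu)

# Toy example: a 3D volume of Hounsfield Units with an all-lung mask.
volume = np.random.normal(loc=-870, scale=60, size=(64, 64, 32))
mask = np.ones_like(volume, dtype=bool)
print(f"emphysema index: {emphysema_index(volume, mask):.1f}%")
```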

  12. Automation of laboratory testing for infectious diseases using the polymerase chain reaction-- our past, our present, our future.

    PubMed

    Jungkind, D

    2001-01-01

    While it is an extremely powerful and versatile assay method, the polymerase chain reaction (PCR) can be a labor-intensive process. Since the advent of commercial test kits from Roche and the semi-automated microwell Amplicor system, PCR has become an increasingly useful and widespread clinical tool. However, more widespread acceptance of molecular testing will depend upon automation that allows molecular assays to enter the routine clinical laboratory. The forces driving the need for automated PCR are the requirements for diagnosis and treatment of chronic viral diseases, economic pressures to develop more automated and less expensive test procedures similar to those in clinical chemistry laboratories, and a shortage in many areas of qualified laboratory personnel trained in the types of manual procedures used in past decades. The Roche COBAS AMPLICOR system has automated the amplification and detection process. Specimen preparation remains the most labor-intensive part of the PCR testing process, accounting for the majority of the hands-on time in most of the assays. A new automated specimen preparation system, the COBAS AmpliPrep, was evaluated. The system automatically releases the target nucleic acid and captures the target with specific oligonucleotide probes, which become attached to magnetic beads via a biotin-streptavidin binding reaction. Once attached to the beads, the target is purified and concentrated automatically. Results from 298 qualitative and 57 quantitative samples, representing a wide range of virus concentrations and analyzed after both COBAS AmpliPrep and manual specimen preparation, showed no significant difference in qualitative or quantitative hepatitis C virus (HCV) assay performance. The AmpliPrep instrument decreased the time required to prepare serum or plasma samples for HCV PCR to under 1 min per sample, a decrease of 76% compared to the manual specimen preparation method. Systems that can analyze more samples with higher throughput, and that can answer more questions about the nature of the microbes that we can presently only detect and quantitate, will be needed in the future.

  13. Exponential error reduction in pretransfusion testing with automation.

    PubMed

    South, Susan F; Casina, Tony S; Li, Lily

    2012-08-01

    Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity, and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow, and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps, ranging from 22 to 39, while automated G&S methods contained only six to eight steps. Corresponding to the number of process steps that required human interaction, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436, demonstrating a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials, and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
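
    A minimal sketch of the failure modes and effects bookkeeping behind RPN figures like those reported: each step's failure mode is scored, and the risk priority number is the product severity × occurrence × detectability (a common FMEA formulation; the paper's actual scales and scores are not given in the abstract, so the values below are hypothetical).

```python
# FMEA-style risk priority number bookkeeping: RPN = severity * occurrence
# * detectability per failure mode (a common formulation; scales and
# values here are hypothetical).
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detectability: int  # 1 (always caught) .. 10 (never caught)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detectability

manual_steps = [
    FailureMode("label tube by hand", 9, 5, 7),
    FailureMode("pipette patient sample", 8, 4, 6),
    FailureMode("read tube agglutination", 7, 5, 8),
]

total = sum(m.rpn for m in manual_steps)
print("total RPN for manual process:", total)
```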

  14. A New Automated Method and Sample Data Flow for Analysis of Volatile Nitrosamines in Human Urine*

    PubMed Central

    Hodgson, James A.; Seyler, Tiffany H.; McGahee, Ernest; Arnstein, Stephen; Wang, Lanqing

    2016-01-01

    Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and sidestream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to more efficiently quantitate these VNAs. Automation is done using Hamilton STAR™ and Caliper Staccato™ workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85%-111%). More importantly, this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMS output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in biomonitoring of VNAs in the US non-institutionalized population in the NHANES 2013-2014 cycle. PMID:26949569

  15. The effects of AVIRIS atmospheric calibration methodology on identification and quantitative mapping of surface mineralogy, Drum Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Dwyer, John L.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
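
    A minimal sketch of the linear spectral unmixing step mentioned in the abstract: each pixel spectrum is modeled as a nonnegative combination of endmember spectra and solved with nonnegative least squares. The endmember matrix and pixel spectrum below are synthetic.

```python
# Linear spectral unmixing sketch: solve pixel = E @ abundance with
# nonnegative least squares. Endmembers and pixel spectrum are synthetic.
import numpy as np
from scipy.optimize import nnls

n_bands, n_endmembers = 224, 3           # AVIRIS-like band count
rng = np.random.default_rng(0)

E = rng.random((n_bands, n_endmembers))  # columns: endmember reflectances
true_abundance = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abundance + rng.normal(0, 0.01, n_bands)

abundance, residual = nnls(E, pixel)
abundance /= abundance.sum()             # optional sum-to-one normalization
print("estimated abundances:", np.round(abundance, 3))
```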

  16. Quantification of EEG reactivity in comatose patients

    PubMed Central

    Hermans, Mathilde C.; Westover, M. Brandon; van Putten, Michel J.A.M.; Hirsch, Lawrence J.; Gaspard, Nicolas

    2016-01-01

    Objective: EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. Methods: In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of the probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. Results: The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha, and beta) showed the best accuracy (median AUC: 0.95) and was accompanied by substantial agreement with the individual opinion of experts (Gwet’s AC1: 65–70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-experts’ agreement regarding reactivity for each individual case. Conclusion: Automated quantitative EEG approaches based on a probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Significance: Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. PMID:26183757
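
    A minimal sketch of the underlying spectral comparison: band power in one-minute windows before and after stimulus onset, estimated with Welch periodograms. The paper's probabilistic combination of 13 parameters is not reproduced; the sampling rate, band edges, and data are conventional assumptions.

```python
# Pre/post-stimulation band power comparison via Welch PSDs; NOT the
# paper's probabilistic model. Sampling rate and data are synthetic.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment):
    freqs, psd = welch(segment, fs=FS, nperseg=4 * FS)
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(1)
pre = rng.normal(size=60 * FS)   # 1 min before stimulation (synthetic)
post = rng.normal(size=60 * FS)  # 1 min after stimulation (synthetic)

pre_p, post_p = band_powers(pre), band_powers(post)
for band in BANDS:
    # A ratio far from 1 indicates spectral change, i.e., reactivity.
    print(f"{band}: post/pre power ratio = {post_p[band] / pre_p[band]:.2f}")
```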

  17. The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.

    2010-01-01

    HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures in the operation of complex systems that affect availability and reliability. Modeling human actions and their corresponding failures in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and risk contributions. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, equipment design, and the need for automation.

  18. clusterProfiler: an R package for comparing biological themes among gene clusters.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Han, Yanyan; He, Qing-Yu

    2012-05-01

    Increasing amounts of quantitative data generated from transcriptomics and proteomics require integrative strategies for analysis. Here, we present an R package, clusterProfiler, that automates the process of biological-term classification and the enrichment analysis of gene clusters. The analysis module and visualization module were combined into a reusable workflow. Currently, clusterProfiler supports three species: humans, mice, and yeast. Methods provided in this package can be easily extended to other species and ontologies. The clusterProfiler package is released under the Artistic-2.0 License within the Bioconductor project. The source code and vignette are freely available at http://bioconductor.org/packages/release/bioc/html/clusterProfiler.html.
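
    clusterProfiler itself is an R package; the sketch below shows, in Python, the core over-representation test that such enrichment tools automate: the hypergeometric tail probability that a gene cluster contains at least k genes from an annotation term by chance. All counts are hypothetical.

```python
# Core over-representation test behind enrichment tools such as
# clusterProfiler: hypergeometric upper-tail probability. Counts are
# hypothetical.
from scipy.stats import hypergeom

N = 20000   # genes in the background universe
K = 150     # background genes annotated with the term
n = 300     # genes in the cluster of interest
k = 12      # cluster genes carrying the annotation

# P(X >= k) for X ~ Hypergeom(N, K, n); sf(k - 1) gives the upper tail.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.2e}")
```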

  19. Design and implementation of software for automated quality control and data analysis for a complex LC/MS/MS assay for urine opiates and metabolites.

    PubMed

    Dickerson, Jane A; Schmeling, Michael; Hoofnagle, Andrew N; Hoffman, Noah G

    2013-01-16

    Mass spectrometry provides a powerful platform for performing quantitative, multiplexed assays in the clinical laboratory, but at the cost of increased complexity of analysis and quality assurance calculations compared to other methodologies. Here we describe the design and implementation of a software application that performs quality control calculations for a complex, multiplexed, mass spectrometric analysis of opioids and opioid metabolites. The development and implementation of this application improved our data analysis and quality assurance processes in several ways. First, use of the software significantly improved the procedural consistency for performing quality control calculations. Second, it reduced the amount of time technologists spent preparing and reviewing the data, saving on average over four hours per run, and in some cases improving turnaround time by a day. Third, it provides a mechanism for coupling procedural and software changes with the results of each analysis. We describe several key details of the implementation including the use of version control software and automated unit tests. These generally useful software engineering principles should be considered for any software development project in the clinical lab. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose: To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods: With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with the Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results: Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion: Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.
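
    A minimal sketch of the Bland-Altman limits-of-agreement computation used to compare density estimates across dose levels: bias is the mean paired difference and the limits are bias ± 1.96 standard deviations of the differences. The paired measurements below are synthetic.

```python
# Bland-Altman agreement sketch for paired density estimates at two dose
# levels: bias = mean difference, limits = bias +/- 1.96 * SD. Synthetic data.
import numpy as np

rng = np.random.default_rng(42)
standard_dose = rng.uniform(5, 60, size=100)          # percent density
low_dose = standard_dose + rng.normal(0, 1.5, 100)    # paired estimates

diff = low_dose - standard_dose
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

print(f"bias: {bias:+.2f}%")
print(f"95% limits of agreement: [{bias - loa:.2f}%, {bias + loa:.2f}%]")
```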

  1. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose: To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods: With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with the Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results: Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion: Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article. PMID:27002418

  2. ICSH recommendations for assessing automated high-performance liquid chromatography and capillary electrophoresis equipment for the quantitation of HbA2.

    PubMed

    Stephens, A D; Colah, R; Fucharoen, S; Hoyer, J; Keren, D; McFarlane, A; Perrett, D; Wild, B J

    2015-10-01

    Automated high-performance liquid chromatography and capillary electrophoresis are used to quantitate the proportion of hemoglobin A2 (HbA2) in blood samples in order to enable screening and diagnosis of carriers of β-thalassemia. Since there is only a very small difference in HbA2 levels between people who are carriers and people who are not, such analyses need to be both precise and accurate. This paper examines the different parameters of such equipment and discusses how they should be assessed. © 2015 John Wiley & Sons Ltd.

  3. Automated digital volume measurement of melanoma metastases in sentinel nodes predicts disease recurrence and survival.

    PubMed

    Riber-Hansen, Rikke; Nyengaard, Jens R; Hamilton-Dutoit, Stephen J; Sjoegren, Pia; Steiniche, Torben

    2011-09-01

    Total metastatic volume (TMV) is an important prognostic factor in melanoma sentinel lymph nodes (SLNs) that avoids both the interobserver variation and unidirectional upstaging seen when using semi-quantitative size estimates. However, it is somewhat laborious for routine application. Our aim was to investigate whether digital image analysis can estimate TMV accurately in melanoma SLNs. TMV was measured in 147 SLNs from 95 patients both manually and by automated digital image analysis. The results were compared by Bland-Altman plots (numerical data) and kappa statistics (categorical data). In addition, disease-free and melanoma-specific survivals were calculated. Mean metastatic volume per patient was 10.6 mm³ (median 0.05 mm³; range 0.0001-621.3 mm³) and 9.62 mm³ (median 0.05 mm³; range 0.00001-564.3 mm³) with manual and digital measurement, respectively. The Bland-Altman plot showed an even distribution of the differences, and the kappa statistic was 0.84. In multivariate analysis, both manual and digital metastasis volume measurements were independent progression markers when corrected for primary tumour thickness [manual: hazard ratio (HR): 1.21, 95% confidence interval (CI): 1.07-1.36, P = 0.002; digital: HR: 1.21, 95% CI: 1.06-1.37, P = 0.004]. Stereology-based, automated digital metastasis volume measurement in melanoma SLNs predicts disease recurrence and survival. © 2011 Blackwell Publishing Limited.

  4. Analysis of Four Automated Urinalysis Systems Compared to Reference Methods.

    PubMed

    Bartosova, Kamila; Kubicek, Zdenek; Franekova, Janka; Louzensky, Gustav; Lavrikova, Petra; Jabor, Antonin

    2016-11-01

    The aim of this study was to compare four automated urinalysis systems: the Iris iQ200 Sprint (Iris Diagnostics, U.S.A.) combined with the Arkray AUTION MAX AX 4030 ("Iris + AUTION"), the Arkray AU 4050 (Arkray Global Business, Inc., Japan), the Dirui FUS 2000 (Dirui Industrial Co., P.R.C.), and the Menarini sediMAX (Menarini, Italy). Urine concentrations of protein and glucose (Iris, Dirui) were compared using reference quantitative analysis on an Abbott Architect c16000. Leukocytes, erythrocytes, epithelia, and casts (Iris, Arkray, Dirui, Menarini) were compared to urine sediment under reference light microscopy, a Leica DM2000 (Leica Microsystems GmbH, Germany) with calibrated FastRead plates (Biosigma S.r.l., Italy), using both native and stained preparations. Total protein and glucose levels were measured by the Iris + AUTION system with borderline trueness, while the Dirui analysis showed worse performance for the protein and glucose measurements. True classification of leukocytes and erythrocytes was above 85% and 72%, respectively. Kappa statistics revealed a nearly perfect evaluation of leukocytes for all tested systems; the erythrocyte evaluation was nearly perfect for the Iris, Dirui, and Arkray analyzers and substantial for the Menarini analyzer. Epithelia identification was associated with high false negativity (above 15%) in the Iris, Arkray, and Menarini analyses. False-negative casts were above 70% for all tested systems. Automated urinalysis demonstrated some weaknesses, and its results should be checked by experienced laboratory staff using light microscopy.

  5. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    NASA Astrophysics Data System (ADS)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  6. Development and prospective evaluation of an automated software system for quality control of quantitative 99mTc-MAG3 renal studies.

    PubMed

    Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T

    2007-03-01

    Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.

  7. Client-server programs analysis in the EPOCA environment

    NASA Astrophysics Data System (ADS)

    Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano

    1996-09-01

    Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer has first to ensure that the implementation behaves correctly, in particular that it is deadlock-free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.

  8. Quantification of fibre polymerization through Fourier space image analysis

    PubMed Central

    Nekouzadeh, Ali; Genin, Guy M.

    2011-01-01

    Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
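
    A minimal sketch of the described measurement: take the 2D power spectral density of a grayscale image, keep an annulus of radial spatial frequencies corresponding to a fibre-thickness range, and integrate it. The band edges and the simple mean-subtraction intensity correction are placeholder assumptions.

```python
# Fourier-space fibre quantification sketch: integrate a band-pass annulus
# of the 2D power spectral density. Band edges and the crude intensity
# correction are placeholders; the image is synthetic.
import numpy as np

def fibre_content(image: np.ndarray, f_lo: float, f_hi: float) -> float:
    img = image - image.mean()                 # crude intensity correction
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    radius = np.hypot(fy, fx)                  # radial spatial frequency
    band = (radius >= f_lo) & (radius < f_hi)  # band-pass annulus
    return psd[band].sum() / psd.sum()         # normalized band energy

image = np.random.random((256, 256))           # stand-in for a fibre image
print(fibre_content(image, f_lo=0.05, f_hi=0.15))
```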

  9. [Automated morphometric evaluation of the chromatin structure of liver cell nuclei after vagotomy].

    PubMed

    Butusova, N N; Zhukotskiĭ, A V; Sherbo, I V; Gribkov, E N; Dubovaia, T K

    1989-05-01

    Morphometric analysis of the interphase chromatin structure of hepatic cell nuclei was carried out on an automated TV-based image analysis system, the IBAS-2 (OPTON, FRG), using 50 optical and geometric parameters at various times (1, 2, and 4 weeks) after vagotomy. The supramolecular organization of chromatin underwent the greatest changes one week after the operation, and changes in the granular component were more informative than changes in the nongranular component (a difference of 15-20%). It was also found that chromatin components differ in tinctorial properties, which evidently depend on the physicochemical characteristics of the chromatin under various functional conditions of the cell. Correlation analysis revealed a group of morphometric indices of chromatin structure that are highly correlated with the level of transcriptional activity of chromatin at various times after denervation; the correlation coefficients of these parameters are 0.85-0.97. In summary, vagal denervation of the liver causes changes in the morphofunctional organization of chromatin.

  10. High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.

    PubMed

    Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C

    2007-10-09

    High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house was employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system has proved to be an essential tool in our study. Cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using an IN Cell Analyzer 1000. A fully automated cellular image analysis system was developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering at a proper scale was applied to the cellular images to generate a local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical-model-based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
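
    A minimal sketch of the seed-detection idea in step (1): Gaussian filtering at a scale comparable to the nucleus radius leaves roughly one local intensity maximum per nucleus, which can then seed segmentation. The gradient-vector-field detector and the statistical splitting model are not reproduced; the image and scale are synthetic assumptions.

```python
# Gaussian smoothing + local-maxima seed detection sketch, illustrating
# step (1) of the described method only. Image and sigma are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.default_rng(7)
image = rng.random((256, 256))               # stand-in for a nuclei channel

smoothed = gaussian_filter(image, sigma=6)   # scale ~ nucleus radius
seeds = peak_local_max(smoothed, min_distance=10)

print("detected nucleus seeds:", len(seeds))
```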

  11. CRAFT (complete reduction to amplitude frequency table)--robust and time-efficient Bayesian approach for quantitative mixture analysis by NMR.

    PubMed

    Krishnamurthy, Krish

    2013-12-01

    The intrinsic quantitative nature of NMR is increasingly exploited in areas ranging from complex mixture analysis (as in metabolomics and reaction monitoring) to quality assurance/control. Complex NMR spectra are more common than not, and therefore extraction of quantitative information generally involves significant prior knowledge and/or operator interaction to characterize resonances of interest. Moreover, in most NMR-based metabolomic experiments, the signals from metabolites are normally present as a mixture of overlapping resonances, making quantification difficult. Time-domain Bayesian approaches have been reported to be better than conventional frequency-domain analysis at identifying subtle changes in signal amplitude. We discuss an approach that exploits Bayesian analysis to achieve a complete reduction to amplitude frequency table (CRAFT) in an automated and time-efficient fashion, thus converting the time-domain FID to a frequency-amplitude table. CRAFT uses a two-step approach to FID analysis. First, the FID is digitally filtered and downsampled to several sub-FIDs; second, these sub-FIDs are modeled as sums of decaying sinusoids using the Bayesian approach. CRAFT tables can be used for further data mining of quantitative information using fingerprint chemical shifts of compounds of interest and/or statistical analysis of the modulation of chemical quantity in a biological study (metabolomics), a process study (reaction monitoring), or quality assurance/control. The basic principles behind this approach, as well as results evaluating its effectiveness in mixture analysis, are presented. Copyright © 2013 John Wiley & Sons, Ltd.
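
    CRAFT models each sub-FID as a sum of decaying sinusoids estimated by Bayesian analysis; as a simplified stand-in, the sketch below fits a single decaying sinusoid by nonlinear least squares. The data are synthetic, and least squares here replaces, rather than reproduces, the Bayesian step.

```python
# Simplified stand-in for CRAFT's sub-FID modeling: fit one decaying
# sinusoid by nonlinear least squares (NOT the Bayesian estimation).
import numpy as np
from scipy.optimize import curve_fit

def decaying_sinusoid(t, amplitude, freq, decay, phase):
    return amplitude * np.exp(-t / decay) * np.cos(2 * np.pi * freq * t + phase)

t = np.linspace(0, 1.0, 2000)                       # 1 s acquisition (toy)
truth = (5.0, 40.0, 0.25, 0.3)                      # amp, Hz, s, rad
fid = decaying_sinusoid(t, *truth) + np.random.normal(0, 0.2, t.size)

popt, _ = curve_fit(decaying_sinusoid, t, fid, p0=(1.0, 38.0, 0.5, 0.0))
print("amplitude, frequency, decay, phase =", np.round(popt, 3))
```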

  12. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  13. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  14. Automated High-Throughput Permethylation for Glycosylation Analysis of Biologics Using MALDI-TOF-MS.

    PubMed

    Shubhakar, Archana; Kozak, Radoslaw P; Reiding, Karli R; Royle, Louise; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    2016-09-06

    Monitoring glycoprotein therapeutics for changes in glycosylation throughout the drug's life cycle is vital, as glycans significantly modulate the stability, biological activity, serum half-life, safety, and immunogenicity. Biopharma companies are increasingly adopting Quality by Design (QbD) frameworks for measuring, optimizing, and controlling drug glycosylation. Permethylation of glycans prior to analysis by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is a valuable tool for glycan characterization and for screening of large numbers of samples in QbD drug realization. However, the existing protocols for manual permethylation and liquid-liquid extraction (LLE) steps are labor intensive and are thus not practical for high-throughput (HT) studies. Here we present a glycan permethylation protocol, based on 96-well microplates, that has been developed into a kit suitable for HT work. The workflow is largely automated using a liquid handling robot and includes N-glycan release, enrichment of N-glycans, permethylation, and LLE. The kit has been validated according to industry analytical performance guidelines and applied to characterize biopharmaceutical samples, including IgG4 monoclonal antibodies (mAbs) and recombinant human erythropoietin (rhEPO). The HT permethylation enabled glycan characterization and relative quantitation with minimal side reactions: the MALDI-TOF-MS profiles obtained were in good agreement with hydrophilic interaction liquid chromatography (HILIC) and ultrahigh performance liquid chromatography (UHPLC) data. Automated permethylation and extraction of 96 glycan samples was achieved in less than 5 h, and automated data acquisition on MALDI-TOF-MS took on average less than 1 min per sample. This automated and HT glycan preparation and permethylation proved convenient, fast, and reliable and can be applied for drug glycan profiling and clinical glycan biomarker studies.

  15. Optimization and automation of quantitative NMR data extraction.

    PubMed

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well-known, but relatively little attention has been applied to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum which makes it tolerant of cases where signals are very close to one another and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplets assignments. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.

  16. Development of a software for quantitative evaluation radiotherapy target and organ-at-risk segmentation comparison.

    PubMed

    Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D

    2014-02-01

    Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
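
    The abstract does not name the specific agreement metrics TaCTICS reports; the Dice overlap coefficient below is a standard contour-agreement measure for this kind of interobserver ROI comparison and is shown only as an illustration. The masks are synthetic.

```python
# Dice overlap sketch for interobserver ROI comparison (a standard metric;
# not necessarily the one TaCTICS reports). Masks are synthetic.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

observer_a = np.zeros((128, 128), dtype=bool)
observer_b = np.zeros((128, 128), dtype=bool)
observer_a[30:90, 30:90] = True    # one observer's target contour
observer_b[35:95, 33:93] = True    # a second observer's contour

print(f"Dice agreement: {dice(observer_a, observer_b):.3f}")
```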

  17. Three-dimensional morphological analysis of intracranial aneurysms: a fully automated method for aneurysm sac isolation and quantification.

    PubMed

    Larrabide, Ignacio; Cruz Villa-Uriol, Maria; Cárdenes, Rubén; Pozo, Jose Maria; Macho, Juan; San Roman, Luis; Blasco, Jordi; Vivas, Elio; Marzo, Alberto; Hose, D Rod; Frangi, Alejandro F

    2011-05-01

    Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection in intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, to automating the analysis, and hence to reducing the inherent intra- and interobserver variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy. This methodology is based on the analysis of the topology of the vasculature's skeleton and the subsequent application of concepts from deformable cylinders. These are expanded inside the parent vessel to identify different regions and to discriminate the aneurysm sac from the parent vessel wall. The method renders as output the surface representation of the isolated aneurysm sac, which can then be quantified automatically. The proposed method provides the means for identifying the aneurysm neck in a deterministic way. The results obtained by the method were assessed in two ways: they were compared to manual measurements obtained by three independent clinicians, as normally done during diagnosis, and to automated measurements from aneurysms manually isolated by three independent operators, nonclinicians who are experts in vascular image analysis. All the measurements were obtained using in-house tools. The results were qualitatively and quantitatively compared for a set of saccular intracranial aneurysms (n = 26). Measurements performed on a synthetic phantom showed that the automated measurements obtained from manually isolated aneurysms were the most accurate. The differences between the measurements obtained by the clinicians and those from the manually isolated sacs were statistically significant (neck width: p < 0.001, sac height: p = 0.002). When comparing clinicians' measurements to automatically isolated sacs, only the differences for the neck width were significant (neck width: p < 0.001, sac height: p = 0.95). However, correlation and agreement between the measurements obtained from manually and automatically isolated aneurysms were found (neck width: p = 0.43, sac height: p = 0.95). The proposed method allows the automated isolation of intracranial aneurysms, eliminating the interobserver variability. On average, the computational cost of the automated method (2 min 36 s) was similar to the time required by a manual operator (measurement by clinicians: 2 min 51 s, manual isolation: 2 min 21 s) while eliminating human interaction. The automated measurements are irrespective of the viewing angle, eliminating any bias or difference between observer criteria. Finally, the qualitative assessment of the results showed acceptable agreement between manually and automatically isolated aneurysms.

  18. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility, and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility, and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in the future.

  19. Reading the leaves: A comparison of leaf rank and automated areole measurement for quantifying aspects of leaf venation

    PubMed Central

    Green, Walton A.; Little, Stefan A.; Price, Charles A.; Wing, Scott L.; Smith, Selena Y.; Kotrc, Benjamin; Doria, Gabriela

    2014-01-01

    The reticulate venation that is characteristic of a dicot leaf has excited interest from systematists for more than a century, and from physiological and developmental botanists for decades. The tools of digital image acquisition and computer image analysis, however, are only now approaching the sophistication needed to quantify aspects of the venation network found in real leaves quickly, easily, accurately, and reliably enough to produce biologically meaningful data. In this paper, we examine 120 leaves distributed across vascular plants (representing 118 genera and 80 families) using two approaches: a semiquantitative scoring system called “leaf ranking,” devised by the late Leo Hickey, and an automated image-analysis protocol. In the process of comparing these approaches, we review some methodological issues that arise in trying to quantify a vein network, and discuss the strengths and weaknesses of automatic data collection and human pattern recognition. We conclude that subjective leaf rank provides a relatively consistent, semiquantitative measure of areole size among other variables; that modal areole size is generally consistent across large sections of a leaf lamina; and that both approaches—semiquantitative, subjective scoring; and fully quantitative, automated measurement—have appropriate places in the study of leaf venation. PMID:25202646

  20. Detection of lobular structures in normal breast tissue.

    PubMed

    Apou, Grégory; Schaadt, Nadine S; Naegel, Benoît; Forestier, Germain; Schönmeyer, Ralf; Feuerhake, Friedrich; Wemmert, Cédric; Grote, Anne

    2016-07-01

    Ongoing research into inflammatory conditions raises an increasing need to evaluate immune cells in histological sections in biologically relevant regions of interest (ROIs). Herein, we compare different approaches to automatically detect lobular structures in human normal breast tissue in digitized whole slide images (WSIs). This automation is required to perform objective and consistent quantitative studies on large data sets. In normal breast tissue from nine healthy patients immunohistochemically stained for different markers, we evaluated and compared three different image analysis methods to automatically detect lobular structures in WSIs: (1) a bottom-up approach using cell-based data for subsequent tissue-level classification, (2) a top-down method starting with texture classification at the tissue level, followed by analysis of cell densities in specific ROIs, and (3) direct texture classification using deep learning technology. All three methods result in comparable overall quality, allowing automated detection of lobular structures with minor advantages in sensitivity (approach 3), specificity (approach 2), or processing time (approach 1). Combining the outputs of the approaches further improved the precision. Different approaches to automated ROI detection are feasible and should be selected according to the individual needs of biomarker research. Additionally, detected ROIs could be used as a basis for quantification of immune infiltration in lobular structures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Crowdsourcing scoring of immunohistochemistry images: Evaluating Performance of the Crowd and an Automated Computational Method

    NASA Astrophysics Data System (ADS)

    Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.

    2017-02-01

    The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time consuming. Over the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores achieved greater concordance with pathologist interpretations for both the image-labeling and nuclei-labeling tasks (83% and 87%, respectively) than the automated method (81%) on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies.

  2. Automated cell counts on CSF samples: A multicenter performance evaluation of the GloCyte system.

    PubMed

    Hod, E A; Brugnara, C; Pilichowska, M; Sandhaus, L M; Luu, H S; Forest, S K; Netterwald, J C; Reynafarje, G M; Kratz, A

    2018-02-01

    Automated cell counters have replaced manual enumeration of cells in blood and most body fluids. However, due to the unreliability of automated methods at very low cell counts, most laboratories continue to perform labor-intensive manual counts on many or all cerebrospinal fluid (CSF) samples. This multicenter clinical trial investigated whether the GloCyte System (Advanced Instruments, Norwood, MA), a recently FDA-approved automated cell counter, which concentrates and enumerates red blood cells (RBCs) and total nucleated cells (TNCs), is sufficiently accurate and precise at very low cell counts to replace all manual CSF counts. The GloCyte System concentrates CSF and stains RBCs with fluorochrome-labeled antibodies and TNCs with nucleic acid dyes. RBCs and TNCs are then counted by digital image analysis. Residual adult and pediatric CSF samples obtained for clinical analysis at five different medical centers were used for the study. Cell counts were performed by the manual hemocytometer method and with the GloCyte System following the same protocol at all sites. The limits of the blank, detection, and quantitation, as well as the precision and accuracy of the GloCyte, were determined. The GloCyte detected as few as 1 TNC/μL and 1 RBC/μL, and reliably counted as few as 3 TNCs/μL and 2 RBCs/μL. The total coefficient of variation was less than 20%. Comparison with cell counts obtained with a hemocytometer showed good correlation (>97%) between the GloCyte and the hemocytometer, including at very low cell counts. The GloCyte instrument is a precise, accurate, and stable system to obtain red cell and nucleated cell counts in CSF samples. It allows for the automated enumeration of even very low cell numbers, which is crucial for CSF analysis. These results suggest that the GloCyte is an acceptable alternative to the manual method for all CSF samples, including those with normal cell counts. © 2017 John Wiley & Sons Ltd.
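
    For readers who want to see how performance characteristics of this kind are typically computed, the sketch below shows one common (CLSI EP17-style) way to derive a limit of blank, a limit of detection, and a total CV from replicate counts. The trial's exact statistical procedure is not given in the record, so treat these formulas and data as assumptions.

    # Illustrative LoB/LoD/CV computation under common CLSI-style assumptions.
    import numpy as np

    def limit_of_blank(blank_counts, z=1.645):
        # LoB = mean of blanks + 1.645 * SD of blanks (95th percentile, normal)
        return np.mean(blank_counts) + z * np.std(blank_counts, ddof=1)

    def limit_of_detection(lob, low_level_counts, z=1.645):
        # LoD = LoB + 1.645 * SD of a low-concentration sample
        return lob + z * np.std(low_level_counts, ddof=1)

    def total_cv(replicates):
        return np.std(replicates, ddof=1) / np.mean(replicates) * 100.0

    blanks = np.array([0, 1, 0, 0, 1, 0])   # TNC/uL on blank samples (toy data)
    lows = np.array([3, 4, 2, 3, 3, 4])     # TNC/uL near the detection limit
    lob = limit_of_blank(blanks)
    lod = limit_of_detection(lob, lows)
    print(lob, lod, total_cv(lows))         # the reported total CV stayed below 20%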

  3. A novel image-based quantitative method for the characterization of NETosis

    PubMed Central

    Zhao, Wenpu; Fogg, Darin K.; Kaplan, Mariana J.

    2015-01-01

    NETosis is a newly recognized mechanism of programmed neutrophil death. It is characterized by a stepwise progression of chromatin decondensation, membrane rupture, and release of bactericidal DNA-based structures called neutrophil extracellular traps (NETs). Conventional ‘suicidal’ NETosis has been described in pathogenic models of systemic autoimmune disorders. Recent in vivo studies suggest that a process of ‘vital’ NETosis also exists, in which chromatin is condensed and membrane integrity is preserved. Techniques to assess ‘suicidal’ or ‘vital’ NET formation in a specific, quantitative, rapid and semiautomated way have been lacking, hindering the characterization of this process. Here we have developed a new method to simultaneously assess both ‘suicidal’ and ‘vital’ NETosis, using high-speed multi-spectral imaging coupled to morphometric image analysis, to quantify spontaneous NET formation observed ex-vivo or stimulus-induced NET formation triggered in vitro. Use of imaging flow cytometry allows automated, quantitative and rapid analysis of subcellular morphology and texture, and introduces the potential for further investigation using NETosis as a biomarker in pre-clinical and clinical studies. PMID:26003624

  4. Introduction of an automated user-independent quantitative volumetric magnetic resonance imaging breast density measurement system using the Dixon sequence: comparison with mammographic breast density assessment.

    PubMed

    Wengert, Georg Johannes; Helbich, Thomas H; Vogl, Wolf-Dieter; Baltzer, Pascal; Langs, Georg; Weber, Michael; Bogner, Wolfgang; Gruber, Stephan; Trattnig, Siegfried; Pinker, Katja

    2015-02-01

    The purposes of this study were to introduce and assess an automated user-independent quantitative volumetric (AUQV) breast density (BD) measurement system on the basis of magnetic resonance imaging (MRI) using the Dixon technique as well as to compare it with qualitative and quantitative mammographic (MG) BD measurements. Forty-three women with normal mammogram results (Breast Imaging Reporting and Data System 1) were included in this institutional review board-approved prospective study. All participants underwent BD assessment with MRI using a Dixon-technique sequence (repetition time/echo times, 6 milliseconds/2.45 milliseconds/2.67 milliseconds; 1-mm isotropic resolution; acquisition time, 3 minutes 38 seconds). To test reproducibility, a second MRI was performed after patient repositioning. The AUQV magnetic resonance (MR) BD measurement system automatically calculated percentage (%) BD. The qualitative BD assessment was performed using the American College of Radiology Breast Imaging Reporting and Data System BD categories. Quantitative BD was estimated semiautomatically using the thresholding technique Cumulus4. Appropriate statistical tests were used to assess the agreement between the AUQV MR measurements and to compare them with qualitative and quantitative MG BD estimations. The AUQV MR BD measurements were successfully performed in all 43 women. There was nearly perfect agreement of AUQV MR BD measurements between the 2 MR examinations for % BD (P < 0.001; intraclass correlation coefficient, 0.998) with no significant differences (P = 0.384). The AUQV MR BD measurements were significantly lower than quantitative and qualitative MG BD assessment (P < 0.001). The AUQV MR BD measurement system allows a fully automated, user-independent, robust, reproducible, as well as radiation- and compression-free volumetric quantitative BD assessment through different levels of BD. The AUQV MR BD measurements were significantly lower than the currently used qualitative and quantitative MG-based approaches, implying that the current assessment might overestimate breast density with MG.
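
    The core arithmetic behind a Dixon-based percentage density estimate can be sketched as the water fraction within a breast mask, since the Dixon reconstruction separates water (fibroglandular) from fat signal. The actual AUQV system involves segmentation and further processing not shown here; all names below are illustrative.

    # Minimal sketch of a Dixon-based %BD estimate, assuming water and fat
    # images and a precomputed breast mask are available.
    import numpy as np

    def percent_breast_density(water_img, fat_img, breast_mask):
        water = water_img[breast_mask].astype(float)
        fat = fat_img[breast_mask].astype(float)
        fraction = water / np.maximum(water + fat, 1e-9)  # avoid divide-by-zero
        return 100.0 * fraction.mean()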

  5. Automated Cross-Sectional Measurement Method of Intracranial Dural Venous Sinuses.

    PubMed

    Lublinsky, S; Friedman, A; Kesler, A; Zur, D; Anconina, R; Shelef, I

    2016-03-01

    MRV is an important blood vessel imaging and diagnostic tool for the evaluation of stenosis, occlusions, or aneurysms. However, an accurate image-processing tool for vessel comparison is unavailable. The purpose of this study was to develop and test an automated technique for vessel cross-sectional analysis. An algorithm for vessel cross-sectional analysis was developed that included 7 main steps: 1) image registration, 2) masking, 3) segmentation, 4) skeletonization, 5) cross-sectional planes, 6) clustering, and 7) cross-sectional analysis. Phantom models were used to validate the technique. The method was also tested on a control subject and a patient with idiopathic intracranial hypertension (4 large sinuses tested: right and left transverse sinuses, superior sagittal sinus, and straight sinus). The cross-sectional area and shape measurements were evaluated before and after lumbar puncture in patients with idiopathic intracranial hypertension. The vessel-analysis algorithm had a high degree of stability, with <3% of cross-sections manually corrected. All investigated principal cranial blood sinuses had a significant cross-sectional area increase after lumbar puncture (P ≤ .05). The average triangularity of the transverse sinuses was increased, and the mean circularity of the sinuses was decreased by 6% ± 12% after lumbar puncture. Comparison of phantom and real data showed that all computed errors were <1 voxel unit, which confirmed that the method provided a very accurate solution. In this article, we present a novel automated imaging method for cross-sectional vessel analysis. The method can provide an efficient quantitative detection of abnormalities in the dural sinuses. © 2016 by American Journal of Neuroradiology.
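
    A rough 2D analogue of steps 4-7 of this pipeline can be sketched with standard tools: skeletonize the segmented vessel and estimate the local cross-sectional area from the distance transform at each centerline point. The published method operates on 3D MRV volumes with explicit cross-sectional planes; this sketch only conveys the idea.

    # Illustrative 2D sketch: segmented mask -> skeleton -> local radius from
    # the Euclidean distance transform -> approximate area as pi * r^2.
    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def centerline_areas(vessel_mask):
        """vessel_mask: 2D boolean array of a segmented vessel."""
        dist = distance_transform_edt(vessel_mask)  # distance to background
        skel = skeletonize(vessel_mask)             # 1-pixel-wide centerline
        radii = dist[skel]                          # local radius estimate
        return np.pi * radii ** 2                   # area per centerline point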

  6. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    USDA-ARS?s Scientific Manuscript database

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  7. Electrochemical Detection in Stacked Paper Networks.

    PubMed

    Liu, Xiyuan; Lillehoj, Peter B

    2015-08-01

    Paper-based electrochemical biosensors are a promising technology that enables rapid, quantitative measurements on an inexpensive platform. However, the control of liquids in paper networks is generally limited to a single sample delivery step. Here, we propose a simple method to automate the loading and delivery of liquid samples to sensing electrodes on paper networks by stacking multiple layers of paper. Using these stacked paper devices (SPDs), we demonstrate a unique strategy to fully immerse planar electrodes in aqueous liquids via capillary flow. Amperometric measurements of xanthine oxidase revealed that electrochemical sensors on four-layer SPDs generated detection signals up to 75% higher compared with those on single-layer paper devices. Furthermore, measurements could be performed with minimal user involvement and completed within 30 min. Due to its simplicity, enhanced automation, and capability for quantitative measurements, stacked paper electrochemical biosensors can be useful tools for point-of-care testing in resource-limited settings. © 2015 Society for Laboratory Automation and Screening.

  8. Orbital Debris Quarterly News. Volume 13; No. 1

    NASA Technical Reports Server (NTRS)

    Liou, J.-C. (Editor); Shoots, Debi (Editor)

    2009-01-01

    Topics discussed include: new debris from a decommissioned satellite with a nuclear power source; debris from the destruction of the Fengyun-1C meteorological satellite; quantitative analysis of the European Space Agency's Automated Transfer Vehicle 'Jules Verne' reentry event; microsatellite impact tests; solar cycle 24 predictions and other long-term projections; and the geosynchronous (GEO) environment for the Orbital Debris Engineering Model (ORDEM2008). Abstracts from the NASA Orbital Debris Program Office, examining satellite reentry risk assessments and statistical issues for uncontrolled reentry hazards, are also included.

  9. Automating spectral unmixing of AVIRIS data using convex geometry concepts

    NASA Technical Reports Server (NTRS)

    Boardman, Joseph W.

    1993-01-01

    Spectral mixture analysis, or unmixing, has proven to be a useful tool in the semi-quantitative interpretation of AVIRIS data. Using a linear mixing model and a set of hypothesized endmember spectra, unmixing seeks to estimate the fractional abundance patterns of the various materials occurring within the imaged area. However, the validity and accuracy of the unmixing rest heavily on the 'user-supplied' set of endmember spectra. Current methods for endmember determination are the weak link in the unmixing chain.
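
    The linear mixing model described above can be inverted per pixel with a nonnegativity constraint on the abundances, for example as in the following sketch. The endmember matrix is assumed given, which, as the abstract notes, is the weak link; the renormalization step is an optional assumption.

    # Sketch of per-pixel linear unmixing with nonnegative least squares.
    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(pixel_spectrum, endmembers):
        """endmembers: (n_bands, n_endmembers) matrix; returns abundances."""
        abundances, residual = nnls(endmembers, pixel_spectrum)
        # Optional sum-to-one renormalization of the fractional abundances.
        total = abundances.sum()
        return abundances / total if total > 0 else abundances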

  10. 2015 Summer Series - Lee Stone - Brain Function Through the Eyes of the Beholder

    NASA Image and Video Library

    2015-06-09

    The Visuomotor Control Laboratory (VCL) at NASA Ames conducts neuroscience research on the link between eye movements and brain function to provide an efficient and quantitative means of monitoring human perceptual performance. The VCL aims to make dramatic improvements in mission success through analysis, experimentation, and modeling of human performance and human-automation interaction. Dr. Lee Stone elaborates on how this research is conducted and how it contributes to NASA's mission and advances human-centered design and operations of complex aerospace systems.

  11. Recent Achievements in Characterizing the Histone Code and Approaches to Integrating Epigenomics and Systems Biology.

    PubMed

    Janssen, K A; Sidoli, S; Garcia, B A

    2017-01-01

    Functional epigenetic regulation occurs by dynamic modification of chromatin, including genetic material (i.e., DNA methylation), histone proteins, and other nuclear proteins. Due to the highly complex nature of the histone code, mass spectrometry (MS) has become the leading technique in identification of single and combinatorial histone modifications. MS has now overtaken antibody-based strategies due to its automation, high resolution, and accurate quantitation. Moreover, multiple approaches to analysis have been developed for global quantitation of posttranslational modifications (PTMs), including large-scale characterization of modification coexistence (middle-down and top-down proteomics), which is not currently possible with any other biochemical strategy. Recently, our group and others have simplified and increased the effectiveness of analyzing histone PTMs by improving multiple MS methods and data analysis tools. This review provides an overview of the major achievements in the analysis of histone PTMs using MS, with a focus on the most recent improvements. We speculate that the workflow for histone analysis at its state of the art is highly reliable in terms of identification and quantitation accuracy, and it has the potential to become a routine method for systems biology thanks to the possibility of integrating histone MS results with genomics and proteomics datasets. © 2017 Elsevier Inc. All rights reserved.

  12. Automated quantitative gait analysis during overground locomotion in the rat: its application to spinal cord contusion and transection injuries.

    PubMed

    Hamers, F P; Lankhorst, A J; van Laar, T J; Veldhuis, W B; Gispen, W H

    2001-02-01

    Analysis of locomotion is an important tool in the study of peripheral and central nervous system damage. Most locomotor scoring systems in rodents are based either upon open-field locomotion assessment (for example, the BBB score) or upon footprint analysis. The former yields a semiquantitative description of locomotion as a whole, whereas the latter generates quantitative data on several selected gait parameters. In this paper, we describe the use of a newly developed gait analysis method that allows easy quantitation of a large number of locomotion parameters during walkway crossing. We were able to extract data on interlimb coordination, swing duration, paw print areas (total over stance, and at 20-msec time resolution), stride length, and base of support. Similar data cannot be gathered by any single previously described method. We compare changes in gait parameters induced by two different models of spinal cord injury in rats, transection of the dorsal half of the spinal cord and spinal cord contusion injury induced by the NYU or MASCIS device. Although we applied this method to rats with spinal cord injury, the usefulness of this method is not limited to rats or to the investigation of spinal cord injuries alone.
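
    As a toy illustration of two of the gait parameters listed above, the sketch below derives swing duration and stride length from paw contact events. The event format is an assumption; the actual system extracts these parameters from walkway video.

    # Toy data: contact/lift times and print positions for one hind paw.
    import numpy as np

    contacts = np.array([0.00, 0.55, 1.10, 1.66])   # s, successive paw contacts
    lifts = np.array([0.35, 0.91, 1.45, 2.01])      # s, corresponding lift-offs
    positions = np.array([0.0, 14.8, 29.9, 45.1])   # cm along the walkway

    swing_durations = contacts[1:] - lifts[:-1]     # time the paw is in the air
    stride_lengths = np.diff(positions)             # distance between prints
    print(swing_durations.mean(), stride_lengths.mean())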

  13. Research highlights: microfluidics meets big data.

    PubMed

    Tseng, Peter; Weaver, Westbrook M; Masaeli, Mahdokht; Owsley, Keegan; Di Carlo, Dino

    2014-03-07

    In this issue we highlight a collection of recent work in which microfluidic parallelization and automation have been employed to address the increasing need for large amounts of quantitative data concerning cellular function: from correlating microRNA levels to protein expression, to increasing the throughput and reducing the noise when studying protein dynamics in single cells, and understanding how signal dynamics encodes information. The painstaking dissection of cellular pathways one protein at a time appears to be coming to an end, leading to more rapid discoveries which will inevitably translate to better cellular control, in producing useful gene products and treating disease at the individual cell level. From these studies it is also clear that development of large-scale mutant or fusion libraries, automation of microscopy, image analysis, and data extraction will be key components as microfluidics contributes its strengths to aid systems biology moving forward.

  14. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    NASA Astrophysics Data System (ADS)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) was developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.

  15. Global sensing of gaseous and aerosol trace species using automated instrumentation on 747 airliners

    NASA Technical Reports Server (NTRS)

    Perkins, P. J.; Papathakos, L. C.

    1977-01-01

    The Global Atmospheric Sampling Program (GASP) by NASA is collecting and analyzing data on gaseous and aerosol trace species in the upper troposphere and lower stratosphere. Measurements are obtained from automated systems installed on four 747 airliners flying global air routes. Advances were made in airborne sampling instrumentation. Improved instruments and analysis techniques are providing an expanding data base for trace species including ozone, carbon monoxide, water vapor, condensation nuclei and mass concentrations of sulfates and nitrates. Simultaneous measurements of several trace species obtained frequently can be used to uniquely identify the source of the air mass as being typically tropospheric or stratospheric. A quantitative understanding of the tropospheric-stratospheric exchange processes leads to better knowledge of the atmospheric impact of pollution through the development of improved simulation models of the atmosphere.

  16. A quantitative measure for degree of automation and its relation to system performance and mental load.

    PubMed

    Wei, Z G; Macwan, A P; Wieringa, P A

    1998-06-01

    In this paper we quantitatively model degree of automation (DofA) in supervisory control as a function of the number and nature of tasks to be performed by the operator and automation. This model uses a task weighting scheme in which weighting factors are obtained from task demand load, task mental load, and task effect on system performance. The computation of DofA is demonstrated using an experimental system. Based on controlled experiments using operators, analyses of the task effect on system performance, the prediction and assessment of task demand load, and the prediction of mental load were performed. Each experiment had a different DofA. The effect of a change in DofA on system performance and mental load was investigated. It was found that system performance became less sensitive to changes in DofA at higher levels of DofA. The experimental data showed that when the operator controlled a partly automated system, perceived mental load could be predicted from the task mental load for each task component, as calculated by analyzing a situation in which all tasks were manually controlled. Actual or potential applications of this research include a methodology to balance and optimize the automation of complex industrial systems.
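
    One plausible reading of the weighted DofA model is a weighted share of tasks handled by automation, as sketched below. The paper derives its weighting factors from task demand load, task mental load, and task effect on system performance; the simple formula here is an assumption for illustration only.

    # Hypothetical DofA sketch: weighted fraction of tasks that are automated.
    def degree_of_automation(tasks):
        """tasks: list of (weight, automated: bool) pairs."""
        total = sum(w for w, _ in tasks)
        automated = sum(w for w, auto in tasks if auto)
        return automated / total if total else 0.0

    tasks = [(0.4, True), (0.3, False), (0.2, True), (0.1, False)]
    print(degree_of_automation(tasks))  # 0.6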

  17. Early prediction of coma recovery after cardiac arrest with blinded pupillometry.

    PubMed

    Solari, Daria; Rossetti, Andrea O; Carteron, Laurent; Miroz, John-Paul; Novy, Jan; Eckert, Philippe; Oddo, Mauro

    2017-06-01

    Prognostication studies on comatose cardiac arrest (CA) patients are limited by lack of blinding, potentially causing overestimation of outcome predictors and self-fulfilling prophecy. Using a blinded approach, we analyzed the value of quantitative automated pupillometry to predict neurological recovery after CA. We examined a prospective cohort of 103 comatose adult patients who were unconscious 48 hours after CA and underwent repeated measurements of quantitative pupillary light reflex (PLR) using the Neurolight-Algiscan device. Clinical examination, electroencephalography (EEG), somatosensory evoked potentials (SSEP), and serum neuron-specific enolase were performed in parallel, as part of standard multimodal assessment. Automated pupillometry results were blinded to clinicians involved in patient care. Cerebral Performance Categories (CPC) at 1 year was the outcome endpoint. Survivors (n = 50 patients; 32 CPC 1, 16 CPC 2, 2 CPC 3) had higher quantitative PLR (median = 20 [range = 13-41] vs 11 [0-55] %, p < 0.0001) and constriction velocity (1.46 [0.85-4.63] vs 0.94 [0.16-4.97] mm/s, p < 0.0001) than nonsurvivors. At 48 hours, a quantitative PLR < 13% had 100% specificity and positive predictive value to predict poor recovery (0% false-positive rate), and provided equal performance to that of EEG and SSEP. Reduced quantitative PLR correlated with higher serum neuron-specific enolase (Spearman r = -0.52, p < 0.0001). Reduced quantitative PLR correlates with postanoxic brain injury and, when compared to standard multimodal assessment, is highly accurate in predicting long-term prognosis after CA. This is the first prognostication study to show the value of automated pupillometry using a blinded approach to minimize self-fulfilling prophecy. Ann Neurol 2017;81:804-810. © 2017 American Neurological Association.

  18. Automated Vocal Analysis of Children with Hearing Loss and Their Typical and Atypical Peers

    PubMed Central

    VanDam, Mark; Oller, D. Kimbrough; Ambrose, Sophie E.; Gray, Sharmistha; Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Silbert, Noah H.; Moeller, Mary Pat

    2014-01-01

    Objectives This study investigated automatic assessment of vocal development in children with hearing loss as compared with children who are typically developing, children with language delays, and children with autism spectrum disorder. Statistical models are examined for performance in a classification model and for predicting age within the four groups of children. Design The vocal analysis system analyzed over 1900 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers comprising children who were typically developing, hard of hearing, language delayed, or autistic. Results Samples from children who were hard of hearing patterned more similarly to those of typically developing children than to the language-delayed or autistic samples. The statistical models were able to classify children from the four groups examined and to estimate developmental age based on automated vocal analysis. Conclusions This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention. PMID:25587667

  19. Automated detection of inaccurate and imprecise transitions in peptide quantification by multiple reaction monitoring mass spectrometry.

    PubMed

    Abbatiello, Susan E; Mani, D R; Keshishian, Hasmik; Carr, Steven A

    2010-02-01

    Multiple reaction monitoring mass spectrometry (MRM-MS) of peptides with stable isotope-labeled internal standards (SISs) is increasingly being used to develop quantitative assays for proteins in complex biological matrices. These assays can be highly precise and quantitative, but the frequent occurrence of interferences requires that MRM-MS data be manually reviewed, a time-intensive process subject to human error. We developed an algorithm that identifies inaccurate transition data based on the presence of interfering signal or inconsistent recovery among replicate samples. The algorithm objectively evaluates MRM-MS data with 2 orthogonal approaches. First, it compares the relative product ion intensities of the analyte peptide to those of the SIS peptide and uses a t-test to determine if they are significantly different. A CV is then calculated from the ratio of the analyte peak area to the SIS peak area from the sample replicates. The algorithm identified problematic transitions and achieved accuracies of 94%-100%, with a sensitivity and specificity of 83%-100% for correct identification of errant transitions. The algorithm was robust when challenged with multiple types of interferences and problematic transitions. This algorithm for automated detection of inaccurate and imprecise transitions (AuDIT) in MRM-MS data reduces the time required for manual and subjective inspection of data, improves the overall accuracy of data analysis, and is easily implemented into the standard data-analysis work flow. AuDIT currently works with results exported from MRM-MS data-processing software packages and may be implemented as an analysis tool within such software.
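
    The two orthogonal checks described above can be sketched directly: a t-test comparing relative product-ion intensities between analyte and SIS, and a CV computed from the analyte/SIS peak-area ratio across replicates. The thresholds and data below are illustrative, not AuDIT's published defaults.

    # Sketch of AuDIT-style transition flagging under assumed cutoffs.
    import numpy as np
    from scipy.stats import ttest_ind

    def flag_transition(analyte_rel, sis_rel, analyte_area, sis_area,
                        p_cutoff=1e-5, cv_cutoff=0.2):
        # (1) Interference check: relative product-ion intensities should
        # match between analyte and SIS when no interfering signal is present.
        _, p_value = ttest_ind(analyte_rel, sis_rel)
        # (2) Repeatability check: CV of analyte/SIS ratio across replicates.
        ratio = np.asarray(analyte_area) / np.asarray(sis_area)
        cv = ratio.std(ddof=1) / ratio.mean()
        return p_value < p_cutoff or cv > cv_cutoff  # True -> problematic

    # Replicate-wise relative intensities and peak areas (toy values)
    bad = flag_transition([0.50, 0.52, 0.51], [0.34, 0.33, 0.35],
                          [1.0e5, 1.1e5, 0.9e5], [2.0e5, 2.1e5, 1.9e5])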

  20. Automated Detection of Inaccurate and Imprecise Transitions in Peptide Quantification by Multiple Reaction Monitoring Mass Spectrometry

    PubMed Central

    Abbatiello, Susan E.; Mani, D. R.; Keshishian, Hasmik; Carr, Steven A.

    2010-01-01

    BACKGROUND Multiple reaction monitoring mass spectrometry (MRM-MS) of peptides with stable isotope–labeled internal standards (SISs) is increasingly being used to develop quantitative assays for proteins in complex biological matrices. These assays can be highly precise and quantitative, but the frequent occurrence of interferences requires that MRM-MS data be manually reviewed, a time-intensive process subject to human error. We developed an algorithm that identifies inaccurate transition data based on the presence of interfering signal or inconsistent recovery among replicate samples. METHODS The algorithm objectively evaluates MRM-MS data with 2 orthogonal approaches. First, it compares the relative product ion intensities of the analyte peptide to those of the SIS peptide and uses a t-test to determine if they are significantly different. A CV is then calculated from the ratio of the analyte peak area to the SIS peak area from the sample replicates. RESULTS The algorithm identified problematic transitions and achieved accuracies of 94%–100%, with a sensitivity and specificity of 83%–100% for correct identification of errant transitions. The algorithm was robust when challenged with multiple types of interferences and problematic transitions. CONCLUSIONS This algorithm for automated detection of inaccurate and imprecise transitions (AuDIT) in MRM-MS data reduces the time required for manual and subjective inspection of data, improves the overall accuracy of data analysis, and is easily implemented into the standard data-analysis work flow. AuDIT currently works with results exported from MRM-MS data-processing software packages and may be implemented as an analysis tool within such software. PMID:20022980

  1. High-throughput fabrication and screening improves gold nanoparticle chemiresistor sensor performance.

    PubMed

    Hubble, Lee J; Cooper, James S; Sosa-Pintos, Andrea; Kiiveri, Harri; Chow, Edith; Webster, Melissa S; Wieczorek, Lech; Raguse, Burkhard

    2015-02-09

    Chemiresistor sensor arrays are a promising technology to replace current laboratory-based analysis instrumentation, with the advantage of facile integration into portable, low-cost devices for in-field use. To increase the performance of chemiresistor sensor arrays, a high-throughput fabrication and screening methodology was developed to assess different organothiol-functionalized gold nanoparticle chemiresistors. This high-throughput fabrication and testing methodology was implemented to screen a library consisting of 132 different organothiol compounds as capping agents for functionalized gold nanoparticle chemiresistor sensors. The methodology utilized an automated liquid handling workstation for the in situ functionalization of gold nanoparticle films and subsequent automated analyte testing of sensor arrays using a flow-injection analysis system. To test the methodology we focused on the discrimination and quantitation of benzene, toluene, ethylbenzene, p-xylene, and naphthalene (BTEXN) mixtures in water at low microgram per liter concentration levels. The high-throughput methodology identified a sensor array configuration consisting of a subset of organothiol-functionalized chemiresistors which, in combination with random forest analysis, was able to predict individual analyte concentrations with overall root-mean-square errors ranging from 8 to 17 μg/L for mixtures of BTEXN in water at the 100 μg/L concentration. The ability to use a simple sensor array system to quantitate BTEXN mixtures in water at the low μg/L concentration range has direct and significant implications for future environmental monitoring and reporting strategies. In addition, these results demonstrate the advantages of high-throughput screening to improve the performance of gold nanoparticle based chemiresistors for both new and existing applications.
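
    The final analysis step, predicting analyte concentrations from array responses with random forests, might look like the following sketch. The data are toy values, and fitting one model per analyte is an assumption.

    # Hedged sketch: random forest regression from sensor responses to one
    # analyte concentration; repeat per analyte for a BTEXN mixture.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 12))          # 200 samples x 12 sensors (toy)
    y_train = rng.random(200) * 100          # benzene concentration, ug/L (toy)
    X_test, y_test = rng.random((50, 12)), rng.random(50) * 100

    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"RMSE: {rmse:.1f} ug/L")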

  2. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
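
    A minimal statistical-shape-model sketch consistent with the description above: principal component analysis over aligned landmark vectors, then synthesis of virtual instances as the mean shape plus mode coefficients bounded by +/-3 standard deviations. Landmark identification and alignment are assumed already done, and the data are illustrative.

    # PCA-based shape model via SVD of centered landmark vectors.
    import numpy as np

    landmarks = np.random.rand(30, 600)       # 30 training spines x 600 coords
    mean_shape = landmarks.mean(axis=0)
    centered = landmarks - mean_shape
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    std_devs = singular_values / np.sqrt(len(landmarks) - 1)

    def synthesize(b):
        """b: per-mode coefficients in units of standard deviations."""
        b = np.clip(b, -3, 3)                 # stay within +/-3 SD of training
        return mean_shape + (b * std_devs[:len(b)]) @ modes[:len(b)]

    virtual_spine = synthesize(np.array([2.0, -1.0, 0.5]))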

  3. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    PubMed

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method for three-dimensional (3D) fluorescence microscopic images to enable quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells in 3D fluorescence microscopic images. Informed by fluorescence imaging techniques, we regulated the image gradient field by gradient vector flow (GVF) with an interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells with (1) low false-detection and miss rates for individual cells and (2) negligible over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.

  4. Analysis of HER2 status in breast carcinoma by fully automated HER2 fluorescence in situ hybridization (FISH): comparison of two immunohistochemical tests and manual FISH.

    PubMed

    Yoon, Nara; Do, In-Gu; Cho, Eun Yoon

    2014-09-01

    Easy and accurate HER2 testing is essential when considering the prognostic and predictive significance of HER2 in breast cancer. The use of a fully automated, quantitative FISH assay would be helpful to detect HER2 amplification in breast cancer tissue specimens with reduced inter-laboratory variability. We compared the concordance of HER2 status as assessed by an automated FISH staining system to manual FISH testing. Using 60 formalin-fixed paraffin-embedded breast carcinoma specimens, we assessed HER2 immunoexpression with two antibodies (DAKO HercepTest and CB11). In addition, HER2 status was evaluated with automated FISH using the Leica FISH System for BOND and manual FISH using the Abbott PathVysion DNA Probe Kit. All specimens but one were successfully stained using both FISH methods. When the data were divided into two groups according to HER2/CEP17 ratio, positive and negative, the results from both the automated and manual FISH techniques were identical for all 59 evaluable specimens. The HER2 and CEP17 copy numbers and HER2/CEP17 ratio showed excellent agreement between the two FISH methods. The automated FISH technique was interpretable, with signal intensity similar to that of the manual FISH technique. In contrast with manual FISH, the automated FISH technique showed well-preserved architecture due to low membrane digestion. HER2 immunohistochemistry and FISH results showed perfect, statistically significant agreement (κ = 1.0, p < 0.001). HER2 status can be reliably determined using a fully automated HER2 FISH system with high concordance to the well-established manual FISH method. Because of stable signal intensity and high staining quality, the automated FISH technique may be more appropriate than manual FISH for routine applications. © 2013 APMIS. Published by John Wiley & Sons Ltd.

  5. Principles, performance, and applications of spectral reconstitution (SR) in quantitative analysis of oils by Fourier transform infrared spectroscopy (FT-IR).

    PubMed

    García-González, Diego L; Sedman, Jacqueline; van de Voort, Frederik R

    2013-04-01

    Spectral reconstitution (SR) is a dilution technique developed to facilitate the rapid, automated, and quantitative analysis of viscous oil samples by Fourier transform infrared spectroscopy (FT-IR). This technique involves determining the dilution factor through measurement of an absorption band of a suitable spectral marker added to the diluent, and then spectrally removing the diluent from the sample and multiplying the resulting spectrum to compensate for the effect of dilution on the band intensities. The facsimile spectrum of the neat oil thus obtained can then be qualitatively or quantitatively analyzed for the parameter(s) of interest. The quantitative performance of the SR technique was examined with two transition-metal carbonyl complexes as spectral markers, chromium hexacarbonyl and methylcyclopentadienyl manganese tricarbonyl. The estimation of the volume fraction (VF) of the diluent in a model system, consisting of canola oil diluted to various extents with odorless mineral spirits, served as the basis for assessment of these markers. The relationship between the VF estimates and the true volume fraction (VF(t)) was found to be strongly dependent on the dilution ratio and also depended, to a lesser extent, on the spectral resolution. These dependences are attributable to the effect of changes in matrix polarity on the bandwidth of the ν(CO) marker bands. Excellent VF(t) estimates were obtained by making a polarity correction devised with a variance-spectrum-delineated correction equation. In the absence of such a correction, SR was shown to introduce only a minor and constant bias, provided that polarity differences among all the diluted samples analyzed were minimal. This bias can be built into the calibration of a quantitative FT-IR analytical method by subjecting appropriate calibration standards to the same SR procedure as the samples to be analyzed. The primary purpose of the SR technique is to simplify preparation of diluted samples such that only approximate proportions need to be adhered to, rather than using exact weights or volumes, the marker accounting for minor variations. Additional applications discussed include the use of the SR technique in extraction-based, quantitative, automated FT-IR methods for the determination of moisture, acid number, and base number in lubricating oils, as well as of moisture content in edible oils.
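
    The SR arithmetic described above reduces to three steps: estimate the diluent volume fraction from the marker band, subtract the diluent's spectral contribution, and rescale to a facsimile neat-oil spectrum. A minimal sketch, assuming a linear marker calibration and illustrative variable names:

    # Core spectral-reconstitution arithmetic on absorbance spectra that share
    # a common wavenumber axis.
    import numpy as np

    def reconstitute(diluted, diluent, marker_abs, marker_abs_neat_diluent):
        vf = marker_abs / marker_abs_neat_diluent   # diluent volume fraction
        oil_fraction = 1.0 - vf
        facsimile = (diluted - vf * diluent) / oil_fraction
        return facsimile

    # e.g. a marker band of 0.30 AU vs 0.75 AU in pure diluent gives VF = 0.4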

  6. Comparative study between quantitative digital image analysis and fluorescence in situ hybridization of breast cancer equivocal human epidermal growth factor receptors 2 score 2(+) cases.

    PubMed

    Ayad, Essam; Mansy, Mina; Elwi, Dalal; Salem, Mostafa; Salama, Mohamed; Kayser, Klaus

    2015-01-01

    Optimization of the workflow for breast cancer samples with equivocal human epidermal growth factor receptor 2 (HER2)/neu score 2(+) results in routine practice remains a central focus of the ongoing efforts to assess HER2 status. According to the College of American Pathologists/American Society of Clinical Oncology guidelines, equivocal HER2/neu score 2(+) cases are subject to further testing, usually by fluorescence in situ hybridization (FISH). It remains an open question whether quantitative digital image analysis of HER2 immunohistochemistry (IHC) stained slides can assist in further refining the HER2 score 2(+). To assess the utility of quantitative digital analysis of IHC-stained slides and compare its performance to FISH in cases of breast cancer with equivocal HER2 score 2(+). Fifteen specimens (previously diagnosed as breast cancer and evaluated as HER2 score 2(+)) represented the study population. New sections were prepared for re-evaluation by HER2 immunohistochemical studies and FISH examination. All the cases were digitally scanned with iScan (produced by BioImagene [now Roche-Ventana]). The IHC signals of HER2 were measured using an automated image analyzing system (MECES, www.Diagnomx.eu/meces). Finally, a comparative study was done between the results of FISH and the quantitative analysis of the virtual slides. Three of the 15 cases with equivocal HER2 score 2(+) turned out to be positive (3(+)) by quantitative digital analysis, and the remaining 12, negative by digital analysis, were also found to be negative by FISH. Two of the three positive cases proved to be positive with FISH, and only one was negative. Quantitative digital analysis is highly sensitive and relatively specific when compared to FISH in detecting HER2/neu overexpression. Therefore, it represents a potentially reliable substitute for FISH in breast cancer cases that require further refinement of equivocal IHC results.

  7. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    PubMed Central

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.

    2007-01-01

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152

  8. NeuronMetrics: software for semi-automated processing of cultured neuron images.

    PubMed

    Narro, Martha L; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L

    2007-03-23

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of approximately 60 2D images is 1.0-2.5 h, from a folder of images to a table of numeric data. NeuronMetrics' output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery.
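
    A much simpler, single-technique analogue of the skeleton-based length estimate underlying both records above is sketched here: binarize, skeletonize, and use the skeleton pixel count as a length proxy. NeuronMetrics combines two complementary skeletonization techniques and corrects for gaps and self-fasciculation; none of that is reproduced in this sketch.

    # Rough neurite-length proxy from a fluorescence image of a cultured neuron.
    from skimage.filters import threshold_otsu
    from skimage.morphology import skeletonize

    def rough_neurite_length(image, microns_per_pixel):
        mask = image > threshold_otsu(image)    # crude foreground segmentation
        skel = skeletonize(mask)                # 1-pixel-wide skeleton
        return skel.sum() * microns_per_pixel   # pixel count as length proxy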

  9. Automated high resolution full-field spatial coherence tomography for quantitative phase imaging of human red blood cells

    NASA Astrophysics Data System (ADS)

    Singla, Neeru; Dubey, Kavita; Srivastava, Vishal; Ahmad, Azeem; Mehta, D. S.

    2018-02-01

    We developed an automated high-resolution full-field spatial coherence tomography (FF-SCT) microscope for quantitative phase imaging that is based on spatial, rather than temporal, coherence gating. Red and green laser light was used to obtain quantitative phase images of unstained human red blood cells (RBCs). This study uses morphological parameters of unstained RBC phase images to distinguish between normal and infected cells. We recorded a single interferogram with the FF-SCT microscope at each of the red and green wavelengths and averaged the two phase images to further reduce noise artifacts. To distinguish anemia-affected cells from normal cells, different morphological features were extracted, and these features were used to train a machine learning ensemble model that classifies RBCs with high accuracy.

  10. Nonparticipatory Stiffness in the Male Perioral Complex

    ERIC Educational Resources Information Center

    Chu, Shin-Ying; Barlow, Steven M.; Lee, Jaehoon

    2009-01-01

    Purpose: The objective of this study was to extend previous published findings in the authors' laboratory using a new automated technology to quantitatively characterize nonparticipatory perioral stiffness in healthy male adults. Method: Quantitative measures of perioral stiffness were sampled during a nonparticipatory task using a…

  11. Automated liver elasticity calculation for 3D MRE

    NASA Astrophysics Data System (ADS)

    Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.

    2017-03-01

    Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult, as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue and edge contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, an extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was -0.8% +/- 9.45%, better than the discrepancy with the same reader for 2D MRE (-3.2% +/- 10.43%) and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and easily performed on large 3D MRE datasets.

  12. Label-free tissue scanner for colorectal cancer screening

    NASA Astrophysics Data System (ADS)

    Kandel, Mikhail E.; Sridharan, Shamira; Liang, Jon; Luo, Zelun; Han, Kevin; Macias, Virgilia; Shah, Anish; Patel, Roshan; Tangella, Krishnarao; Kajdacsy-Balla, Andre; Guzman, Grace; Popescu, Gabriel

    2017-06-01

    The current practice of surgical pathology relies on external contrast agents to reveal tissue architecture, which is then qualitatively examined by a trained pathologist. The diagnosis is based on the comparison with standardized empirical, qualitative assessments of limited objectivity. We propose an approach to pathology based on interferometric imaging of "unstained" biopsies, which provides unique capabilities for quantitative diagnosis and automation. We developed a label-free tissue scanner based on "quantitative phase imaging," which maps out optical path length at each point in the field of view and, thus, yields images that are sensitive to the "nanoscale" tissue architecture. Unlike analysis of stained tissue, which is qualitative in nature and affected by color balance, staining strength and imaging conditions, optical path length measurements are intrinsically quantitative, i.e., images can be compared across different instruments and clinical sites. These critical features allow us to automate the diagnosis process. We paired our interferometric optical system with highly parallelized, dedicated software algorithms for data acquisition, allowing us to image at a throughput comparable to that of commercial tissue scanners while maintaining the nanoscale sensitivity to morphology. Based on the measured phase information, we implemented software tools for autofocusing during imaging, as well as image archiving and data access. To illustrate the potential of our technology for large volume pathology screening, we established an "intrinsic marker" for colorectal disease that detects tissue with dysplasia or colorectal cancer and flags specific areas for further examination, potentially improving the efficiency of existing pathology workflows.

  13. Integrated Microfluidic Devices for Automated Microarray-Based Gene Expression and Genotyping Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew

    Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.

  14. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. © 2016. Published by The Company of Biologists Ltd.

  15. Use of a capillary electrophoresis instrument with laser-induced fluorescence detection for DNA quantitation. Comparison of YO-PRO-1 and PicoGreen assays.

    PubMed

    Guillo, Christelle; Ferrance, Jerome P; Landers, James P

    2006-04-28

    Highly selective and sensitive assays are required for detection and quantitation of the small masses of DNA typically encountered in clinical and forensic settings. High detection sensitivity is achieved using fluorescent labeling dyes and detection techniques such as spectrofluorometers, microplate readers and cytometers. This work describes the use of a laser-induced fluorescence (LIF) detector in conjunction with a commercial capillary electrophoresis instrument for DNA quantitation. PicoGreen and YO-PRO-1, two fluorescent DNA labeling dyes, were used to assess the potential of the system for routine DNA analysis. Linearity, reproducibility, sensitivity, limits of detection and quantitation, and sample stability were examined for the two assays. The LIF detector response was found to be linear (R2 > 0.999) and reproducible (RSD < 9%) in both cases. The PicoGreen assay displayed lower limits of detection and quantitation (20 pg and 60 pg, respectively) than the YO-PRO-1 assay (60 pg and 260 pg, respectively). Although a small variation in fluorescence was observed for the DNA/dye complexes over time, quantitation was not significantly affected and the solutions were found to be relatively stable for 80 min. The advantages of the technique include a 4- to 40-fold reduction in the volume of sample required compared to traditional assays, a 2- to 20-fold reduction in the volume of reagents consumed, fast and automated analysis, and low cost (no specific instrumentation required).
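
    One common way to derive limits of detection and quantitation of the kind reported above is from the standard deviation of blank signals and the calibration slope (LOD = 3*sd/slope, LOQ = 10*sd/slope). The authors' exact criteria are not given in the record, so the sketch below is an assumption with illustrative data.

    # Illustrative LOD/LOQ computation from blank replicates and a slope.
    import numpy as np

    def lod_loq(blank_signals, slope):
        sd = np.std(blank_signals, ddof=1)
        return 3 * sd / slope, 10 * sd / slope

    blanks = np.array([10.2, 9.8, 10.5, 10.1, 9.9])  # fluorescence, arb. units
    slope = 0.52                                     # signal per pg of DNA
    print(lod_loq(blanks, slope))                    # (LOD pg, LOQ pg)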

  16. Validation of a new classifier for the automated analysis of the human epidermal growth factor receptor 2 (HER2) gene amplification in breast cancer specimens

    PubMed Central

    2013-01-01

    Amplification of the human epidermal growth factor receptor 2 (HER2) gene is a prognostic marker for poor clinical outcome and a predictive marker for therapeutic response to targeted therapies in breast cancer patients. With the introduction of anti-HER2 therapies, accurate assessment of HER2 status has become essential. Fluorescence in situ hybridization (FISH) is a widely used technique for the determination of HER2 status in breast cancer; however, manual signal enumeration is time-consuming. Therefore, several companies, such as MetaSystems, have developed automated image analysis software. Some of these signal-enumeration packages employ the so-called “tile-sampling classifier”, a programming algorithm through which the software quantifies fluorescent signals in images on the basis of square tiles of fixed dimensions. Because the size of a tile does not always correspond to the size of a single tumor cell nucleus, some users argue that this analysis method might not fully reflect the biology of cells. For that reason, MetaSystems has developed a new classifier that recognizes nuclei within tissue sections in order to determine HER2 amplification status on a per-nucleus basis. We call this new programming algorithm the “nuclei-sampling classifier”. In this study, we evaluated the accuracy of the nuclei-sampling classifier in determining HER2 gene amplification by FISH in nuclei of breast cancer cells. To this aim, we randomly selected 64 breast cancer specimens from our cohort (32 nonamplified and 32 amplified) and compared results obtained through manual scoring with those obtained through the new classifier. The new classifier automatically recognized individual nuclei. The automated analysis was followed by an optional human correction, during which the user interacted with the software to refine the automatically selected cell nuclei. Overall concordance between manual scoring and automated nuclei-sampling analysis was 98.4% (100% for nonamplified cases and 96.9% for amplified cases). After human correction, concordance between the two methods reached 100%. We conclude that the nuclei-sampling classifier is a newly available tool for automated quantitative analysis of HER2 FISH signals in nuclei of breast cancer specimens, and that it can be used for clinical purposes. PMID:23379971

  17. Tracking and Motion Analysis of Crack Propagations in Crystals for Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Duchaineau, M; Goldgof, D B

    2001-05-14

    This paper presents a quantitative analysis for a discovery in molecular dynamics. Recent simulations have shown that the velocity of crack propagation in crystals can, under certain conditions, become supersonic, which is contrary to classical physics. In this research, we present a framework for tracking and motion analysis of crack propagations in crystals. It includes line segment extraction based on Canny edge maps, feature selection based on physical properties, and subsequent tracking of primary and secondary wavefronts. The tracking is completely automated; it runs in real time on three 834-image sequences using forty 250 MHz processors. Results supporting physical observations are presented in terms of both feature tracking and velocity analysis.
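
    The line-segment extraction step can be illustrated with standard tools. A minimal sketch, assuming OpenCV and illustrative parameters rather than those used on the crystal image sequences:

    ```python
    import cv2
    import numpy as np

    def extract_segments(image_path):
        """Extract line segments from a Canny edge map via a probabilistic Hough transform."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 50, 150)  # Canny edge map
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=40, minLineLength=20, maxLineGap=5)
        # Each row is (x1, y1, x2, y2); segments can then be filtered by
        # physical properties before wavefront tracking.
        return np.empty((0, 4)) if segments is None else segments.reshape(-1, 4)
    ```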

  18. High-Throughput Analysis and Automation for Glycomics Studies.

    PubMed

    Shubhakar, Archana; Reiding, Karli R; Gardner, Richard A; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    This review covers advances in analytical technologies for high-throughput (HTP) glycomics. Our focus is on structural studies of glycoprotein glycosylation to support biopharmaceutical realization and the discovery of glycan biomarkers for human disease. For biopharmaceuticals, there is increasing use of glycomics in Quality by Design studies to help optimize glycan profiles of drugs with a view to improving their clinical performance. Glycomics is also used in comparability studies to ensure consistency of glycosylation both throughout product development and between biosimilars and innovator drugs. In clinical studies there is also an expanding interest in the use of glycomics (for example, in Genome Wide Association Studies) to follow changes in glycosylation patterns of biological tissues and fluids with the progress of certain diseases. These include cancers, neurodegenerative disorders and inflammatory conditions. Despite rising activity in this field, there are significant challenges in performing large-scale glycomics studies, which require accurate identification and quantitation of individual glycan structures. However, glycoconjugate samples are often very complex and heterogeneous and contain many diverse branched glycan structures. In this article we cover HTP sample preparation and derivatization methods, sample purification, robotization, optimized glycan profiling by UHPLC, MS and multiplexed CE, as well as hyphenated techniques and automated data analysis tools. Throughout, we summarize the advantages and challenges of each of these technologies. The issues considered include reliability of the methods for glycan identification and quantitation, sample throughput, labor intensity, and affordability for large sample numbers.

  19. Retinal health information and notification system (RHINO)

    NASA Astrophysics Data System (ADS)

    Dashtbozorg, Behdad; Zhang, Jiong; Abbasi-Sureshjani, Samaneh; Huang, Fan; ter Haar Romeny, Bart M.

    2017-03-01

    The retinal vasculature is the only part of the blood circulation system that can be observed non-invasively using fundus cameras. Changes in the dynamic properties of retinal blood vessels are associated with many systemic and vascular diseases, such as hypertension, coronary heart disease and diabetes. The assessment of the characteristics of the retinal vascular network provides important information for an early diagnosis and prognosis of many systemic and vascular diseases. The manual analysis of retinal vessels and measurement of quantitative biomarkers in large-scale screening programs is tedious, time-consuming and costly. This paper describes a reliable, automated, and efficient retinal health information and notification system (acronym RHINO) that can extract a wealth of geometric biomarkers from large volumes of fundus images. The fully automated software presented in this paper includes vessel enhancement and segmentation, artery/vein classification, optic disc, fovea, and vessel junction detection, and bifurcation/crossing discrimination. Pipelining these tools allows the assessment of several quantitative vascular biomarkers: width, curvature, bifurcation geometry features and fractal dimension. The brain-inspired algorithms outperform most of the state-of-the-art techniques. Moreover, several annotation tools are implemented in RHINO for the manual labeling of arteries and veins, marking the optic disc and fovea, and delineating vessel centerlines. The validation phase is ongoing, and the software is currently being used for the analysis of retinal images from the Maastricht study (the Netherlands), which includes over 10,000 subjects (healthy and diabetic) with a broad spectrum of clinical measurements.

  20. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
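
    To make the workflow concrete, the following sketch serializes a hypothetical measurement into XML with Python's standard library. The element names here are invented for illustration only; the real AIM schema is considerably richer and more formally specified.

    ```python
    import xml.etree.ElementTree as ET

    # Build a simple, machine-readable annotation for one imaging measurement.
    ann = ET.Element("ImageAnnotation", {"study": "1.2.840.example", "series": "3"})
    meas = ET.SubElement(ann, "Measurement", {"name": "LesionDiameter"})
    ET.SubElement(meas, "Value").text = "14.2"
    ET.SubElement(meas, "Units").text = "mm"

    # Write the XML file; a reporting system could ingest this alongside the
    # dictated report, and a database could index it for later data mining.
    ET.ElementTree(ann).write("annotation.xml", xml_declaration=True, encoding="utf-8")
    ```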

  1. Quantification of EEG reactivity in comatose patients.

    PubMed

    Hermans, Mathilde C; Westover, M Brandon; van Putten, Michel J A M; Hirsch, Lawrence J; Gaspard, Nicolas

    2016-01-01

    EEG reactivity is an important predictor of outcome in comatose patients. However, visual analysis of reactivity is prone to subjectivity and may benefit from quantitative approaches. In EEG segments recorded during reactivity testing in 59 comatose patients, 13 quantitative EEG parameters were used to compare the spectral characteristics of 1-minute segments before and after the onset of stimulation (spectral temporal symmetry). Reactivity was quantified with probability values estimated using combinations of these parameters. The accuracy of probability values as a reactivity classifier was evaluated against the consensus assessment of three expert clinical electroencephalographers using visual analysis. The binary classifier assessing spectral temporal symmetry in four frequency bands (delta, theta, alpha and beta) showed best accuracy (Median AUC: 0.95) and was accompanied by substantial agreement with the individual opinion of experts (Gwet's AC1: 65-70%), at least as good as inter-expert agreement (AC1: 55%). Probability values also reflected the degree of reactivity, as measured by the inter-experts' agreement regarding reactivity for each individual case. Automated quantitative EEG approaches based on probabilistic description of spectral temporal symmetry reliably quantify EEG reactivity. Quantitative EEG may be useful for evaluating reactivity in comatose patients, offering increased objectivity. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
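
    A hedged sketch of the core idea, spectral temporal symmetry: compare band power in one-minute segments before and after stimulation onset. The sampling rate is assumed, and the paper's probabilistic combination of 13 parameters is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_changes(pre, post, fs=256):
        """pre/post: 1-minute single-channel EEG segments (1-D arrays).
        Returns log power ratios per band; values near 0 indicate
        spectral temporal symmetry, i.e. little reactivity."""
        f, p_pre = welch(pre, fs=fs, nperseg=4 * fs)
        _, p_post = welch(post, fs=fs, nperseg=4 * fs)
        changes = {}
        for name, (lo, hi) in BANDS.items():
            sel = (f >= lo) & (f < hi)
            changes[name] = np.log(p_post[sel].mean() / p_pre[sel].mean())
        return changes
    ```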

  2. Quantification of sterol-specific response in human macrophages using automated image-based analysis.

    PubMed

    Gater, Deborah L; Widatalla, Namareq; Islam, Kinza; AlRaeesi, Maryam; Teo, Jeremy C M; Pearson, Yanthe E

    2017-12-13

    The transformation of normal macrophage cells into lipid-laden foam cells is an important step in the progression of atherosclerosis. One major contributor to foam cell formation in vivo is the intracellular accumulation of cholesterol. Here, we report the effects of various combinations of low-density lipoprotein, sterols, lipids and other factors on human macrophages, using an automated image analysis program to quantitatively compare single cell properties, such as cell size and lipid content, in different conditions. We observed that the addition of cholesterol caused an increase in average cell lipid content across a range of conditions. All of the sterol-lipid mixtures examined were capable of inducing increases in average cell lipid content, with variations in the distribution of the response, in cytotoxicity and in how the sterol-lipid combination interacted with other activating factors. For example, cholesterol and lipopolysaccharide acted synergistically to increase cell lipid content while also increasing cell survival compared with the addition of lipopolysaccharide alone. Additionally, ergosterol and cholesteryl hemisuccinate caused similar increases in lipid content but also exhibited considerably greater cytotoxicity than cholesterol. The use of automated image analysis enables us to assess not only changes in average cell size and content, but also to rapidly and automatically compare population distributions based on simple fluorescence images. Our observations add to increasing understanding of the complex and multifactorial nature of foam-cell formation and provide a novel approach to assessing the heterogeneity of macrophage response to a variety of factors.

  3. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  4. Automated segmentation and analysis of normal and osteoarthritic knee menisci from magnetic resonance images--data from the Osteoarthritis Initiative.

    PubMed

    Paproki, A; Engstrom, C; Chandra, S S; Neubert, A; Fripp, J; Crozier, S

    2014-09-01

    To validate an automatic scheme for the segmentation and quantitative analysis of the medial meniscus (MM) and lateral meniscus (LM) in magnetic resonance (MR) images of the knee. We analysed sagittal water-excited double-echo steady-state MR images of the knee from a subset of the Osteoarthritis Initiative (OAI) cohort. The MM and LM were automatically segmented in the MR images based on a deformable model approach. Quantitative parameters including volume, subluxation and tibial-coverage were automatically calculated for comparison (Wilcoxon tests) between knees with variable radiographic osteoarthritis (rOA), medial and lateral joint space narrowing (mJSN, lJSN) and pain. Automatic segmentations and estimated parameters were evaluated for accuracy using manual delineations of the menisci in 88 pathological knee MR examinations at the baseline and 12-month time points. The median (95% confidence interval (CI)) Dice similarity index (DSI = 2|Auto ∩ Manual| / (|Auto| + |Manual|) × 100) between manual and automated segmentations of the MM and LM volumes was 78.3% (75.0-78.7) and 83.9% (82.1-83.9) at baseline, and 75.3% (72.8-76.9) and 83.0% (81.6-83.5) at 12 months. Pearson coefficients between automatic and manual segmentation parameters ranged from r = 0.70 to r = 0.92. The MM in rOA/mJSN knees had significantly greater subluxation and smaller tibial-coverage than in no-rOA/no-mJSN knees. The LM in rOA knees had significantly greater volumes and tibial-coverage than in no-rOA knees. Our automated method successfully segmented the menisci in normal and osteoarthritic knee MR images and detected meaningful morphological differences with respect to rOA and joint space narrowing (JSN). Our approach will facilitate analyses of the menisci in prospective MR cohorts such as the OAI for investigations into pathophysiological changes occurring in early osteoarthritis (OA) development. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
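
    A minimal sketch of the DSI defined above, applied to binary segmentation masks stored as boolean NumPy arrays:

    ```python
    import numpy as np

    def dice_similarity(auto, manual):
        """DSI between two binary masks, expressed as a percentage."""
        auto = np.asarray(auto, dtype=bool)
        manual = np.asarray(manual, dtype=bool)
        denom = auto.sum() + manual.sum()
        if denom == 0:
            return 100.0  # both masks empty: treat as perfect agreement
        return 100.0 * 2.0 * np.logical_and(auto, manual).sum() / denom
    ```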

  5. Silicon ribbon study program. [dendritic crystals for use in solar cells

    NASA Technical Reports Server (NTRS)

    Seidensticker, R. G.; Duncan, C. S.

    1975-01-01

    The feasibility of growing wide, thin silicon dendritic web for solar cell fabrication is studied, and conceptual designs are developed for the required apparatus. An analysis of the mechanisms of dendritic web growth indicated that there were no apparent fundamental limitations to the process. The analysis yielded quantitative guidelines for the thermal conditions required for this mode of crystal growth. Crucible designs were then investigated: the usual quartz crucible configurations and configurations in which silicon itself is used for the crucible. The quartz crucible design is feasible and is incorporated into a conceptual design for a laboratory-scale crystal growth facility capable of semi-automated, quasi-continuous operation.

  6. 3-D interactive visualisation tools for HI spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3D Slicer, a multi-platform open-source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  7. An integrated enhancement and reconstruction strategy for the quantitative extraction of actin stress fibers from fluorescence micrographs.

    PubMed

    Zhang, Zhen; Xia, Shumin; Kanchanawong, Pakorn

    2017-05-22

    Stress fibers are prominent organizations of actin filaments that perform important functions in cellular processes such as migration, polarization, and traction force generation, and whose collective organization reflects the physiological and mechanical activities of the cells. Easily visualized by fluorescence microscopy, stress fibers are widely used as qualitative descriptors of cell phenotypes. However, due to the complexity of the stress fibers and the presence of other actin-containing cellular features, images of stress fibers are relatively challenging to analyze quantitatively using previously developed approaches, which require significant user intervention. This poses a challenge for the automation of their detection, segmentation, and quantitative analysis. Here we describe an open-source software package, SFEX (Stress Fiber Extractor), which is geared for efficient enhancement, segmentation, and analysis of actin stress fibers in adherent tissue culture cells. Our method makes use of a carefully chosen image filtering technique to enhance filamentous structures, effectively facilitating the detection and segmentation of stress fibers by binary thresholding. We subdivided the skeletons of stress fiber traces into piecewise-linear fragments and used a set of geometric criteria to reconstruct the stress fiber networks by pairing appropriate fiber fragments. Our strategy enables the trajectories of a majority of stress fibers within the cells to be comprehensively extracted. We also present a method for quantifying the dimensions of the stress fibers using an image gradient-based approach. We determine the optimal parameter space using sensitivity analysis and demonstrate the utility of our approach by analyzing actin stress fibers in cells cultured on various micropattern substrates. We present an open-source, graphically interfaced computational tool for the extraction and quantification of stress fibers in adherent cells with minimal user input. This facilitates the automated extraction of actin stress fibers from fluorescence images. We highlight its potential uses by analyzing images of cells with shapes constrained by fibronectin micropatterns. The method reported here could serve as the first step in the detection and characterization of the spatial properties of actin stress fibers to enable further detailed morphological analysis.
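
    Not the SFEX implementation itself, but a generic sketch of the enhance-threshold-skeletonize stage it describes, using a ridge (tubeness) filter from scikit-image to accentuate filamentous structures; the filter scales and minimum object size are assumptions.

    ```python
    import numpy as np
    from skimage import io, filters, morphology

    def fiber_skeleton(path):
        """Enhance filaments, binarize, and skeletonize a fluorescence image."""
        img = io.imread(path, as_gray=True).astype(np.float32)
        ridges = filters.sato(img, sigmas=range(1, 4), black_ridges=False)  # enhance filaments
        binary = ridges > filters.threshold_otsu(ridges)                    # binary thresholding
        binary = morphology.remove_small_objects(binary, min_size=50)       # drop puncta/debris
        # Skeleton traces would next be split into piecewise-linear fragments
        # and re-paired by geometric criteria, as the abstract describes.
        return morphology.skeletonize(binary)
    ```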

  8. SU-C-9A-02: Structured Noise Index as An Automated Quality Control for Nuclear Medicine: A Two Year Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, J; Christianson, O; Samei, E

    Purpose: Flood-field uniformity evaluation is an essential element in the assessment of nuclear medicine (NM) gamma cameras. It serves as the central element of the quality control (QC) program, acquired and analyzed on a daily basis prior to clinical imaging. Uniformity images are traditionally analyzed using pixel value-based metrics which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring additional visual inspection which is subjective and time demanding. The goal of this project was to develop and implement a robust QC metrology for NM that is effective in identifying non-uniformity issues, reporting issues in a timely manner for efficient correction prior to clinical involvement, all incorporated into an automated, effortless workflow, and to characterize the program over a two-year period. Methods: A new quantitative uniformity analysis metric was developed based on 2D noise power spectrum metrology and confirmed by expert observer visual analysis. The metric, termed Structured Noise Index (SNI), was then integrated into an automated program to analyze, archive, and report on daily NM QC uniformity images. The effectiveness of the program was evaluated over a period of 2 years. Results: The SNI metric successfully identified visually apparent non-uniformities overlooked by the pixel value-based analysis methods. Implementation of the program has resulted in non-uniformity identification in about 12% of daily flood images. In addition, due to the vigilance of staff response, the percentage of days exceeding the trigger value shows a decline over time. Conclusion: The SNI provides a robust quantification of gamma camera uniformity performance. It operates seamlessly across a fleet of multiple camera models. The automated process provides an effective workflow among the physicist, technologist, and clinical engineer. The reliability of this process has made it the preferred platform for NM uniformity analysis.
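
    The published SNI involves further filtering and weighting steps; the following is only a hedged sketch of the underlying 2D noise power spectrum computed from a mean-detrended flood-field image.

    ```python
    import numpy as np

    def noise_power_spectrum_2d(flood, pixel_mm=1.0):
        """2D NPS of a flood-field image (2-D array), standard normalization."""
        detrended = flood - flood.mean()  # remove the gross flood level (DC term)
        nps = np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
        nps *= (pixel_mm ** 2) / flood.size
        # Structured (non-quantum) noise shows up as excess low-frequency power;
        # an SNI-style metric would summarize that excess into a single index.
        return nps
    ```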

  9. Processing of CT images for analysis of diffuse lung disease in the lung tissue research consortium

    NASA Astrophysics Data System (ADS)

    Karwoski, Ronald A.; Bartholmai, Brian; Zavaletta, Vanessa A.; Holmes, David; Robb, Richard A.

    2008-03-01

    The goal of the Lung Tissue Research Consortium (LTRC) is to improve the management of diffuse lung diseases through a better understanding of the biology of Chronic Obstructive Pulmonary Disease (COPD) and fibrotic interstitial lung disease (ILD), including Idiopathic Pulmonary Fibrosis (IPF). Participants undergo a battery of tests including tissue biopsies, physiologic testing, clinical history reporting, and CT scanning of the chest. The LTRC is a repository from which investigators can request tissue specimens and test results as well as semi-quantitative radiology reports, pathology reports, and automated quantitative image analysis results from the CT scan data performed by the LTRC core laboratories. The LTRC Radiology Core Laboratory (RCL), in conjunction with the Biomedical Imaging Resource (BIR), has developed novel processing methods for comprehensive characterization of pulmonary processes on volumetric high-resolution CT scans to quantify how these diseases manifest in radiographic images. Specifically, the RCL has implemented a semi-automated method for segmenting the anatomical regions of the lungs and airways. In these anatomic regions, automated quantification of pathologic features of disease, including emphysema volumes and tissue classification, is performed using both threshold techniques and advanced texture measures to determine the extent and location of emphysema, ground-glass opacities, "honeycombing" (HC), "irregular linear" or "reticular" pulmonary infiltrates, and normal lung. Wall thickness measurements of the trachea and its branches to the 3rd and limited 4th order are also computed. The methods for processing, segmentation and quantification are described. The results are reviewed and verified by an expert radiologist following processing and stored in the public LTRC database for use by pulmonary researchers. To date, over 1200 CT scans have been processed by the RCL, and the LTRC project is on target for recruitment of the 2200 patients, with 1800 CT scans in the repository, for the 5-year effort. Ongoing analyses of the results in the LTRC database by the LTRC participating institutions and outside investigators are underway to examine the clinical and physiological significance of the imaging features of these diseases and correlate these findings with quality of life and other important prognostic indicators of severity. In the future, the quantitative measures of disease may have greater utility by showing correlation with prognosis, disease severity and other physiological parameters. These imaging features may provide non-invasive alternative endpoints or surrogate markers to alleviate the need for tissue biopsy or provide an accurate means to monitor the rate of disease progression or response to therapy.
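
    One of the threshold techniques mentioned can be sketched directly; the widely used -950 HU low-attenuation cutoff for emphysema is assumed here, and the LTRC pipeline's exact thresholds may differ.

    ```python
    import numpy as np

    def emphysema_index(ct_hu, lung_mask, cutoff=-950):
        """Percent of lung voxels below a low-attenuation cutoff.

        ct_hu:     3-D CT volume in Hounsfield units.
        lung_mask: 3-D binary mask of the segmented lungs.
        """
        lung_voxels = ct_hu[lung_mask.astype(bool)]
        return 100.0 * np.mean(lung_voxels < cutoff)  # % low-attenuation area
    ```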

  10. Quantitation of repaglinide and metabolites in mouse whole-body thin tissue sections using droplet-based liquid microjunction surface sampling-high-performance liquid chromatography-electrospray ionization tandem mass spectrometry.

    PubMed

    Chen, Weiqi; Wang, Lifei; Van Berkel, Gary J; Kertesz, Vilmos; Gan, Jinping

    2016-03-25

    Herein, quantitation aspects of a fully automated autosampler/HPLC-MS/MS system applied for unattended droplet-based surface sampling of repaglinide dosed thin tissue sections with subsequent HPLC separation and mass spectrometric analysis of parent drug and various drug metabolites were studied. Major organs (brain, lung, liver, kidney and muscle) from whole-body thin tissue sections and corresponding organ homogenates prepared from repaglinide dosed mice were sampled by surface sampling and by bulk extraction, respectively, and analyzed by HPLC-MS/MS. A semi-quantitative agreement between data obtained by surface sampling and that obtained by organ homogenate extraction was observed. Drug concentrations obtained by the two methods followed the same patterns for post-dose time points (0.25, 0.5, 1 and 2 h). Drug amounts determined in the specific tissues were typically higher when analyzing extracts from the organ homogenates. In addition, relative comparison of the levels of individual metabolites between the two analytical methods also revealed good semi-quantitative agreement. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Quantitation of repaglinide and metabolites in mouse whole-body thin tissue sections using droplet-based liquid microjunction surface sampling-high-performance liquid chromatography-electrospray ionization tandem mass spectrometry

    DOE PAGES

    Chen, Weiqi; Wang, Lifei; Van Berkel, Gary J.; ...

    2015-11-03

    Herein, quantitation aspects of a fully automated autosampler/HPLC-MS/MS system applied for unattended droplet-based surface sampling of repaglinide dosed thin tissue sections with subsequent HPLC separation and mass spectrometric analysis of parent drug and various drug metabolites were studied. Major organs (brain, lung, liver, kidney, muscle) from whole-body thin tissue sections and corresponding organ homogenates prepared from repaglinide dosed mice were sampled by surface sampling and by bulk extraction, respectively, and analyzed by HPLC-MS/MS. A semi-quantitative agreement between data obtained by surface sampling and that obtained by organ homogenate extraction was observed. Drug concentrations obtained by the two methods followed the same patterns for post-dose time points (0.25, 0.5, 1 and 2 h). Drug amounts determined in the specific tissues were typically higher when analyzing extracts from the organ homogenates. Furthermore, relative comparison of the levels of individual metabolites between the two analytical methods also revealed good semi-quantitative agreement.

  12. Quantitation of repaglinide and metabolites in mouse whole-body thin tissue sections using droplet-based liquid microjunction surface sampling-high-performance liquid chromatography-electrospray ionization tandem mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Weiqi; Wang, Lifei; Van Berkel, Gary J.

    Herein, quantitation aspects of a fully automated autosampler/HPLC-MS/MS system applied for unattended droplet-based surface sampling of repaglinide dosed thin tissue sections with subsequent HPLC separation and mass spectrometric analysis of parent drug and various drug metabolites were studied. Major organs (brain, lung, liver, kidney, muscle) from whole-body thin tissue sections and corresponding organ homogenates prepared from repaglinide dosed mice were sampled by surface sampling and by bulk extraction, respectively, and analyzed by HPLC-MS/MS. A semi-quantitative agreement between data obtained by surface sampling and that obtained by organ homogenate extraction was observed. Drug concentrations obtained by the two methods followed the same patterns for post-dose time points (0.25, 0.5, 1 and 2 h). Drug amounts determined in the specific tissues were typically higher when analyzing extracts from the organ homogenates. Furthermore, relative comparison of the levels of individual metabolites between the two analytical methods also revealed good semi-quantitative agreement.

  13. Automated optimization techniques for aircraft synthesis

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.

  14. MsViz: A Graphical Software Tool for In-Depth Manual Validation and Quantitation of Post-translational Modifications.

    PubMed

    Martín-Campos, Trinidad; Mylonas, Roman; Masselot, Alexandre; Waridel, Patrice; Petricevic, Tanja; Xenarios, Ioannis; Quadroni, Manfredo

    2017-08-04

    Mass spectrometry (MS) has become the tool of choice for the large scale identification and quantitation of proteins and their post-translational modifications (PTMs). This development has been enabled by powerful software packages for the automated analysis of MS data. While data on PTMs of thousands of proteins can nowadays be readily obtained, fully deciphering the complexity and combinatorics of modification patterns even on a single protein often remains challenging. Moreover, functional investigation of PTMs on a protein of interest requires validation of the localization and the accurate quantitation of its changes across several conditions, tasks that often still require human evaluation. Software tools for large scale analyses are highly efficient but are rarely conceived for interactive, in-depth exploration of data on individual proteins. We here describe MsViz, a web-based and interactive software tool that supports manual validation of PTMs and their relative quantitation in small- and medium-size experiments. The tool displays sequence coverage information, peptide-spectrum matches, tandem MS spectra and extracted ion chromatograms through a single, highly intuitive interface. We found that MsViz greatly facilitates manual data inspection to validate PTM location and quantitate modified species across multiple samples.

  15. A quantitative image cytometry technique for time series or population analyses of signaling networks.

    PubMed

    Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya

    2010-04-01

    Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data on signaling events with high spatio-temporal resolution. A novel technique that allows us to obtain such data is needed for the systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high-precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique is highly quantitative and versatile, and can be a convenient replacement for conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.

  16. Analysis of Invasive Activity of CAF Spheroids into Three Dimensional (3D) Collagen Matrices.

    PubMed

    Villaronga, María Ángeles; Teijeiro, Saúl Álvarez; Hermida-Prado, Francisco; Garzón-Arango, Marta; Sanz-Moreno, Victoria; García-Pedrero, Juana María

    2018-01-01

    Tumor growth and progression is the result of a complex process controlled not only by malignant cancer cells but also by the surrounding tumor microenvironment (TME). Cancer-associated fibroblasts (CAFs), the most abundant cellular component of the TME, play an active role in tumor invasion and metastasis by promoting cancer cell invasion through cell-cell interactions and secretion of pro-invasive factors such as extracellular matrix (ECM)-degrading proteases. Due to their tumor-promoting activities, there is an emerging interest in investigating CAF biology and their potential as drug targets for cancer therapies. Here we describe an easy and highly reproducible quantitative method to analyze CAF invasive activity by forming multicellular spheroids embedded into a three-dimensional (3D) matrix that mimics in vivo ECM. Subsequently, invasion is monitored over time using a time-lapse microscope. We also provide an automated image analysis system that enables rapid quantification of the spheroid area increase (invasive area) over time. The use of a 96-well plate format with one CAF spheroid per well, together with the automated analysis, provides a method suitable for drug screening tests, for example of protease inhibitors.
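
    A minimal sketch, assuming one spheroid per well imaged as a time-lapse stack of grayscale frames with the spheroid darker than the background, of quantifying the invasive-area increase over time:

    ```python
    import numpy as np
    from skimage import filters, measure

    def invasive_area_curve(frames, pixel_um=1.0):
        """frames: sequence of 2-D grayscale images of one spheroid over time.
        Returns the spheroid area per frame, normalized to the first frame."""
        areas = []
        for frame in frames:
            binary = frame < filters.threshold_otsu(frame)  # spheroid darker than background (assumed)
            props = measure.regionprops(measure.label(binary))
            if not props:
                areas.append(np.nan)  # no object detected in this frame
                continue
            largest = max(props, key=lambda r: r.area)  # keep the spheroid, drop debris
            areas.append(largest.area * pixel_um ** 2)
        return np.array(areas) / areas[0]  # fold-change in invasive area vs. t0
    ```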

  17. Automated podosome identification and characterization in fluorescence microscopy images.

    PubMed

    Meddens, Marjolein B M; Rieger, Bernd; Figdor, Carl G; Cambi, Alessandra; van den Dries, Koen

    2013-02-01

    Podosomes are cellular adhesion structures involved in matrix degradation and invasion that comprise an actin core and a ring of cytoskeletal adaptor proteins. They are most often identified by staining with phalloidin, which binds F-actin and therefore visualizes the core. However, not only podosomes, but also many other cytoskeletal structures contain actin, which makes podosome segmentation by automated image processing difficult. Here, we have developed a quantitative image analysis algorithm that is optimized to identify podosome cores within a typical sample stained with phalloidin. By sequential local and global thresholding, our analysis identifies up to 76% of podosome cores excluding other F-actin-based structures. Based on the overlap in podosome identifications and quantification of podosome numbers, our algorithm performs equally well compared to three experts. Using our algorithm we show effects of actin polymerization and myosin II inhibition on the actin intensity in both podosome core and associated actin network. Furthermore, by expanding the core segmentations, we reveal a previously unappreciated differential distribution of cytoskeletal adaptor proteins within the podosome ring. These applications illustrate that our algorithm is a valuable tool for rapid and accurate large-scale analysis of podosomes to increase our understanding of these characteristic adhesion structures.

  18. Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Tang, Xiaoying

    2017-03-01

    Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as for Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with the use of a whole-brain structural segmentation approach, which greatly reduces the dimension of features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole-brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded good classification results, with 96.08% overall accuracy achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
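
    A hedged sketch of the classification pipeline described (regional volumes, PCA dimension reduction, linear SVM, leave-one-out evaluation) using scikit-learn; the feature matrix X (subjects × 54 regional volumes) and labels y are assumed inputs, and the scaling step is an assumption not stated in the abstract.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def loo_accuracy(X, y, n_components=3):
        """Leave-one-out accuracy of a PCA + linear-SVM classifier.

        X: (n_subjects, 54) array of regional volumes; y: binary labels.
        PCA and the SVM are refit inside each fold to avoid leakage.
        """
        clf = make_pipeline(StandardScaler(),
                            PCA(n_components=n_components),
                            SVC(kernel="linear"))
        return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    ```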

  19. Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays

    PubMed Central

    Galati, Domenico F.; Abuin, David S.; Tauber, Gabriel A.; Pham, Andrew T.; Pearson, Chad G.

    2016-01-01

    Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional (3D) geometry. Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs. PMID:26700722

  20. Recent development in software and automation tools for high-throughput discovery bioanalysis.

    PubMed

    Shou, Wilson Z; Zhang, Jun

    2012-05-01

    Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. The LC-MS/MS bioanalytical support for drug discovery, especially for early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement, which depends on the particular experiments being conducted at this stage and is usually not as stringent as that required in bioanalysis supporting drug development. These attributes make HT discovery bioanalysis an ideal candidate for software and automation tools that eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate MS bioanalytical approach are also discussed.

  1. An automated approach for early detection of diabetic retinopathy using SD-OCT images.

    PubMed

    ElTanboly, Ahmed H; Palacio, Agustina; Shalaby, Ahmed M; Switala, Andrew E; Helmy, Omar; Schaal, Shlomit; El-Baz, Ayman

    2018-01-01

    This study aimed to demonstrate the feasibility of an automatic approach for early detection of diabetic retinopathy (DR) from SD-OCT images. Scans were prospectively collected through the fovea from 200 subjects and automatically segmented into 12 layers. Each layer was characterized by its thickness, tortuosity, and normalized reflectivity. 26 diabetic patients, without DR changes visible by funduscopic examination, were matched with 26 controls, according to age and sex, for purposes of statistical analysis using mixed-effects ANOVA. The INL was narrower in diabetes (p = 0.14), while the NFL (p = 0.04) and IZ (p = 0.34) were thicker. Tortuosity of layers NFL through the OPL was greater in diabetes (all p < 0.1), while significantly greater normalized reflectivity was observed in the MZ and OPR (both p < 0.01) as well as the ELM and IZ (both p < 0.5). This novel automated method provides quantitative analysis of the changes in each layer of the retina that occur with diabetes and, in turn, carries the promise of a reliable non-invasive diagnostic tool for early detection of DR.

  2. Automated Ki-67 Quantification of Immunohistochemical Staining Image of Human Nasopharyngeal Carcinoma Xenografts.

    PubMed

    Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin

    2016-08-26

    Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with a high incidence in China and Southeast Asia. Ki-67 protein is strictly associated with cell proliferation and degree of malignancy. Cells with higher Ki-67 expression are always sensitive to chemotherapy and radiotherapy, so its assessment is beneficial to NPC treatment. It is still challenging to automatically analyze immunohistochemical Ki-67-stained nasopharyngeal carcinoma images due to the uneven color distributions in different cell types. In order to solve this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations from specifically selected color spaces, then characterizes cells with a set of grading criteria for the reference of pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite image data variance. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which would be helpful in relevant histopathological research.
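
    Not the authors' algorithm: a generic sketch of the underlying strategy of classifying IHC image pixels by clustering features from a chosen color space (their method adds local pixel correlations and cell grading criteria on top of this idea).

    ```python
    from skimage import io, color
    from sklearn.cluster import KMeans

    def cluster_pixels(path, n_clusters=3):
        """Assign each pixel of an IHC image to one of n_clusters color classes."""
        rgb = io.imread(path)[..., :3] / 255.0
        lab = color.rgb2lab(rgb)              # perceptual color space (assumed choice)
        features = lab.reshape(-1, 3)         # one feature vector per pixel
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
        return labels.reshape(rgb.shape[:2])  # per-pixel class map
    ```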

  3. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 5: Current system evaluation

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The performance, costs, organization and other characteristics of both the manual system and AIDS 2 were used to establish a baseline case. The results of the evaluation are to be used to determine the feasibility of the AIDS 3 System, as well as provide a basis for ranking alternative systems during the second phase of the JPL study. The results of the study were tabulated by subject, scope and methods, providing a descriptive, quantitative and qualitative analysis of the current operating systems employed by the FBI Identification Division.

  4. Biopharmaceutical production: Applications of surface plasmon resonance biosensors.

    PubMed

    Thillaivinayagalingam, Pranavan; Gommeaux, Julien; McLoughlin, Michael; Collins, David; Newcombe, Anthony R

    2010-01-15

    Surface plasmon resonance (SPR) permits the quantitative analysis of therapeutic antibody concentrations and impurities including bacteria, Protein A, Protein G and small-molecule ligands leached from chromatography media. The use of surface plasmon resonance has gained popularity within the biopharmaceutical industry due to the automated, label-free, real-time interaction analysis that this method offers. Application areas in assessing protein interactions and developing analytical methods for biopharmaceutical downstream process development, quality control, and in-process monitoring are reviewed. Copyright © 2009 Elsevier B.V. All rights reserved.

  5. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    PubMed Central

    Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-01-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL⁻¹. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253

  6. Novel image cytometric method for detection of physiological and metabolic changes in Saccharomyces cerevisiae.

    PubMed

    Chan, Leo L; Kury, Alexandria; Wilkinson, Alisha; Berkes, Charlotte; Pirani, Alnoor

    2012-11-01

    Studying and monitoring physiological and metabolic changes in Saccharomyces cerevisiae (S. cerevisiae) has been a key research area for the brewing, baking, and biofuels industries, which rely on these economically important yeasts to produce their products. Specifically for breweries, physiological and metabolic parameters such as viability, vitality, glycogen, neutral lipid, and trehalose content can be measured to better understand the status of S. cerevisiae during fermentation. Traditionally, these physiological and metabolic changes are observed qualitatively using fluorescence microscopy, or by flow cytometry for quantitative fluorescence analysis of fluorescently labeled cellular components associated with each parameter. However, both methods pose known challenges to end-users. Specifically, conventional fluorescence microscopes lack the automation and fluorescence analysis capabilities needed to quantitatively analyze large numbers of cells. Although flow cytometry is suitable for quantitative analysis of tens of thousands of fluorescently labeled cells, the instruments require a considerable amount of maintenance and highly trained technicians, and the system is relatively expensive to both purchase and maintain. In this work, we demonstrate the first use of Cellometer Vision for the kinetic detection and analysis of vitality, glycogen, neutral lipid, and trehalose content of S. cerevisiae. This method provides an important research tool for large and small breweries to study and monitor these physiological behaviors during production, which can improve fermentation conditions to produce consistent and higher-quality products.

  7. Automated Liver Elasticity Calculation for 3D MRE

    PubMed Central

    Dzyubak, Bogdan; Glaser, Kevin J.; Manduca, Armando; Ehman, Richard L.

    2017-01-01

    Magnetic Resonance Elastography (MRE) is a phase-contrast MRI technique which calculates quantitative stiffness images, called elastograms, by imaging the propagation of acoustic waves in tissues. It is used clinically to diagnose liver fibrosis. Automated analysis of MRE is difficult as the corresponding MRI magnitude images (which contain anatomical information) are affected by intensity inhomogeneity, motion artifact, and poor tissue- and edge-contrast. Additionally, areas with low wave amplitude must be excluded. An automated algorithm has already been successfully developed and validated for clinical 2D MRE. 3D MRE acquires substantially more data and, due to accelerated acquisition, has exacerbated image artifacts. Also, the current 3D MRE processing does not yield a confidence map to indicate MRE wave quality and guide ROI selection, as is the case in 2D. In this study, an extension of the 2D automated method, with a simple wave-amplitude metric, was developed and validated against an expert reader in a set of 57 patient exams with both 2D and 3D MRE. The stiffness discrepancy with the expert for 3D MRE was −0.8% ± 9.45%, better than the discrepancy with the same reader for 2D MRE (−3.2% ± 10.43%) and better than the inter-reader discrepancy observed in previous studies. There were no automated processing failures in this dataset. Thus, the automated liver elasticity calculation (ALEC) algorithm is able to calculate stiffness from 3D MRE data with minimal bias and good precision, while enabling stiffness measurements to be fully reproducible and easily performed on large 3D MRE datasets. PMID:29033488

  8. Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis

    NASA Technical Reports Server (NTRS)

    Carpenter, P.

    2006-01-01

    Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.

  9. Quantitative pathology in virtual microscopy: history, applications, perspectives.

    PubMed

    Kayser, Gian; Kayser, Klaus

    2013-07-01

    With the emerging success of commercially available personal computers and the rapid progress in the development of information technologies, morphometric analyses of static histological images have been introduced to improve our understanding of the biology of diseases such as cancer. Early applications were quantifications of immunohistochemical expression patterns. In addition to object counting and feature extraction, laws of thermodynamics have been applied in morphometric calculations termed syntactic structure analysis. Here, one has to consider that the information of an image can be calculated for separate hierarchical layers such as single pixels, clusters of pixels, segmented small objects, clusters of small objects, and objects of higher order composed of several small objects. Using syntactic structure analysis in histological images, functional states can be extracted and the efficiency of labor in tissues can be quantified. Image standardization procedures, such as shading correction and color normalization, can overcome artifacts blurring clear thresholds. Morphometric techniques are not only useful to learn more about biological features of growth patterns; they can also be helpful in routine diagnostic pathology. In such cases, entropy calculations are applied in analogy to theoretical considerations concerning information content. Thus, regions with high information content can automatically be highlighted. Analysis of the "regions of high diagnostic value" can, in the context of clinical information, site of involvement and patient data (e.g. age, sex), support histopathological differential diagnoses. It can be expected that quantitative virtual microscopy will open new possibilities for automated histological support. Automated, integrated quantification of histological slides also serves quality assurance. The development and theoretical background of morphometric analyses in histopathology are reviewed, as well as their application and potential future implementation in virtual microscopy. Copyright © 2012 Elsevier GmbH. All rights reserved.

  10. Quantitative Microbial Risk Assessment Tutorial: HSPF Setup, Application, and Calibration of Flows and Microbial Fate and Transport on an Example Watershed

    EPA Science Inventory

    A Quantitative Microbial Risk Assessment (QMRA) infrastructure that automates the manual process of characterizing transport of pathogens and microorganisms, from the source of release to a point of exposure, has been developed by loosely configuring a set of modules and process-...

  11. Tunable, Quantitative Fenton-RAFT Polymerization via Metered Reagent Addition.

    PubMed

    Nothling, Mitchell D; McKenzie, Thomas G; Reyhani, Amin; Qiao, Greg G

    2018-05-10

    A continuous supply of radical species is a key requirement for activating chain growth and accessing quantitative monomer conversions in reversible addition-fragmentation chain transfer (RAFT) polymerization. In Fenton-RAFT, activation is provided by hydroxyl radicals, whose indiscriminate reactivity and short-lived nature pose a challenge to accessing extended polymerization times and quantitative monomer conversions. Here, an alternative Fenton-RAFT procedure is presented, whereby radical generation can be finely controlled via metered dosing of a component of the Fenton redox reaction (H2O2) using an external pumping system. By limiting the instantaneous flux of radicals and ensuring sustained radical generation over tunable time periods, metered reagent addition reduces unwanted radical "wasting" reactions and provides access to consistent quantitative monomer conversions with high chain-end fidelity. Fine tuning of the radical concentration during polymerization is achieved simply by adjusting the reagent dose rate, offering significant potential for automation. This modular strategy holds promise for extending traditional RAFT initiation toward more tightly regulated radical concentration profiles and affords excellent prospects for the automation of Fenton-RAFT polymerization. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. SU-G-206-01: A Fully Automated CT Tool to Facilitate Phantom Image QA for Quantitative Imaging in Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahi-Anwar, M; Lo, P; Kim, H

    Purpose: The use of Quantitative Imaging (QI) methods in clinical trials requires both verification of adherence to a specified protocol and an assessment of scanner performance under that protocol, which are currently accomplished manually. This work introduces automated phantom identification and image QA measure extraction towards a fully-automated CT phantom QA system to perform these functions and facilitate the use of QI methods in clinical trials. Methods: This study used a retrospective cohort of CT phantom scans from existing clinical trial protocols, totaling 84 phantoms across 3 phantom types, acquired on various scanners and protocols. The QA system identifies the input phantom scan through an ensemble of threshold-based classifiers. Each classifier, corresponding to a phantom type, contains a template slice, which is compared to the input scan on a slice-by-slice basis, yielding slice-wise similarity metric values. Pre-trained thresholds (established from a training set of phantom images matching the template type) are used to filter the similarity distribution, and the slice with the best local mean similarity, with neighboring slices meeting the threshold requirement, is chosen as the classifier's matched slice (if one exists). The classifier whose matched slice possesses the best local mean similarity is then chosen as the ensemble's best match. If a best matching slice exists, the image QA algorithm and ROIs corresponding to the matching classifier are used to extract the image QA measures. Results: Automated phantom identification performed with 84.5% accuracy and 88.8% sensitivity on 84 phantoms. Automated image quality measurements (following standard protocol) on identified water phantoms (n=35) matched user QA decisions with 100% accuracy. Conclusion: We provide a fully-automated CT phantom QA system consistent with manual QA performance. Further work will include a parallel component to automatically verify image acquisition parameters and adherence to specifications. Disclosures: Institutional research agreement, Siemens Healthcare; past recipient, research grant support, Siemens Healthcare; consultant, Toshiba America Medical Systems; consultant, Samsung Electronics; NIH grant support from U01 CA181156.
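
    The ensemble matching step lends itself to a short sketch. Pearson correlation is used as the slice similarity metric purely for illustration, since the abstract does not name the metric used:

        # Sketch of the ensemble idea: compare a classifier's template slice
        # against every slice of an input scan and pick the best locally
        # consistent match. Pearson correlation is an illustrative stand-in
        # for the unspecified similarity metric.
        import numpy as np

        def slice_similarity(template: np.ndarray, scan: np.ndarray) -> np.ndarray:
            t = (template - template.mean()) / (template.std() + 1e-12)
            sims = []
            for sl in scan:  # scan: (n_slices, H, W)
                s = (sl - sl.mean()) / (sl.std() + 1e-12)
                sims.append(float((t * s).mean()))
            return np.asarray(sims)

        def matched_slice(template, scan, threshold, halfwin=2):
            """Index of the slice with the best local mean similarity, with all
            neighbours in the window also meeting the pre-trained threshold."""
            sims = slice_similarity(template, scan)
            best_idx, best_local = None, -np.inf
            for i in range(halfwin, len(sims) - halfwin):
                window = sims[i - halfwin:i + halfwin + 1]
                if window.min() >= threshold and window.mean() > best_local:
                    best_idx, best_local = i, float(window.mean())
            return best_idx, best_local  # (None, -inf): phantom type not matched

    Running this once per phantom-type classifier and keeping the classifier with the highest local mean similarity reproduces the ensemble decision described above.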

  13. Multidimensional electrostatic repulsion-hydrophilic interaction chromatography (ERLIC) for quantitative analysis of the proteome and phosphoproteome in clinical and biomedical research.

    PubMed

    Loroch, Stefan; Schommartz, Tim; Brune, Wolfram; Zahedi, René Peiman; Sickmann, Albert

    2015-05-01

    Quantitative proteomics and phosphoproteomics have become key disciplines in understanding cellular processes. Fundamental research can be done using cell culture, providing researchers with virtually infinite sample amounts. In contrast, clinical, pre-clinical and biomedical research is often restricted to minute sample amounts and requires an efficient analysis with only micrograms of protein. To address this issue, we generated a highly sensitive workflow for combined LC-MS-based quantitative proteomics and phosphoproteomics by refining an ERLIC-based 2D phosphoproteomics workflow into an ERLIC-based 3D workflow that covers the global proteome as well. The resulting 3D strategy was successfully used for an in-depth quantitative analysis of both the proteome and the phosphoproteome of murine cytomegalovirus-infected mouse fibroblasts, a model system for host cell manipulation by a virus. In a 2-plex SILAC experiment with 150 μg of a tryptic digest per condition, the 3D strategy enabled the quantification of ~75% more proteins and ~134% more peptides compared to the 2D strategy. Additionally, we could quantify ~50% more phosphoproteins by non-phosphorylated peptides, concurrently yielding insights into changes at the levels of protein expression and phosphorylation. Besides its sensitivity, our three-dimensional ERLIC strategy has the potential for semi-automated sample processing, making it well suited for clinical, pre-clinical and biomedical research. Copyright © 2015. Published by Elsevier B.V.

  14. Petrographic characterization of lunar soils: Application of x ray digital-imaging to quantitative and automated analysis

    NASA Technical Reports Server (NTRS)

    Higgins, Stefan J.; Patchen, Allan; Chambers, John G.; Taylor, Lawrence A.; Mckay, David S.

    1994-01-01

    The rocks and soils of the moon will be the raw materials for various engineering needs at a lunar base, such as sources of hydrogen, oxygen, and metals. The material of choice for most of the bulk needs is the regolith and its less than 1 cm fraction, the soil. For specific mineral resources it may be necessary to concentrate minerals from either rocks or soils. Therefore, quantitative characterizations of these rocks and soils are necessary in order to better define their mineral resource potential. However, using standard point-counting microscopic procedures, it is difficult to quantitatively determine mineral abundances and virtually impossible to obtain data on mineral distributions within grains. As a start to fulfilling these needs, Taylor et al. and Chambers et al. have developed a procedure for characterization of crushed lunar rocks using x-ray digital imaging. The development of a similar digital-imaging procedure for lunar soils, based on x-ray maps obtained from a spectrometer, is described.

  15. Google glass based immunochromatographic diagnostic test analysis

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan

    2015-03-01

    Integration of optical imagers and sensors into recently emerging wearable computational devices allows for simpler and more intuitive methods of integrating biomedical imaging and medical diagnostics tasks into existing infrastructures. Here we demonstrate the ability of one such device, the Google Glass, to perform qualitative and quantitative analysis of immunochromatographic rapid diagnostic tests (RDTs) using a voice-commandable hands-free software-only interface, as an alternative to larger and more bulky desktop or handheld units. Using the built-in camera of Glass to image one or more RDTs (labeled with Quick Response (QR) codes), our Glass software application uploads the captured image and related information (e.g., user name, GPS, etc.) to our servers for remote analysis and storage. After digital analysis of the RDT images, the results are transmitted back to the originating Glass device, and made available through a website in geospatial and tabular representations. We tested this system on qualitative human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) RDTs. For qualitative HIV tests, we demonstrate successful detection and labeling (i.e., yes/no decisions) for up to 6-fold dilution of HIV samples. For quantitative measurements, we activated and imaged PSA concentrations ranging from 0 to 200 ng/mL and generated calibration curves relating the RDT line intensity values to PSA concentration. By providing automated digitization of both qualitative and quantitative test results, this wearable colorimetric diagnostic test reader platform on Google Glass can reduce operator errors caused by poor training, provide real-time spatiotemporal mapping of test results, and assist with remote monitoring of various biomedical conditions.
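
    The calibration step described above (relating test-line intensity to PSA concentration) can be sketched as a simple straight-line fit; the intensity values below are invented, and the actual reader may well use a nonlinear model:

        # Sketch: building a calibration curve that maps RDT test-line intensity
        # to PSA concentration, then inverting it for an unknown sample. A linear
        # fit is assumed for simplicity; the readings below are made up.
        import numpy as np

        conc = np.array([0.0, 10.0, 50.0, 100.0, 200.0])      # ng/mL standards
        intensity = np.array([2.0, 10.5, 44.0, 92.0, 181.0])  # hypothetical

        slope, intercept = np.polyfit(conc, intensity, deg=1)

        def estimate_concentration(measured_intensity: float) -> float:
            """Invert the calibration line: concentration = (I - b) / m."""
            return (measured_intensity - intercept) / slope

        print(f"I = {slope:.3f} * C + {intercept:.3f}")
        print(f"unknown at intensity 60 -> {estimate_concentration(60.0):.1f} ng/mL")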

  16. Capillary nano-immunoassays: advancing quantitative proteomics analysis, biomarker assessment, and molecular diagnostics.

    PubMed

    Chen, Jin-Qiu; Wakefield, Lalage M; Goldstein, David J

    2015-06-06

    There is an emerging demand for the use of molecular profiling to facilitate biomarker identification and development, and to stratify patients for more efficient treatment decisions with reduced adverse effects. In the past decade, great strides have been made to advance genomic, transcriptomic and proteomic approaches to address these demands. While there has been much progress with these large-scale approaches, profiling at the protein level still faces challenges due to limitations in clinical sample size, poor reproducibility, unreliable quantitation, and lack of assay robustness. A novel automated capillary nano-immunoassay (CNIA) technology has been developed. This technology offers precise and accurate measurement of proteins and their post-translational modifications using either charge-based or size-based separation formats. The system not only uses ultralow nanogram levels of protein but also allows multi-analyte analysis using a parallel single-analyte format for increased sensitivity and specificity. The high sensitivity and excellent reproducibility of this technology make it particularly powerful for the analysis of clinical samples. Furthermore, the system can distinguish and detect specific protein post-translational modifications that conventional Western blot and other immunoassays cannot easily capture. This review summarizes and evaluates the latest progress in optimizing the CNIA system for comprehensive, quantitative protein and signaling-event characterization. It also discusses how the technology has been successfully applied in both discovery research and clinical studies, for signaling pathway dissection, proteomic biomarker assessment, targeted treatment evaluation and quantitative proteomic analysis. Lastly, the system is compared with other conventional immunoassay platforms.

  17. Motion based parsing for video from observational psychology

    NASA Astrophysics Data System (ADS)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time-consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content-based video analysis that allow automated parsing of video from one such study involving dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  18. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods

    PubMed Central

    Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.

    2012-01-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394

  19. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods.

    PubMed

    Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein I; Bennett, Malcolm J; Pridmore, Tony P

    2012-06-05

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.

  20. 3D Filament Network Segmentation with Multiple Active Contours

    NASA Astrophysics Data System (ADS)

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2014-03-01

    Fluorescence microscopy is frequently used to study two- and three-dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and microtubules. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we developed a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at filament junctions. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D TIRF microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy.
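
    A minimal sketch of the ridge-based initialization idea, in pure NumPy/SciPy; this is not the SOAC implementation itself, only a bare Hessian test for bright curvilinear structure:

        # Sketch: locating intensity ridges in a 2D fluorescence image, the kind
        # of points at which SOACs are automatically initialized. A bare-bones
        # Hessian eigenvalue test, not the actual SOAC/SOAX initialization code.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ridge_mask(image: np.ndarray, sigma: float = 2.0, k: float = 1.0):
            im = gaussian_filter(image.astype(float), sigma)
            Iy, Ix = np.gradient(im)
            Iyy, Iyx = np.gradient(Iy)
            Ixy, Ixx = np.gradient(Ix)
            # Eigenvalues of the 2x2 Hessian [[Ixx, Ixy], [Ixy, Iyy]] per pixel.
            tr = Ixx + Iyy
            root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
            lam_min = tr / 2.0 - root  # strongest curvature, across the ridge
            # Bright curvilinear structure: strongly negative minimal eigenvalue.
            return lam_min < -k * np.abs(lam_min).std()

        # Pixels passing this test would seed stretching open active contours,
        # which then evolve along filament centerlines and merge, stop, or
        # reconfigure at junctions as described above.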

  1. MDB: the Metalloprotein Database and Browser at The Scripps Research Institute

    PubMed Central

    Castagnetto, Jesus M.; Hennessy, Sean W.; Roberts, Victoria A.; Getzoff, Elizabeth D.; Tainer, John A.; Pique, Michael E.

    2002-01-01

    The Metalloprotein Database and Browser (MDB; http://metallo.scripps.edu) at The Scripps Research Institute is a web-accessible resource for metalloprotein research. It offers the scientific community quantitative information on geometrical parameters of metal-binding sites in protein structures available from the Protein Data Bank (PDB). The MDB also offers analytical tools for the examination of trends or patterns in the indexed metal-binding sites. A user can perform interactive searches, metal-site structure visualization (via a Java applet), and analysis of the quantitative data by accessing the MDB through a web browser without requiring an external application or platform-dependent plugin. The MDB also has a non-interactive interface with which other web sites and network-aware applications can seamlessly incorporate data or statistical analysis results from metal-binding sites. The information contained in the MDB is periodically updated with automated algorithms that find and index metal sites from new protein structures released by the PDB. PMID:11752342

  2. Measuring single-cell gene expression dynamics in bacteria using fluorescence time-lapse microscopy

    PubMed Central

    Young, Jonathan W; Locke, James C W; Altinok, Alphan; Rosenfeld, Nitzan; Bacarian, Tigran; Swain, Peter S; Mjolsness, Eric; Elowitz, Michael B

    2014-01-01

    Quantitative single-cell time-lapse microscopy is a powerful method for analyzing gene circuit dynamics and heterogeneous cell behavior. We describe the application of this method to imaging bacteria by using an automated microscopy system. This protocol has been used to analyze sporulation and competence differentiation in Bacillus subtilis, and to quantify gene regulation and its fluctuations in individual Escherichia coli cells. The protocol involves seeding and growing bacteria on small agarose pads and imaging the resulting microcolonies. Images are then reviewed and analyzed using our laboratory's custom MATLAB analysis code, which segments and tracks cells from frame to frame. This process yields quantitative expression data on cell lineages, which can illustrate dynamic expression profiles and facilitate mathematical models of gene circuits. With fast-growing bacteria, such as E. coli or B. subtilis, image acquisition can be completed in 1 d, with an additional 1–2 d for progressing through the analysis procedure. PMID:22179594
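
    A minimal analogue of the segment-and-track step (the authors' code is in MATLAB; this Python sketch only illustrates the frame-to-frame idea, and all parameter values are invented):

        # Sketch: the core of a frame-to-frame segment-and-track loop, in the
        # spirit of (but far simpler than) the custom analysis code above.
        import numpy as np
        from scipy import ndimage

        def segment(frame: np.ndarray, thresh: float) -> np.ndarray:
            """Threshold fluorescence, label connected cells, return centroids."""
            labels, n = ndimage.label(frame > thresh)
            return np.array(ndimage.center_of_mass(frame, labels,
                                                   list(range(1, n + 1))))

        def link(prev_centroids, curr_centroids, max_dist=15.0):
            """Greedy nearest-centroid linking between consecutive frames."""
            links = {}
            for i, c in enumerate(curr_centroids):
                d = np.linalg.norm(prev_centroids - c, axis=1)
                j = int(d.argmin())
                if d[j] <= max_dist:
                    links[i] = j  # current cell i continues previous cell j
            return links

        # Per-lineage expression is then read out as the mean fluorescence of
        # each labeled region, accumulated along the links from frame to frame.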

  3. Subunit mass analysis for monitoring antibody oxidation.

    PubMed

    Sokolowska, Izabela; Mo, Jingjie; Dong, Jia; Lewis, Michael J; Hu, Ping

    2017-04-01

    Methionine oxidation is a common posttranslational modification (PTM) of monoclonal antibodies (mAbs). Oxidation can reduce the in-vivo half-life, efficacy and stability of the product. Peptide mapping is commonly used to monitor the levels of oxidation, but this is a relatively time-consuming method. A high-throughput, automated subunit mass analysis method was developed to monitor antibody methionine oxidation. In this method, samples were treated with IdeS, EndoS and dithiothreitol to generate three individual IgG subunits (light chain, Fd' and single chain Fc). These subunits were analyzed by reversed phase-ultra performance liquid chromatography coupled with an online quadrupole time-of-flight mass spectrometer and the levels of oxidation on each subunit were quantitated based on the deconvoluted mass spectra using the UNIFI software. The oxidation results obtained by subunit mass analysis correlated well with the results obtained by peptide mapping. Method qualification demonstrated that this subunit method had excellent repeatability and intermediate precision. In addition, UNIFI software used in this application allows automated data acquisition and processing, which makes this method suitable for high-throughput process monitoring and product characterization. Finally, subunit mass analysis revealed the different patterns of Fc methionine oxidation induced by chemical and photo stress, which makes it attractive for investigating the root cause of oxidation.

  4. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Abstract. Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
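
    A minimal sketch of spike detection from an extracted fluorescence trace; NeuroCa's published algorithms are more elaborate, so this only illustrates the principle, and the baseline percentile and threshold factor are invented:

        # Sketch: detecting calcium spikes from a single-neuron fluorescence
        # trace via a simple dF/F0 threshold with onset detection.
        import numpy as np

        def detect_spikes(trace: np.ndarray, baseline_pct=20, k=3.0):
            f0 = np.percentile(trace, baseline_pct)        # robust baseline
            dff = (trace - f0) / f0
            noise = np.std(dff[dff < np.median(dff)])      # sub-median spread
            above = dff > k * noise
            onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
            return onsets  # frame indices where a calcium spike begins

        # Applying this to ~1000 extracted cell traces yields the spike trains
        # used for stimulus-response quantification and synchrony mapping.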

  5. Quantitative Cell Cycle Analysis Based on an Endogenous All-in-One Reporter for Cell Tracking and Classification.

    PubMed

    Zerjatke, Thomas; Gak, Igor A; Kirova, Dilyana; Fuhrmann, Markus; Daniel, Katrin; Gonciarz, Magdalena; Müller, Doris; Glauche, Ingmar; Mansfeld, Jörg

    2017-05-30

    Cell cycle kinetics are crucial to cell fate decisions. Although live imaging has provided extensive insights into this relationship at the single-cell level, the limited number of fluorescent markers that can be used in a single experiment has hindered efforts to link the dynamics of individual proteins responsible for decision making directly to cell cycle progression. Here, we present fluorescently tagged endogenous proliferating cell nuclear antigen (PCNA) as an all-in-one cell cycle reporter that allows simultaneous analysis of cell cycle progression, including the transition into quiescence, and the dynamics of individual fate determinants. We also provide an image analysis pipeline for automated segmentation, tracking, and classification of all cell cycle phases. Combining the all-in-one reporter with labeled endogenous cyclin D1 and p21 as prime examples of cell-cycle-regulated fate determinants, we show how cell cycle and quantitative protein dynamics can be simultaneously extracted to gain insights into G1 phase regulation and responses to perturbations. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  6. High-throughput 3D whole-brain quantitative histopathology in rodents

    PubMed Central

    Vandenberghe, Michel E.; Hérard, Anne-Sophie; Souedet, Nicolas; Sadouni, Elmahdi; Santin, Mathieu D.; Briet, Dominique; Carré, Denis; Schulz, Jocelyne; Hantraye, Philippe; Chabrier, Pierre-Etienne; Rooney, Thomas; Debeir, Thomas; Blanchard, Véronique; Pradier, Laurent; Dhenain, Marc; Delzescaux, Thierry

    2016-01-01

    Histology is the gold standard to unveil microscopic brain structures and pathological alterations in humans and animal models of disease. However, due to tedious manual interventions, quantification of histopathological markers is classically performed on a few tissue sections, thus restricting measurements to limited portions of the brain. Recently developed 3D microscopic imaging techniques have allowed in-depth study of neuroanatomy. However, quantitative methods are still lacking for whole-brain analysis of cellular and pathological markers. Here, we propose a ready-to-use, automated, and scalable method to thoroughly quantify histopathological markers in 3D in rodent whole brains. It relies on block-face photography, serial histology and 3D-HAPi (Three Dimensional Histology Analysis Pipeline), an open source image analysis software. We illustrate our method in studies involving mouse models of Alzheimer’s disease and show that it can be broadly applied to characterize animal models of brain diseases, to evaluate therapeutic interventions, to anatomically correlate cellular and pathological markers throughout the entire brain and to validate in vivo imaging techniques. PMID:26876372

  7. Blackboard architecture for medical image interpretation

    NASA Astrophysics Data System (ADS)

    Davis, Darryl N.; Taylor, Christopher J.

    1991-06-01

    There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error-prone, and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models of feature appearance and location to be built from examples, as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available on the blackboard in a hypothesise-and-test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
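
    The hypothesise-and-test control cycle is the heart of a blackboard system and can be sketched compactly; all class and landmark names below are illustrative, not taken from the paper:

        # Sketch: the blackboard control pattern in miniature. Knowledge sources
        # inspect the shared blackboard and post or refine hypotheses until the
        # controller finds nothing left to do.
        class Blackboard:
            def __init__(self):
                self.hypotheses = []   # e.g., candidate cephalometric landmarks
                self.facts = {}        # e.g., image data, model constraints

        class KnowledgeSource:
            def applicable(self, bb: Blackboard) -> bool:
                raise NotImplementedError
            def act(self, bb: Blackboard) -> None:
                raise NotImplementedError

        class LocationHypothesiser(KnowledgeSource):
            def applicable(self, bb):
                return "image" in bb.facts and not bb.hypotheses
            def act(self, bb):
                bb.hypotheses.append({"landmark": "sella", "confidence": 0.4})

        def control_loop(bb: Blackboard, sources: list[KnowledgeSource]):
            while True:
                runnable = [s for s in sources if s.applicable(bb)]
                if not runnable:
                    break             # no source can contribute: solution stable
                runnable[0].act(bb)   # a real scheduler ranks by expected gain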

  8. Segmentation and detection of fluorescent 3D spots.

    PubMed

    Ram, Sundaresh; Rodríguez, Jeffrey J; Bosco, Giovanni

    2012-03-01

    The 3D spatial organization of genes and other genetic elements within the nucleus is important for regulating gene expression. Understanding how this spatial organization is established and maintained throughout the life of a cell is key to elucidating the many layers of gene regulation. Quantitative methods for studying nuclear organization will lead to insights into the molecular mechanisms that maintain gene organization, as well as serve as diagnostic tools for pathologies caused by loss of nuclear structure. However, biologists currently lack automated, high-throughput methods for quantitative and qualitative global analysis of 3D gene organization. In this study, we use confocal microscopy and fluorescence in-situ hybridization (FISH) as a cytogenetic technique to detect and localize the presence of specific DNA sequences in 3D. FISH uses probes that bind to specific targeted locations on the chromosomes, appearing as fluorescent spots in 3D images obtained using fluorescence microscopy. In this article, we propose an automated algorithm for segmentation and detection of 3D FISH spots. The algorithm is divided into two stages: spot segmentation and spot detection. Spot segmentation consists of 3D anisotropic smoothing to reduce the effect of noise, top-hat filtering, and intensity thresholding, followed by 3D region-growing. Spot detection uses a Bayesian classifier with spot features such as volume, average intensity, texture, and contrast to detect and classify the segmented spots as either true or false spots. Quantitative assessment of the proposed algorithm demonstrates improved segmentation and detection accuracy compared to other techniques. Copyright © 2012 International Society for Advancement of Cytometry.
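
    The segmentation stage maps naturally onto standard SciPy operations. In the sketch below, Gaussian smoothing stands in for the anisotropic smoothing, and the paper's Bayesian true/false-spot classifier is replaced by a bare volume filter, purely for brevity:

        # Sketch of the segmentation stage: smoothing, top-hat filtering,
        # intensity thresholding and labeling of candidate 3D FISH spots, with
        # simple feature extraction per spot.
        import numpy as np
        from scipy import ndimage

        def segment_spots(stack: np.ndarray, sigma=1.0, tophat_size=5, q=99.5):
            smooth = ndimage.gaussian_filter(stack.astype(float), sigma)
            tophat = ndimage.white_tophat(smooth, size=tophat_size)
            binary = tophat > np.percentile(tophat, q)
            labels, n = ndimage.label(binary)
            spots = []
            for i in range(1, n + 1):
                mask = labels == i
                spots.append({
                    "volume": int(mask.sum()),
                    "mean_intensity": float(stack[mask].mean()),
                    "centroid": ndimage.center_of_mass(mask),
                })
            # Crude stand-in for the Bayesian true/false-spot classification:
            return [s for s in spots if s["volume"] >= 4]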

  9. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and for DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies and define optimized manual and automated procedures for the extraction of high quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated to traditional phenol/chloroform extraction, a modified manual DNA IQ or automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  10. Hematocrit-Independent Quantitation of Stimulants in Dried Blood Spots: Pipet versus Microfluidic-Based Volumetric Sampling Coupled with Automated Flow-Through Desorption and Online Solid Phase Extraction-LC-MS/MS Bioanalysis.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-07-05

    A workflow overcoming microsample collection issues and hematocrit (HCT)-related bias would facilitate more widespread use of dried blood spots (DBS). This report describes comparative results between the use of a pipet and a microfluidic-based sampling device for the creation of volumetric DBS. Both approaches were successfully coupled to HCT-independent, fully automated sample preparation and online liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis allowing detection of five stimulants in finger prick blood. Reproducible, selective, accurate, and precise responses meeting generally accepted regulated bioanalysis guidelines were observed over the range of 5-1000 ng/mL whole blood. The applied heated flow-through solvent desorption of the entire spot and online solid phase extraction (SPE) procedure were unaffected by the blood's HCT value within the tested range of 28.0-61.5% HCT. Enhanced stability for mephedrone on DBS compared to liquid whole blood was observed. Finger prick blood samples were collected using both volumetric sampling approaches over a time course of 25 h after intake of a single oral dose of phentermine. A pharmacokinetic curve for the incurred phentermine was successfully produced using the described validated method. These results suggest that either volumetric sample collection method may be amenable to field-use followed by fully automated, HCT-independent DBS-SPE-LC-MS/MS bioanalysis for the quantitation of these representative controlled substances. Analytical data from DBS prepared with a pipet and microfluidic-based sampling devices were comparable, but the latter is easier to operate, making this approach more suitable for sample collection by unskilled persons.

  11. Performance of Automated Software in the Assessment of Segmental Left Ventricular Function in Cardiac CT: Comparison with Cardiac Magnetic Resonance.

    PubMed

    Wang, Rui; Meinel, Felix G; Schoepf, U Joseph; Canstein, Christian; Spearman, James V; De Cecco, Carlo N

    2015-12-01

    To evaluate the accuracy, reliability and time-saving potential of a novel cardiac CT (CCT)-based, automated software package for the assessment of segmental left ventricular function, compared with visual and manual quantitative assessment of CCT and cardiac magnetic resonance (CMR). Forty-seven patients with suspected or known coronary artery disease (CAD) were enrolled in the study. Wall thickening was calculated. Segmental LV wall motion was automatically calculated and shown as a colour-coded polar map. Processing time for each method was recorded. Mean wall thickness in the diastolic and systolic phases was 9.2 ± 0.1 mm and 14.9 ± 0.2 mm on the polar map, 8.9 ± 0.1 mm and 14.5 ± 0.1 mm on CCT, and 8.3 ± 0.1 mm and 13.6 ± 0.1 mm on CMR, respectively. Mean wall thickening was 68.4 ± 1.5 %, 64.8 ± 1.4 % and 67.1 ± 1.4 %, respectively. Agreement for the assessment of LV wall motion between CCT, CMR and polar maps was good. Bland-Altman plots and ICC indicated good agreement between CCT, CMR and automated polar maps for diastolic and systolic segmental wall thickness and thickening. The processing time using the polar map was significantly decreased compared with CCT and CMR. Automated evaluation of segmental LV function with polar maps provides measurements similar to manual CCT and CMR evaluation, albeit with substantially reduced analysis time. • Cardiac computed tomography (CCT) can accurately assess segmental left ventricular wall function. • A novel automated software permits accurate and fast evaluation of wall function. • The software may improve the clinical implementation of segmental functional analysis.
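
    For reference, the standard definition of segmental wall thickening, applied to the mean polar-map values above; note that averaging per-segment thickening (as in the study) is not the same as dividing mean thicknesses, so this rough number need not match the reported 68.4 %:

        # Sketch: the conventional wall-thickening formula,
        # thickening % = (systolic - diastolic) / diastolic * 100.
        def wall_thickening(diastolic_mm: float, systolic_mm: float) -> float:
            return 100.0 * (systolic_mm - diastolic_mm) / diastolic_mm

        print(f"{wall_thickening(9.2, 14.9):.1f}%")  # ~62% from the mean values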

  12. FBI fingerprint identification automation study: AIDS 3 evaluation report. Volume 8: Measures of effectiveness

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.

    1980-01-01

    The development of quantitative criteria used to evaluate conceptual systems for automating the functions of the FBI Identification Division is described. Specific alternative systems for automation were compared using these criteria, defined as Measures of Effectiveness (MOE), to gauge each system's performance in attempting to achieve certain goals. The MOE, essentially measurement tools developed through the combination of suitable parameters, pertain to each conceivable area of system operation. The methods and approaches used, both in selecting the parameters and in using the resulting MOE, are described.

  13. Quantitative phosphoproteomic analysis of host responses in human lung epithelial (A549) cells during influenza virus infection.

    PubMed

    Dapat, Clyde; Saito, Reiko; Suzuki, Hiroshi; Horigome, Tsuneyoshi

    2014-01-22

    The emergence of antiviral drug-resistant influenza viruses highlights the need for alternative therapeutic strategies. Elucidation of the host factors required during virus infection provides information not only on the signaling pathways involved but also on the identification of novel drug targets. RNA interference screening methods have been utilized by several studies to determine these host factors; however, proteomics data on influenza host factors are currently limited. In this study, quantitative phosphoproteomic analysis of a human lung cell line (A549) infected with the 2009 pandemic influenza A (H1N1) virus was performed. Phosphopeptides were enriched from tryptic digests of total protein from infected and mock-infected cells using a titania column on an automated purification system, followed by iTRAQ labeling. Identification and quantitative analysis of iTRAQ-labeled phosphopeptides were performed using LC-MS/MS. We identified 366 phosphorylation sites on 283 proteins. Of these, we detected 43 upregulated and 35 downregulated proteins during influenza virus infection. Gene ontology enrichment analysis showed that the majority of the identified proteins are phosphoproteins involved in RNA processing, immune system processes and the response to infection. Host-virus interaction network analysis identified 23 densely connected subnetworks, of which 13 contained proteins with altered phosphorylation levels during influenza virus infection. Our results will help to identify potential drug targets that can be pursued for influenza antiviral drug development. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Evaluation of Aution Max AX-4030 and 9UB Uriflet, 10PA Aution Sticks urine dipsticks in the automated urine test strip analysis.

    PubMed

    Rota, Cristina; Biondi, Marco; Trenti, Tommaso

    2011-09-26

    Aution Max AX-4030, a test strip analyzer recently introduced to the market, represents an upgrade of the Aution Max AX-4280 widely employed for urinalysis. This new instrument model can hold two different test strips at the same time. In the present study, the two instruments were compared using Uriflet 9UB and the recently introduced Aution Sticks 10PA urine strips, the latter featuring an additional test area for the measurement of urinary creatinine. Imprecision and correlation between instruments and strips were evaluated for chemical-physical parameters. Accuracy was evaluated for protein, glucose and creatinine by comparing the semi-quantitative results to those obtained by quantitative methods. The well-known interference of high ascorbic acid levels with urine glucose test strip determination was evaluated; the influence of ascorbic acid on protein and creatinine determination was also assessed. The two instruments demonstrated comparable performance: precision and correlation between instruments and strips, evaluated for chemical-physical parameters, were always good. Accuracy was also very good: semi-quantitative protein and glucose measurements were highly correlated with those obtained by quantitative methods, and the semi-quantitative creatinine measurements with the Aution Sticks 10PA strips were highly comparable with quantitative results. The 10PA strips are therefore suitable for urine creatinine determination, allowing urinalysis results to be corrected for urinary creatinine concentration whenever necessary and the protein:creatinine ratio to be calculated. Further studies should be carried out to evaluate the effectiveness and appropriateness of semi-quantitative creatinine analysis.
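
    The creatinine correction mentioned above reduces to a simple ratio; the mg/dL units below are a common convention assumed here, not taken from the strip documentation:

        # Sketch: the protein:creatinine ratio enabled by the 10PA strip's
        # creatinine test area; it corrects spot-urine protein for dilution.
        def protein_creatinine_ratio(protein_mg_dl: float,
                                     creatinine_mg_dl: float) -> float:
            """Ratio in mg protein per mg creatinine."""
            return protein_mg_dl / creatinine_mg_dl

        # e.g., 30 mg/dL protein with 100 mg/dL creatinine -> 0.3 mg/mg
        print(protein_creatinine_ratio(30.0, 100.0))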

  15. Computer-based fluorescence quantification: a novel approach to study nucleolar biology

    PubMed Central

    2011-01-01

    Background Nucleoli are composed of possibly several thousand different proteins and represent the most conspicuous compartments in the nucleus; they play a crucial role in the proper execution of many cellular processes. As such, nucleoli carry out ribosome biogenesis and sequester or associate with key molecules that regulate cell cycle progression, tumorigenesis, apoptosis and the stress response. Nucleoli are dynamic compartments that are characterized by a constant flux of macromolecules. Given the complex and dynamic composition of the nucleolar proteome, it is challenging to link modifications in nucleolar composition to downstream effects. Results In this contribution, we present quantitative immunofluorescence methods that rely on computer-based image analysis. We demonstrate the effectiveness of these techniques by monitoring the dynamic association of proteins and RNA with nucleoli under different physiological conditions. The protocols described here were employed to study stress-dependent changes in the nucleolar concentration of endogenous and GFP-tagged proteins. Furthermore, our methods were applied to measure de novo RNA synthesis associated with nucleoli. We show that the techniques described here can easily be combined with automated high-throughput screening (HTS) platforms, making it possible to obtain large data sets and analyze many of the biological processes that are located in nucleoli. Conclusions Our protocols set the stage to analyze, in a quantitative fashion, the kinetics of shuttling nucleolar proteins, both at the single-cell level and for large numbers of cells. Moreover, the procedures described here are compatible with high-throughput image acquisition and analysis using automated HTS platforms, thereby providing the basis to quantify nucleolar components and activities for numerous samples and experimental conditions. Together with the growing amount of information obtained for the nucleolar proteome, improvements in quantitative microscopy as described here can be expected to produce new insights into the complex biological functions that are orchestrated by the nucleolus. PMID:21639891

  16. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2003-09-30

    Samson, Scott (Center for Ocean Technology). The project's objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images.

  17. Validation of automated supervised segmentation of multibeam backscatter data from the Chatham Rise, New Zealand

    NASA Astrophysics Data System (ADS)

    Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens

    2018-06-01

    Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding the distribution of substrate types. The results allow us to assess the limitations associated with low-frequency MBES where sub-bottom layering is present, and to test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced by quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to the grain size and roughness properties of the substrate, but also accounts for other parameters that influence backscatter. A better understanding of these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.
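
    The confusion-matrix validation can be sketched with the usual accuracy measures; the 3x3 matrix below is invented for illustration:

        # Sketch: quantitative validation of a supervised segmentation via a
        # confusion matrix of observed vs predicted substrate classes.
        import numpy as np

        cm = np.array([[30,  4,  1],   # rows: observed substrate class
                       [ 5, 22,  3],   # cols: class predicted by segmentation
                       [ 2,  6, 27]])

        overall = np.trace(cm) / cm.sum()
        producers = np.diag(cm) / cm.sum(axis=1)  # per observed class (omission)
        users = np.diag(cm) / cm.sum(axis=0)      # per predicted class (commission)

        print(f"overall accuracy: {overall:.2f}")
        print("producer's accuracy per class:", producers.round(2))
        print("user's accuracy per class:", users.round(2))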

  18. Java Web Start based software for automated quantitative nuclear analysis of prostate cancer and benign prostate hyperplasia.

    PubMed

    Singh, Swaroop S; Kim, Desok; Mohler, James L

    2005-05-11

    Androgen acts via the androgen receptor (AR), and accurate measurement of the level of AR protein expression is critical for prostate research. The image analysis software described here performs local segmentation and uses nuclear shape and size to detect prostatic epithelial nuclei. AR expression is described by (a) the percentage of immunopositive nuclei; (b) the percentage of immunopositive nuclear area; and (c) the intensity of AR expression among immunopositive nuclei or areas. A set of 200 immunopositive and 200 immunonegative nuclei was collected from the images using a macro developed in Image Pro Plus, and Linear Discriminant and Logistic Regression analyses were performed on these data to generate classification coefficients. Classification coefficients render the automated image analysis software independent of the type of immunostaining or image acquisition system used; they are required because immunostaining and image collection methods are not standardized across medical institutions and research laboratories, and they help customize the software for the specimen under study. To demonstrate an application of this system, the expression of AR in paired specimens of benign prostate and prostate cancer from 20 African and 20 Caucasian Americans was compared. The percent positive nuclei and percent nuclear area were similar by race in both benign prostate hyperplasia and prostate cancer. In prostate cancer epithelial nuclei, African Americans exhibited 38% higher levels of AR immunostaining than Caucasian Americans (two-sided Student's t-tests; P < 0.05). The intensity of AR immunostaining was similar between races in benign prostate. The differences measured in the intensity of AR expression in prostate cancer were consistent with previous studies. The availability of a free, automated system creates new opportunities for testing, evaluation and use of this image analysis system by many research groups who study nuclear protein expression.
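
    The three AR readouts, together with a linear classification rule of the kind produced by discriminant or logistic coefficients, can be sketched as follows; all arrays and coefficients below are invented:

        # Sketch: classifying nuclei as immunopositive with linear coefficients
        # and computing the three AR expression measures described above.
        import numpy as np

        area = np.array([52.0, 61.0, 48.0, 70.0, 55.0])          # nucleus areas
        intensity = np.array([120.0, 30.0, 140.0, 25.0, 135.0])  # mean staining

        # Hypothetical coefficients, as if trained on 200 pos / 200 neg nuclei.
        w, b = np.array([0.0, 0.05]), -4.0
        positive = (np.column_stack([area, intensity]) @ w + b) > 0

        pct_pos_nuclei = 100.0 * positive.mean()
        pct_pos_area = 100.0 * area[positive].sum() / area.sum()
        mean_pos_intensity = intensity[positive].mean()
        print(pct_pos_nuclei, round(pct_pos_area, 1), round(mean_pos_intensity, 1))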

  19. Automated analysis of plethysmograms for functional studies of hemodynamics

    NASA Astrophysics Data System (ADS)

    Zatrudina, R. Sh.; Isupov, I. B.; Gribkov, V. Yu.

    2018-04-01

    Impedance plethysmography is the most promising method for the quantitative determination of indicators of cardiovascular tone and of cerebral hemodynamics. Accurate determination of these indicators requires the correct identification of the characteristic points in the thoracic and cranial impedance plethysmograms, respectively. An algorithm for the automatic analysis of these plethysmograms is presented. The algorithm is based on the fixed temporal relationships between the phases of the cardiac cycle and the characteristic points of the plethysmogram. The proposed algorithm does not require estimation of the initial data or selection of processing parameters. Use of the method on healthy subjects showed a very low error in detecting the characteristic points.
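
    The fixed timing relationships can be exploited by searching for each landmark only within a narrow window tied to the cardiac cycle; in the sketch below an ECG R-peak train is assumed as the timing reference, and the window fractions are illustrative, not the authors' values:

        # Sketch: locating plethysmogram landmarks inside expected windows
        # anchored to cardiac-cycle timing (R-peak indices from a synchronous
        # ECG), rather than searching the whole trace.
        import numpy as np

        def characteristic_points(pleth, r_peaks, fs,
                                  foot_win=(0.02, 0.20), peak_win=(0.10, 0.40)):
            points = []
            for r in r_peaks:
                f0, f1 = r + int(foot_win[0] * fs), r + int(foot_win[1] * fs)
                p0, p1 = r + int(peak_win[0] * fs), r + int(peak_win[1] * fs)
                if p1 >= len(pleth):
                    break
                foot = f0 + int(np.argmin(pleth[f0:f1]))  # pulse onset (foot)
                peak = p0 + int(np.argmax(pleth[p0:p1]))  # systolic maximum
                points.append((foot, peak))
            return points  # indices from which amplitude/timing indicators follow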

  20. Automated quantitative assessment of proteins' biological function in protein knowledge bases.

    PubMed

    Mayr, Gabriele; Lepperdinger, Günter; Lackner, Peter

    2008-01-01

    Primary protein sequence data are archived in databases together with information regarding the corresponding biological functions. In this respect, UniProt/Swiss-Prot is currently the most comprehensive collection, and it is routinely cross-examined when trying to unravel the biological role of hypothetical proteins. Bioscientists frequently extract single entries and evaluate them on a subjective basis. In lieu of a standardized procedure for scoring the existing knowledge regarding individual proteins, we report a computer-assisted method that scores the present knowledge about any given Swiss-Prot entry. Applying this quantitative score allows proteins to be compared not only with respect to their sequence but also with respect to the extent of their functional annotation. pfs analysis may also be applied for quality control of individual entries or for database management in order to rank entry listings.

  1. Quantification of whispering gallery mode spectrum variability in application to sensing nanobiophotonics

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Anton; Skakun, Victor; Saetchnikov, Vladimir; Tcherniavskaia, Elina; Ostendorf, Andreas

    2017-10-01

    An approach for automated whispering gallery mode (WGM) signal decomposition and parameter estimation is discussed. The algorithm is based on peak picking and can be applied to the preprocessing of raw signals acquired from multiplexed WGM-based biosensing chips. The outputs are quantitative estimates of physically meaningful parameters describing the influence of external disturbing factors on the WGM spectral shape. The derived parameters can be directly applied to further in-depth qualitative and quantitative interpretation of the sensed disturbing factors. The algorithm is tested on both simulated and experimental data taken from a bovine serum albumin biosensing task. The proposed solution is expected to be a useful contribution to the preprocessing phase of a complete data analysis engine and to push WGM technology toward real-life sensing nanobiophotonics.
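
    Peak picking on a WGM spectrum reduces each resonance to a few physically meaningful numbers; the thresholds below are illustrative, and the paper's decomposition estimates further parameters:

        # Sketch: peak picking on a WGM transmission-type spectrum, reducing
        # each resonance to a center wavelength, prominence and linewidth.
        import numpy as np
        from scipy.signal import find_peaks

        def pick_modes(wavelength_nm: np.ndarray, signal: np.ndarray):
            idx, props = find_peaks(signal, prominence=0.05, width=1)
            step = float(np.mean(np.diff(wavelength_nm)))  # uniform sampling assumed
            return [{
                "lambda_nm": float(wavelength_nm[i]),
                "prominence": float(p),
                "fwhm_nm": float(w * step),
            } for i, p, w in zip(idx, props["prominences"], props["widths"])]

        # Tracking lambda_nm shifts of each mode over time then yields the
        # quantitative response to the sensed disturbing factor (e.g., BSA binding).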

  2. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments.

    PubMed

    Bass, Ellen J; Baumgart, Leigh A; Shepley, Kathryn Klein

    2013-03-01

    Displaying both the strategy that information analysis automation employs to make its judgments and the variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the automation. This work investigated the contribution of providing information analysis automation strategy information, task environment information, or both, to human judgment performance in a domain where noisy sensor data are used by both the human and the automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability-of-horizontal-conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants compared to judgments made without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement than those provided with only the automation's judged probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance.

  3. Objective measurement of accommodative biometric changes using ultrasound biomicroscopy

    PubMed Central

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2015-01-01

    PURPOSE To demonstrate that ultrasound biomicroscopy (UBM) can be used for objective quantitative measurements of anterior segment accommodative changes. SETTING College of Optometry, University of Houston, Houston, Texas, USA. DESIGN Prospective cross-sectional study. METHODS Anterior segment biometric changes in response to 0 to 6.0 diopters (D) of accommodative stimuli in 1.0 D steps were measured in eyes of human subjects aged 21 to 36 years. Imaging was performed in the left eye using a 35 MHz UBM (Vumax) and an A-scan ultrasound (A-5500) while the right eye viewed the accommodative stimuli. An automated Matlab image-analysis program was developed to measure the biometry parameters from the UBM images. RESULTS The UBM-measured accommodative changes in anterior chamber depth (ACD), lens thickness, anterior lens radius of curvature, posterior lens radius of curvature, and anterior segment length were statistically significantly (P < .0001) linearly correlated with accommodative stimulus amplitudes. Standard deviations of the UBM-measured parameters were independent of the accommodative stimulus demands (ACD 0.0176 mm, lens thickness 0.0294 mm, anterior lens radius of curvature 0.3350 mm, posterior lens radius of curvature 0.1580 mm, and anterior segment length 0.0340 mm). The mean difference between the A-scan and UBM measurements was −0.070 mm for ACD and 0.166 mm for lens thickness. CONCLUSIONS Accommodating phakic eyes imaged using UBM allowed visualization of the accommodative response, and automated image analysis of the UBM images allowed reliable, objective, quantitative measurements of the accommodative intraocular biometric changes. PMID:25804579

  4. Automated Quantification and Integrative Analysis of 2D and 3D Mitochondrial Shape and Network Properties

    PubMed Central

    Nikolaisen, Julie; Nilsson, Linn I. H.; Pettersen, Ina K. N.; Willems, Peter H. G. M.; Lorens, James B.; Koopman, Werner J. H.; Tronstad, Karl J.

    2014-01-01

    Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions and (adaptation to) endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. “mitochondrial dynamics”) are linked to cellular (patho) physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology they are not applicable for thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescence protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate that 3D imaging and quantification are crucial for proper understanding of mitochondrial shape and topology in non-flat cells. In summary, we here present an integrative method for unbiased 3D quantification of mitochondrial shape and network properties in mammalian cells. PMID:24988307

  5. metAlignID: a high-throughput software tool set for automated detection of trace level contaminants in comprehensive LECO two-dimensional gas chromatography time-of-flight mass spectrometry data.

    PubMed

    Lommen, Arjen; van der Kamp, Henk J; Kools, Harrie J; van der Lee, Martijn K; van der Weg, Guido; Mol, Hans G J

    2012-11-09

    A new alternative data processing tool set, metAlignID, is developed for automated pre-processing and library-based identification and concentration estimation of target compounds after analysis by comprehensive two-dimensional gas chromatography with mass spectrometric detection. The tool set has been developed for and tested on LECO data. The software is developed to run multi-threaded (one thread per processor core) on a standard PC (personal computer) under different operating systems and is as such capable of processing multiple data sets simultaneously. Raw data files are converted into netCDF (network Common Data Form) format using a fast conversion tool. They are then preprocessed using previously developed algorithms originating from the metAlign software. Next, the resulting reduced data files are searched against a user-composed library derived from user or commercial NIST-compatible libraries (NIST = National Institute of Standards and Technology), and the identified compounds, including an indicative concentration, are reported in Excel format. Data can be processed batch-wise. The overall time needed for conversion, processing, and searching of 30 raw data sets for 560 compounds is routinely within an hour. The screening performance is evaluated for detection of pesticides and contaminants in raw data obtained after analysis of soil and plant samples. Results are compared to the existing data-handling routine based on proprietary software (LECO, ChromaTOF). The developed software tool set, which is freely downloadable at www.metalign.nl, greatly accelerates data analysis and offers more options for fine-tuning automated identification toward specific application needs. The quality of the results obtained is slightly better than with the standard processing, and a quantitative estimate is added. The software tool set in combination with two-dimensional gas chromatography coupled to time-of-flight mass spectrometry shows great potential as a highly automated and fast multi-residue instrumental screening method. Copyright © 2012 Elsevier B.V. All rights reserved.
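
    The library-search step scores each deconvoluted spectrum against candidate entries. As a hedged illustration of the general idea only (not metAlignID's actual scoring function), a cosine match over binned nominal m/z values:

    ```python
    import numpy as np

    def cosine_match(query, library_entry, max_mz=500):
        """Cosine similarity between two EI mass spectra given as
        {nominal m/z: intensity} dicts. A generic stand-in for
        NIST-style library matching, not metAlignID's actual score."""
        q = np.zeros(max_mz + 1)
        lib = np.zeros(max_mz + 1)
        for mz, inten in query.items():
            q[mz] = inten
        for mz, inten in library_entry.items():
            lib[mz] = inten
        denom = np.linalg.norm(q) * np.linalg.norm(lib)
        return float(q @ lib / denom) if denom else 0.0

    # Hypothetical deconvoluted spectrum vs. a library candidate.
    observed = {43: 80, 57: 100, 71: 60, 85: 30}
    candidate = {43: 85, 57: 95, 71: 65, 85: 25, 99: 4}
    print(round(cosine_match(observed, candidate), 3))   # near 1.0 -> hit
    ```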

  6. Quantification of Dynamic Morphological Drug Responses in 3D Organotypic Cell Cultures by Automated Image Analysis

    PubMed Central

    Härmä, Ville; Schukov, Hannu-Pekka; Happonen, Antti; Ahonen, Ilmari; Virtanen, Johannes; Siitari, Harri; Åkerfelt, Malin; Lötjönen, Jyrki; Nees, Matthias

    2014-01-01

    Glandular epithelial cells differentiate into complex multicellular or acinar structures, when embedded in three-dimensional (3D) extracellular matrix. The spectrum of different multicellular morphologies formed in 3D is a sensitive indicator for the differentiation potential of normal, non-transformed cells compared to different stages of malignant progression. In addition, single cells or cell aggregates may actively invade the matrix, utilizing epithelial, mesenchymal or mixed modes of motility. Dynamic phenotypic changes involved in 3D tumor cell invasion are sensitive to specific small-molecule inhibitors that target the actin cytoskeleton. We have used a panel of inhibitors to demonstrate the power of automated image analysis as a phenotypic or morphometric readout in cell-based assays. We introduce a streamlined stand-alone software solution that supports large-scale high-content screens, based on complex and organotypic cultures. AMIDA (Automated Morphometric Image Data Analysis) allows quantitative measurements of large numbers of images and structures, with a multitude of different spheroid shapes, sizes, and textures. AMIDA supports an automated workflow, and can be combined with quality control and statistical tools for data interpretation and visualization. We have used a representative panel of 12 prostate and breast cancer lines that display a broad spectrum of different spheroid morphologies and modes of invasion, challenged by a library of 19 direct or indirect modulators of the actin cytoskeleton which induce systematic changes in spheroid morphology and differentiation versus invasion. These results were independently validated by 2D proliferation, apoptosis and cell motility assays. We identified three drugs that primarily attenuated the invasion and formation of invasive processes in 3D, without affecting proliferation or apoptosis. Two of these compounds block Rac signalling, one affects cellular cAMP/cGMP accumulation. Our approach supports the growing needs for user-friendly, straightforward solutions that facilitate large-scale, cell-based 3D assays in basic research, drug discovery, and target validation. PMID:24810913

  7. Hippocampal volume change measurement: quantitative assessment of the reproducibility of expert manual outlining and the automated methods FreeSurfer and FIRST.

    PubMed

    Mulder, Emma R; de Jong, Remko A; Knol, Dirk L; van Schijndel, Ronald A; Cover, Keith S; Visser, Pieter J; Barkhof, Frederik; Vrenken, Hugo

    2014-05-15

    To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility as compared to automated methods, but this has not been investigated. Our aim was to determine the reproducibilities of expert manual outlining and two common automated methods for measuring hippocampal atrophy rates in healthy aging, MCI and AD. From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 80 subjects were selected: 20 patients with AD, 40 patients with MCI and 20 healthy controls (HCs). Left and right hippocampal volume change between baseline and month-12 visit was assessed by using expert manual delineation, and by the automated software packages FreeSurfer (longitudinal processing stream) and FIRST. To assess reproducibility of the measured hippocampal volume change, both back-to-back (BTB) MPRAGE scans available for each visit were analyzed. Hippocampal volume change was expressed in μL, and as a percentage of baseline volume. Reproducibility of the 1-year hippocampal volume change was estimated from the BTB measurements by using a linear mixed model to calculate the limits of agreement (LoA) of each method, reflecting its measurement uncertainty. Using the delta method, approximate p-values were calculated for the pairwise comparisons between methods. Statistical analyses were performed both with inclusion and exclusion of visibly incorrect segmentations. Visibly incorrect automated segmentation in either one or both scans of a longitudinal scan pair occurred in 7.5% of the hippocampi for FreeSurfer and in 6.9% of the hippocampi for FIRST. After excluding these failed cases, reproducibility analysis for 1-year percentage volume change yielded LoA of ±7.2% for FreeSurfer, ±9.7% for expert manual delineation, and ±10.0% for FIRST. Methods ranked the same for reproducibility of 1-year μL volume change, with LoA of ±218 μL for FreeSurfer, ±319 μL for expert manual delineation, and ±333 μL for FIRST. Approximate p-values indicated that reproducibility was better for FreeSurfer than for manual or FIRST, and that manual and FIRST did not differ. Inclusion of failed automated segmentations led to a worsening of reproducibility of both automated methods for 1-year raw and percentage volume change. Quantitative reproducibility values of 1-year microliter and percentage hippocampal volume change were roughly similar between expert manual outlining, FIRST and FreeSurfer, but FreeSurfer reproducibility was statistically significantly superior to both manual outlining and FIRST after exclusion of failed segmentations. Copyright © 2014 Elsevier Inc. All rights reserved.
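
    The limits-of-agreement idea is easy to state concretely. The sketch below uses the simple paired-difference (Bland-Altman-style) approximation on hypothetical back-to-back measurements; the study itself fitted a linear mixed model, which handles the nested design more properly.

    ```python
    import numpy as np

    def limits_of_agreement(change_btb1, change_btb2):
        """Bias and limits of agreement between two measurements of the
        same 1-year volume change from back-to-back (BTB) scan pairs.
        Simplified Bland-Altman-style computation, not the study's
        linear mixed model."""
        d = np.asarray(change_btb1) - np.asarray(change_btb2)
        return d.mean(), 1.96 * d.std(ddof=1)

    # Hypothetical percentage volume changes measured on BTB scan pairs.
    rng = np.random.default_rng(1)
    true_change = rng.normal(-3.0, 2.0, 80)        # simulated % atrophy
    btb1 = true_change + rng.normal(0, 2.5, 80)    # measurement noise, scan 1
    btb2 = true_change + rng.normal(0, 2.5, 80)    # measurement noise, scan 2
    bias, loa = limits_of_agreement(btb1, btb2)
    print(f"bias {bias:+.2f}%, LoA ±{loa:.1f}%")
    ```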

  8. Chronic Obstructive Pulmonary Disease Exacerbations in the COPDGene Study: Associated Radiologic Phenotypes

    PubMed Central

    Kazerooni, Ella A.; Lynch, David A.; Liu, Lyrica X.; Murray, Susan; Curtis, Jeffrey L.; Criner, Gerard J.; Kim, Victor; Bowler, Russell P.; Hanania, Nicola A.; Anzueto, Antonio R.; Make, Barry J.; Hokanson, John E.; Crapo, James D.; Silverman, Edwin K.; Martinez, Fernando J.; Washko, George R.

    2011-01-01

    Purpose: To test the hypothesis—given the increasing emphasis on quantitative computed tomographic (CT) phenotypes of chronic obstructive pulmonary disease (COPD)—that a relationship exists between COPD exacerbation frequency and quantitative CT measures of emphysema and airway disease. Materials and Methods: This research protocol was approved by the institutional review board of each participating institution, and all participants provided written informed consent. One thousand two subjects enrolled in the COPDGene Study who met the GOLD (Global Initiative for Chronic Obstructive Lung Disease) criteria for COPD and had quantitative CT analysis available were included. Total lung emphysema percentage was measured by using the attenuation mask technique with a −950-HU threshold. An automated program measured the mean wall thickness and mean wall area percentage in six segmental bronchi. The frequency of COPD exacerbation in the prior year was determined by using a questionnaire. Statistical analysis was performed to examine the relationship of exacerbation frequency with lung function and quantitative CT measurements. Results: In a multivariate analysis adjusted for lung function, bronchial wall thickness and total lung emphysema percentage were associated with COPD exacerbation frequency. Each 1-mm increase in bronchial wall thickness was associated with a 1.84-fold increase in annual exacerbation rate (P = .004). For patients with 35% or greater total emphysema, each 5% increase in emphysema was associated with a 1.18-fold increase in this rate (P = .047). Conclusion: Greater lung emphysema and airway wall thickness were associated with COPD exacerbations, independent of the severity of airflow obstruction. Quantitative CT can help identify subgroups of patients with COPD who experience exacerbations for targeted research and therapy development for individual phenotypes. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11110173/-/DC1 PMID:21788524
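
    As a rough illustration of the attenuation-mask technique (not the COPDGene implementation), the sketch below computes the percentage of lung voxels below −950 HU from a Hounsfield-unit volume and a lung mask; the synthetic data are hypothetical.

    ```python
    import numpy as np

    def emphysema_percentage(hu_volume, lung_mask, threshold_hu=-950):
        """Percentage of lung voxels below the emphysema threshold.
        Inputs: a CT volume in Hounsfield units and a boolean lung
        segmentation mask."""
        lung_voxels = hu_volume[lung_mask]
        return 100.0 * np.mean(lung_voxels < threshold_hu)

    # Toy example: a 3D HU volume with a crude "lung" region.
    rng = np.random.default_rng(2)
    ct = rng.normal(-850, 60, (32, 64, 64))
    mask = np.zeros_like(ct, dtype=bool)
    mask[:, 8:56, 8:56] = True
    print(f"{emphysema_percentage(ct, mask):.1f}% of lung below -950 HU")
    ```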

  9. Automated analysis of art object surfaces using time-averaged digital speckle pattern interferometry

    NASA Astrophysics Data System (ADS)

    Lukomski, Michal; Krzemien, Leszek

    2013-05-01

    Technical development and practical evaluation of a laboratory-built, out-of-plane digital speckle pattern interferometer (DSPI) are reported. The instrument was used for non-invasive, non-contact detection and characterization of early-stage damage, such as fracturing and layer separation, in painted objects of art. A fully automated algorithm was developed for recording and analysis of vibrating objects utilizing continuous-wave laser light. The algorithm uses direct numerical fitting or Hilbert transformation for an independent, quantitative evaluation of the Bessel function at every point of the investigated surface. The procedure does not require phase modulation and thus can be implemented within any, even the simplest, DSPI apparatus. The proposed deformation analysis is fast and computationally inexpensive. Diagnosis of the physical state of the surface of a panel painting attributed to Nicolaus Haberschrack (a late-mediaeval painter active in Krakow) from the collection of the National Museum in Krakow is presented as an example of an in situ application of the developed methodology. It has allowed the effectiveness of the deformation analysis to be evaluated for the surface of a real painting (heterogeneous colour and texture) in a conservation studio, where the vibration level was considerably higher than in the laboratory. It has been established that the methodology, which offers automatic analysis of the interferometric fringe patterns, has considerable potential to facilitate, and render more precise, condition surveys of works of art.
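
    For a sinusoidally vibrating surface imaged with time-averaged out-of-plane DSPI, the normalized fringe brightness at a point follows the square of the zero-order Bessel function of the local amplitude term. The sketch below inverts that relation for one pixel within the first fringe lobe; it is a simplified stand-in for the per-point Bessel evaluation described above (the Hilbert-transform route is not shown), and the 532 nm wavelength is an assumption.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import j0

    def amplitude_from_fringe(i_norm, wavelength_nm=532.0):
        """Invert the time-averaged fringe law I = J0(4*pi*a/lambda)**2
        for the out-of-plane vibration amplitude a at one pixel, assuming
        the argument lies within the first lobe of J0."""
        first_zero = 2.404825557695773               # first root of J0
        x = brentq(lambda u: j0(u) ** 2 - i_norm, 0.0, first_zero)
        return x * wavelength_nm / (4 * np.pi)       # amplitude in nm

    # A pixel whose fringe brightness dropped to 25% of its maximum:
    print(f"{amplitude_from_fringe(0.25):.1f} nm")
    ```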

  10. Multicenter study of quantitative computed tomography analysis using a computer-aided three-dimensional system in patients with idiopathic pulmonary fibrosis.

    PubMed

    Iwasawa, Tae; Kanauchi, Tetsu; Hoshi, Toshiko; Ogura, Takashi; Baba, Tomohisa; Gotoh, Toshiyuki; Oba, Mari S

    2016-01-01

    To evaluate the feasibility of automated quantitative analysis with a three-dimensional (3D) computer-aided system (i.e., Gaussian histogram normalized correlation, GHNC) of computed tomography (CT) images from different scanners. Each institution's review board approved the research protocol. Informed patient consent was not required. The participants in this multicenter prospective study were 80 patients (65 men, 15 women) with idiopathic pulmonary fibrosis. Their mean age was 70.6 years. Computed tomography (CT) images were obtained by four different scanners set at different exposures. We measured the extent of fibrosis using GHNC, and used Pearson's correlation analysis, Bland-Altman plots, and kappa analysis to directly compare the GHNC results with manual scoring by radiologists. Multiple linear regression analysis was performed to determine the association between the CT data and forced vital capacity (FVC). For each scanner, the extent of fibrosis as determined by GHNC was significantly correlated with the radiologists' score. In multivariate analysis, the extent of fibrosis as determined by GHNC was significantly correlated with FVC (p < 0.001). There was no significant difference between the results obtained using different CT scanners. Gaussian histogram normalized correlation was feasible, irrespective of the type of CT scanner used.

  11. Automated detection of videotaped neonatal seizures based on motion segmentation methods.

    PubMed

    Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-07-01

    This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
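
    The motion-strength signals described above come from optical flow computation followed by segmentation of moving pixels. The sketch below is a generic stand-in rather than the study's implementation: it uses OpenCV's dense Farneback flow and direct magnitude thresholding on a synthetic clip; the flow parameters and threshold are illustrative assumptions.

    ```python
    import numpy as np
    import cv2

    def motion_strength(frames, threshold=1.0):
        """Temporal motion-strength signal from grayscale video frames:
        dense optical flow between consecutive frames, moving pixels
        segmented by direct thresholding of flow magnitude, magnitudes
        summed over the segmented area."""
        signal = []
        for prev, curr in zip(frames[:-1], frames[1:]):
            flow = cv2.calcOpticalFlowFarneback(
                prev, curr, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            mag = np.linalg.norm(flow, axis=2)       # per-pixel speed
            moving = mag > threshold                 # direct thresholding
            signal.append(mag[moving].sum())
        return np.array(signal)

    # Toy clip: a bright patch shifting rightward frame to frame.
    frames = [np.zeros((64, 64), np.uint8) for _ in range(5)]
    for t, f in enumerate(frames):
        f[20:30, 10 + 4 * t:20 + 4 * t] = 255
    print(motion_strength(frames))
    ```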

  12. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments

    PubMed Central

    Bass, Ellen J.; Baumgart, Leigh A.; Shepley, Kathryn Klein

    2014-01-01

    Displaying both the strategy that information analysis automation employs to make its judgments and variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing either information analysis automation strategy information, task environment information, or both, on human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance. PMID:24847184

  13. Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.

    PubMed

    Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis

    2014-04-01

    Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. In order to facilitate this preprocessing step, we have developed in MATLAB(®) a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), image intensity normalization, 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA, by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
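
    Of the listed filters, the median family is the simplest to illustrate. The sketch below is a generic stand-in (in the spirit of the toolbox's DsFmedian, but not its MATLAB implementation) applied to simulated multiplicative speckle:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def despeckle_median(image, size=5):
        """Median despeckle filter: a rank filter that suppresses
        outlier pixels while roughly preserving edges, a common
        first-line choice against multiplicative speckle."""
        return median_filter(image, size=size)

    # Simulate multiplicative speckle on a synthetic two-region image.
    rng = np.random.default_rng(3)
    clean = np.full((128, 128), 0.4)
    clean[:, 64:] = 0.8                        # a vertical "wall" edge
    speckled = clean * rng.gamma(4.0, 1 / 4.0, clean.shape)
    filtered = despeckle_median(speckled)
    print(f"variance before {speckled.var():.4f}, after {filtered.var():.4f}")
    ```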

  14. Automated compromised right lung segmentation method using a robust atlas-based active volume model with sparse shape composition prior in CT.

    PubMed

    Zhou, Jinghao; Yan, Zhennan; Lasio, Giovanni; Huang, Junzhou; Zhang, Baoshe; Sharma, Navesh; Prado, Karl; D'Souza, Warren

    2015-12-01

    To resolve challenges in image segmentation in oncologic patients with severely compromised lung, we propose an automated right lung segmentation framework that uses a robust, atlas-based active volume model with a sparse shape composition prior. The robust atlas is achieved by combining the atlas with the output of sparse shape composition. Thoracic computed tomography images (n=38) from patients with lung tumors were collected. The right lung in each scan was manually segmented to build a reference training dataset against which the performance of the automated segmentation method was assessed. The proposed segmentation method with sparse shape composition achieved a mean Dice similarity coefficient (DSC) of 0.72 to 0.81 (95% CI), a mean accuracy (ACC) of 0.97 to 0.98 (95% CI), and a mean relative error (RE) of 0.46 to 0.74 (95% CI). Both qualitative and quantitative comparisons suggest that the proposed method can achieve better segmentation accuracy with less variance than other atlas-based segmentation methods in compromised lung segmentation. Published by Elsevier Ltd.
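
    For reference, the Dice similarity coefficient reported above is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch with toy masks:

    ```python
    import numpy as np

    def dice_coefficient(seg, ref):
        """Dice similarity coefficient between two binary masks:
        DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
        seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

    auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True
    manual = np.zeros((10, 10), bool); manual[3:9, 2:8] = True
    print(round(dice_coefficient(auto, manual), 3))  # 0.833
    ```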

  15. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    NASA Astrophysics Data System (ADS)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  16. Robust Segmentation of Overlapping Cells in Histopathology Specimens Using Parallel Seed Detection and Repulsive Level Set

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559
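
    The seed-detection stage consolidates many vote maxima into one center per cell. The single-path voting itself is not reproduced here; the sketch below only illustrates the mean-shift consolidation step on hypothetical vote coordinates, using scikit-learn's MeanShift rather than the authors' GPU implementation.

    ```python
    import numpy as np
    from sklearn.cluster import MeanShift

    # Candidate seed points (e.g., voting maxima), in pixel coordinates.
    votes = np.array([[10.2, 11.0], [9.8, 10.5], [10.5, 10.8],   # cell A
                      [40.1, 42.0], [39.7, 41.5]])               # cell B

    # Mean-shift merges nearby vote maxima into one seed per cell.
    centers = MeanShift(bandwidth=5.0).fit(votes).cluster_centers_
    print(np.round(centers, 1))   # two centers, one per touching cell
    ```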

  17. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    DTIC Science & Technology

    2018-01-01

    …collected data. These statistical techniques are under the area of descriptive statistics, which is a methodology to condense the data in quantitative … (ARL-TR-8270, US Army Research Laboratory, JAN 2018)

  18. Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction

    PubMed Central

    Gallistel, C. R.; Balci, Fuat; Freestone, David; Kheifets, Aaron; King, Adam

    2014-01-01

    We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer. PMID:24637442

  19. Automated, quantitative cognitive/behavioral screening of mice: for genetics, pharmacology, animal cognition and undergraduate instruction.

    PubMed

    Gallistel, C R; Balci, Fuat; Freestone, David; Kheifets, Aaron; King, Adam

    2014-02-26

    We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.

  20. Quantitative traits for the tail suspension test: automation, optimization, and BXD RI mapping.

    PubMed

    Lad, Heena V; Liu, Lin; Payá-Cano, José L; Fernandes, Cathy; Schalkwyk, Leonard C

    2007-07-01

    Immobility in the tail suspension test (TST) is considered a model of despair in a stressful situation, and acute treatment with antidepressants reduces immobility. Inbred strains of mouse exhibit widely differing baseline levels of immobility in the TST and several quantitative trait loci (QTLs) have been nominated. The labor of manual scoring and various scoring criteria make obtaining robust data and comparisons across different laboratories problematic. Several studies have validated strain gauge and video analysis methods by comparison with manual scoring. We set out to find objective criteria for automated scoring parameters that maximize the biological information obtained, using a video tracking system on tapes of tail suspension tests of 24 lines of the BXD recombinant inbred panel and the progenitor strains C57BL/6J and DBA/2J. The maximum genetic effect size is captured using the highest time resolution and a low mobility threshold. Dissecting the trait further by comparing genetic association of multiple measures reveals good evidence for loci involved in immobility on chromosomes 4 and 15. These are best seen when using a high threshold for immobility, despite the overall better heritability at the lower threshold. A second trial of the test has greater duration of immobility and a completely different genetic profile. Frequency of mobility is also an independent phenotype, with a distal chromosome 1 locus.
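
    The interplay of time resolution and mobility threshold is easy to make concrete. Assuming a per-frame mobility trace from a video tracker (the trace, frame rate, and thresholds below are hypothetical, not the study's settings), immobility time is simply the time spent below threshold:

    ```python
    import numpy as np

    def immobility_duration(mobility, frame_rate_hz, threshold):
        """Total immobility time (s) from a per-frame mobility trace:
        frames whose mobility measure falls below the threshold count
        as immobile, mirroring how threshold choice and time resolution
        shape the scored TST phenotype."""
        immobile = np.asarray(mobility) < threshold
        return immobile.sum() / frame_rate_hz

    # Hypothetical 6-min test at 25 Hz with one immobile bout.
    rng = np.random.default_rng(4)
    trace = rng.uniform(0, 1, 6 * 60 * 25)
    trace[2000:4000] *= 0.05                   # an immobile bout
    for thr in (0.05, 0.20):                   # low vs. high threshold
        print(thr, round(immobility_duration(trace, 25, thr), 1), "s")
    ```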

  1. Automated tracking of animal posture and movement during exploration and sensory orientation behaviors.

    PubMed

    Gomez-Marin, Alex; Partoune, Nicolas; Stephens, Greg J; Louis, Matthieu; Brembs, Björn

    2012-01-01

    The nervous functions of an organism are primarily reflected in the behavior it is capable of. Measuring behavior quantitatively, at high resolution and in an automated fashion provides valuable information about the underlying neural circuit computation. Accordingly, computer-vision applications for animal tracking are becoming a key complementary toolkit to genetic, molecular and electrophysiological characterization in systems neuroscience. We present Sensory Orientation Software (SOS) to measure behavior and infer sensory experience correlates. SOS is a simple and versatile system to track body posture and motion of single animals in two-dimensional environments. In the presence of a sensory landscape, tracking the trajectory of the animal's sensors and its postural evolution provides a quantitative framework to study sensorimotor integration. To illustrate the utility of SOS, we examine the orientation behavior of fruit fly larvae in response to odor, temperature and light gradients. We show that SOS is suitable to carry out high-resolution behavioral tracking for a wide range of organisms including flatworms, fishes and mice. Our work contributes to the growing repertoire of behavioral analysis tools for collecting rich and fine-grained data to draw and test hypotheses about the functioning of the nervous system. By providing open access to our code and documenting the software design, we aim to encourage the adaptation of SOS by a wide community of non-specialists to their particular model organism and questions of interest.

  2. Protocols for Automated Protist Analysis

    DTIC Science & Technology

    2011-12-01

    Report No. CG-D-14-13, B. Nelson et al., United States Coast Guard Research & Development Center, 1 Chelsea Street, New London, CT 06320, December 2011. Distribution Statement A: Approved for public release; distribution is unlimited.

  3. Automated assessment of medical training evaluation text.

    PubMed

    Zhang, Rui; Pakhomov, Serguei; Gladding, Sophia; Aylward, Michael; Borman-Shoap, Emily; Melton, Genevieve B

    2012-01-01

    Medical post-graduate residency training and medical student training increasingly utilize electronic systems to evaluate trainee performance based on defined training competencies with quantitative and qualitative data, the latter of which typically consists of text comments. Medical education is concomitantly becoming a growing area of clinical research. While electronic systems have proliferated in number, little work has been done to help manage and analyze qualitative data from these evaluations. We explored the use of text-mining techniques to assist medical education researchers in sentiment analysis and topic analysis of residency evaluations with a sample of 812 evaluation statements. While comments were predominantly positive, sentiment analysis improved the ability to discriminate statements with 93% accuracy. Similar to other domains, Latent Dirichlet Allocation and Information Gain revealed groups of core subjects and appear to be useful for identifying topics from this data.

  4. Liquid-Crystal Point-Diffraction Interferometer for Wave-Front Measurements

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Creath, Katherine

    1996-01-01

    A new instrument, the liquid-crystal point-diffraction interferometer (LCPDI), is developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point-diffraction interferometer and adds to it a phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple to align, environmentally insensitive interferometer capable of accurately measuring optical wave fronts with very high data density and with automated data reduction. We describe the theory and design of the LCPDI. A focus shift was measured with the LCPDI, and the results are compared with theoretical results.

  5. High data density temperature measurement for quasi steady-state flows

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Rashidnia, Nasser; Creath, Katherine

    1995-01-01

    A new optical instrument, the liquid crystal point diffraction interferometer (LCPDI), is used to measure the temperature distribution across a heated chamber filled with silicone oil. Data taken using the LCPDI are compared to equivalent measurements made with a traversing thermocouple and the two data sets show excellent agreement. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer and adds to it phase stepping capability for quantitative interferogram analysis. The result is a compact, simple to align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction.

  6. High Data Density Temperature Measurement for Quasi Steady-State Flows

    NASA Technical Reports Server (NTRS)

    Mercer, C. R.; Rashidnia, N.; Creath, K.

    1996-01-01

    A new optical instrument, the liquid crystal point diffraction interferometer (LCPDI), is used to measure the temperature distribution across a heated chamber filled with silicone oil. Data taken using the LCPDI are compared to equivalent measurements made with a traversing thermocouple and the two data sets show excellent agreement. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer and adds to it phase stepping capability for quantitative interferogram analysis. The result is a compact, simple to align, environmentally insensitive interferometer capable of accurately measuring optical wave-fronts with very high data density and with automated data reduction.

  7. Heterophile antibody interference in qualitative urine/serum hCG devices: Case report.

    PubMed

    Patel, Khushbu K; Gronowski, Ann M

    2016-06-01

    This case report investigates the origin of a false positive result on a serum qualitative human chorionic gonadotropin (hCG) device. A 46-year-old woman diagnosed with chronic myeloid leukemia presented with nausea and vomiting. A qualitative serum hCG test was interpreted as positive; however, a quantitative serum hCG test was negative (<5 IU/L). To further investigate this discrepancy, the sample was pretreated with heterophilic blocking reagent (HBR). Additionally, the sample was tested on other qualitative hCG devices composed of antibodies from different animal sources. Blocking reagent from an automated quantitative immunoassay was also tested for its ability to inhibit the heterophile antibody interference. The qualitative test result was negative after pretreatment with heterophilic blocking reagent. Other devices composed of antibodies from different animal sources also demonstrated mixed results with the patient's sample. Blocking reagent obtained from the automated quantitative assay inhibited the heterophile antibody interference in the patient's sample. This case demonstrates that positive serum point-of-care hCG results should be interpreted with caution and confirmed with a quantitative serum hCG immunoassay when clinical suspicion is raised. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  8. On the virtues of automated quantitative structure-activity relationship: the new kid on the block.

    PubMed

    de Oliveira, Marcelo T; Katekawa, Edson

    2018-02-01

    Quantitative structure-activity relationship (QSAR) has proved to be an invaluable tool in medicinal chemistry. Data availability at unprecedented levels through various databases has contributed to a resurgence of interest in QSAR. In this context, rapid generation of quality predictive models is highly desirable for hit identification and lead optimization. We showcase the application of an automated QSAR approach, which randomly selects multiple training/test sets and utilizes machine-learning algorithms to generate predictive models. Results demonstrate that AutoQSAR produces models of improved or similar quality to those generated by practitioners in the field, but in just a fraction of the time. Despite the potential of the concept to benefit the community, the AutoQSAR opportunity has been largely undervalued.

  9. In vivo classification of human skin burns using machine learning and quantitative features captured by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Singla, Neeru; Srivastava, Vishal; Singh Mehta, Dalip

    2018-02-01

    We report the first fully automated detection of human skin burn injuries in vivo, with the goal of automatic surgical margin assessment based on optical coherence tomography (OCT) images. Our proposed automated procedure entails building a machine-learning-based classifier by extracting quantitative features from normal and burn tissue images recorded by OCT. In this study, 56 samples (28 normal, 28 burned) were imaged by OCT and eight features were extracted. A linear model classifier was trained using 34 samples and 22 samples were used to test the model. Sensitivity of 91.6% and specificity of 90% were obtained. Our results demonstrate the capability of a computer-aided technique for accurately and automatically identifying burn tissue resection margins during surgical treatment.

  10. Automated quantitative micro-mineralogical characterization for environmental applications

    USGS Publications Warehouse

    Smith, Kathleen S.; Hoal, K.O.; Walton-Day, Katherine; Stammer, J.G.; Pietersen, K.

    2013-01-01

    Characterization of ore and waste-rock material using automated quantitative micro-mineralogical techniques (e.g., QEMSCAN® and MLA) has the potential to complement traditional acid-base accounting and humidity cell techniques when predicting acid generation and metal release. These characterization techniques, which most commonly are used for metallurgical, mineral-processing, and geometallurgical applications, can be broadly applied throughout the mine-life cycle to include numerous environmental applications. Critical insights into mineral liberation, mineral associations, particle size, particle texture, and mineralogical residence phase(s) of environmentally important elements can be used to anticipate potential environmental challenges. Resources spent on initial characterization result in lower uncertainties of potential environmental impacts and possible cost savings associated with remediation and closure. Examples illustrate mineralogical and textural characterization of fluvial tailings material from the upper Arkansas River in Colorado.

  11. Optical tools for high-throughput screening of abrasion resistance of combinatorial libraries of organic coatings

    NASA Astrophysics Data System (ADS)

    Potyrailo, Radislav A.; Chisholm, Bret J.; Olson, Daniel R.; Brennan, Michael J.; Molaison, Chris A.

    2002-02-01

    Design, validation, and implementation of an optical spectroscopic system for high-throughput analysis of combinatorially developed protective organic coatings are reported. Our approach replaces labor-intensive coating evaluation steps with an automated system that rapidly analyzes 8x6 arrays of coating elements that are deposited on a plastic substrate. Each coating element of the library is 10 mm in diameter and 2 to 5 micrometers thick. Performance of coatings is evaluated with respect to their resistance to wear abrasion because this parameter is one of the primary considerations in end-use applications. Upon testing, the organic coatings undergo changes that are impossible to quantitatively predict using existing knowledge. Coatings are abraded using industry-accepted abrasion test methods at single- or multiple-abrasion conditions, followed by high-throughput analysis of abrasion-induced light scatter. The developed automated system is optimized for the analysis of diffusively scattered light that corresponds to 0 to 30% haze. System precision of 0.1 to 2.5% relative standard deviation provides capability for the reliable ranking of coating performance. While the system was implemented for high-throughput screening of combinatorially developed organic protective coatings for automotive applications, it can be applied to a variety of other applications where materials ranking can be achieved using optical spectroscopic tools.

  12. A novel method for automated assessment of megakaryocyte differentiation and proplatelet formation.

    PubMed

    Salzmann, M; Hoesel, B; Haase, M; Mussbacher, M; Schrottmaier, W C; Kral-Pointner, J B; Finsterbusch, M; Mazharian, A; Assinger, A; Schmid, J A

    2018-06-01

    Transfusion of platelet concentrates represents an important treatment for various bleeding complications. However, the short half-life and frequent contaminations with bacteria restrict the availability of platelet concentrates and raise a clear demand for platelets generated ex vivo. Therefore, in vitro platelet generation from megakaryocytes (MKs) represents an important research topic. A vital step in this process is the accurate analysis of thrombopoiesis and proplatelet formation, which is usually conducted manually. We aimed to develop a novel method for automated classification and analysis of proplatelet-forming megakaryocytes in vitro. After fluorescent labelling of the surface and nucleus, MKs were automatically categorized and analysed with a novel pipeline in the open source software CellProfiler. Our new workflow is able to detect and quantify four subtypes of megakaryocytes undergoing thrombopoiesis: proplatelet-forming, spreading, pseudopodia-forming and terminally differentiated, anucleated megakaryocytes. Furthermore, we were able to characterize the inhibitory effect of dasatinib on thrombopoiesis in more detail. Our new workflow enabled rapid, unbiased, quantitative and qualitative in-depth analysis of proplatelet formation based on morphological characteristics. Clinicians and basic researchers alike will benefit from this novel technique that allows reliable and unbiased quantification of proplatelet formation. It thereby provides a valuable tool for the development of methods to generate platelets ex vivo and to detect effects of drugs on megakaryocyte differentiation.

  13. A Flexible Analysis Tool for the Quantitative Acoustic Assessment of Infant Cry

    PubMed Central

    Reggiannini, Brian; Sheinkopf, Stephen J.; Silverman, Harvey F.; Li, Xiaoxue; Lester, Barry M.

    2015-01-01

    Purpose In this article, the authors describe and validate the performance of a modern acoustic analyzer specifically designed for infant cry analysis. Method Utilizing known algorithms, the authors developed a method to extract acoustic parameters describing infant cries from standard digital audio files. They used a frame rate of 25 ms with a frame advance of 12.5 ms. Cepstral-based acoustic analysis proceeded in 2 phases, computing frame-level data and then organizing and summarizing this information within cry utterances. Using signal detection methods, the authors evaluated the accuracy of the automated system to determine voicing and to detect fundamental frequency (F0) as compared to voiced segments and pitch periods manually coded from spectrogram displays. Results The system detected F0 with 88% to 95% accuracy, depending on tolerances set at 10 to 20 Hz. Receiver operating characteristic analyses demonstrated very high accuracy at detecting voicing characteristics in the cry samples. Conclusions This article describes an automated infant cry analyzer with high accuracy to detect important acoustic features of cry. A unique and important aspect of this work is the rigorous testing of the system’s accuracy as compared to ground-truth manual coding. The resulting system has implications for basic and applied research on infant cry development. PMID:23785178
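
    Cepstral F0 detection of the kind described can be sketched in a few lines. The following is a generic single-frame estimator, not the authors' analyzer: the window, frame length, and search range are illustrative choices for the infant-cry F0 range.

    ```python
    import numpy as np

    def cepstral_f0(frame, fs, fmin=200.0, fmax=1000.0):
        """Estimate F0 of one audio frame via the real cepstrum: the
        inverse FFT of the log magnitude spectrum shows a peak at the
        quefrency 1/F0 for voiced frames."""
        spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
        cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
        q_lo, q_hi = int(fs / fmax), int(fs / fmin)
        peak = q_lo + np.argmax(cepstrum[q_lo:q_hi])
        return fs / peak

    fs = 16000
    t = np.arange(int(0.025 * fs)) / fs             # one 25-ms frame
    frame = np.sign(np.sin(2 * np.pi * 440 * t))    # harmonic-rich 440 Hz
    print(round(cepstral_f0(frame, fs), 1))         # close to 440
    ```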

  14. Automated determination of arterial input function for DCE-MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep

    2011-03-01

    Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with false estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with expert traced manual AIF.
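
    A gamma variate function (GVF) fit to one pixel's uptake curve can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the parameterization is the common C(t) = A(t − t0)^α exp(−(t − t0)/β), and the bounds shown are illustrative rather than the analytically derived ones the authors describe.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, alpha, beta):
        """Gamma variate bolus-uptake model:
        C(t) = A*(t-t0)**alpha * exp(-(t-t0)/beta) for t > t0, else 0."""
        dt = np.clip(t - t0, 0, None)
        return A * dt ** alpha * np.exp(-dt / beta)

    # Fit one pixel's uptake curve (synthetic, noisy).
    rng = np.random.default_rng(5)
    t = np.arange(0, 60, 2.0)                        # seconds
    truth = gamma_variate(t, A=1.5, t0=8.0, alpha=2.0, beta=4.0)
    signal = truth + rng.normal(0, 0.05, t.size)
    p0 = (1.0, 5.0, 1.5, 3.0)                        # initial guess
    bounds = ([0, 0, 0.5, 0.5], [10, 30, 5, 20])     # illustrative bounds
    popt, _ = curve_fit(gamma_variate, t, signal, p0=p0, bounds=bounds)
    print(np.round(popt, 2))                         # near (1.5, 8, 2, 4)
    ```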

  15. Discrimination of Isomers of Released N- and O-Glycans Using Diagnostic Product Ions in Negative Ion PGC-LC-ESI-MS/MS

    NASA Astrophysics Data System (ADS)

    Ashwood, Christopher; Lin, Chi-Hung; Thaysen-Andersen, Morten; Packer, Nicolle H.

    2018-03-01

    Profiling cellular protein glycosylation is challenging due to the presence of highly similar glycan structures that play diverse roles in cellular physiology. As the anomericity and the exact linkage type of a single glycosidic bond can influence glycan function, there is a demand for improved and automated methods to confirm detailed structural features and to discriminate between structurally similar isomers, overcoming a significant bottleneck in the analysis of data generated by glycomics experiments. We used porous graphitized carbon-LC-ESI-MS/MS to separate and detect released N- and O-glycan isomers from mammalian model glycoproteins using negative mode resonance activation CID-MS/MS. By interrogating similar fragment spectra from closely related glycan isomers that differ only in arm position and sialyl linkage, product fragment ions for discrimination between these features were discovered. Using the Skyline software, at least two diagnostic fragment ions of high specificity were validated for automated discrimination of sialylation and arm position in N-glycan structures, and sialylation in O-glycan structures, complementing existing structural diagnostic ions. These diagnostic ions were shown to be useful for isomer discrimination using both linear and 3D ion trap mass spectrometers when analyzing complex glycan mixtures from cell lysates. Skyline was found to serve as a useful tool for automated assessment of glycan isomer discrimination. This platform-independent workflow can potentially be extended to automate the characterization and quantitation of other challenging glycan isomers.

  16. SpArcFiRe: Scalable automated detection of spiral galaxy arm segments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Darren R.; Hayes, Wayne B., E-mail: drdavis@uci.edu, E-mail: whayes@uci.edu

    Given an approximately centered image of a spiral galaxy, we describe an entirely automated method that finds, centers, and sizes the galaxy (possibly masking nearby stars and other objects if necessary in order to isolate the galaxy itself) and then automatically extracts structural information about the spiral arms. For each arm segment found, we list the pixels in that segment, allowing image analysis on a per-arm-segment basis. We also perform a least-squares fit of a logarithmic spiral arc to the pixels in that segment, giving per-arc parameters, such as the pitch angle, arm segment length, location, etc. The algorithm takes about one minute per galaxy, and can easily be scaled using parallelism. We have run it on all ∼644,000 Sloan objects that are larger than 40 pixels across and classified as 'galaxies'. We find a very good correlation between our quantitative description of a spiral structure and the qualitative description provided by Galaxy Zoo humans. Our objective, quantitative measures of structure demonstrate the difficulty in defining exactly what constitutes a spiral 'arm', leading us to prefer the term 'arm segment'. We find that pitch angle often varies significantly segment-to-segment in a single spiral galaxy, making it difficult to define the pitch angle for a single galaxy. We demonstrate how our new database of arm segments can be queried to find galaxies satisfying specific quantitative visual criteria. For example, even though our code does not explicitly find rings, a good surrogate is to look for galaxies having one long, low-pitch-angle arm—which is how our code views ring galaxies. SpArcFiRe is available at http://sparcfire.ics.uci.edu.
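
    The per-segment arc fit has a simple core: a logarithmic spiral r = r0·exp(θ·tan φ) becomes linear in log r, so ordinary least squares recovers the pitch angle φ. The sketch below shows that reduced form on synthetic pixels; SpArcFiRe's actual fitting and preprocessing are more involved.

    ```python
    import numpy as np

    def fit_log_spiral(theta, r):
        """Least-squares fit of r = r0*exp(theta*tan(phi)) to arm-segment
        pixels in polar form (theta in radians, r in pixels). Taking logs
        gives log r = log r0 + tan(phi)*theta, a degree-1 polyfit."""
        slope, intercept = np.polyfit(theta, np.log(r), 1)
        return np.exp(intercept), np.degrees(np.arctan(slope))

    # Synthetic arm segment with a 15-degree pitch angle plus pixel noise.
    rng = np.random.default_rng(6)
    theta = np.linspace(0, 2.5, 120)
    r = 40 * np.exp(theta * np.tan(np.radians(15))) + rng.normal(0, 0.5, 120)
    r0, pitch = fit_log_spiral(theta, r)
    print(f"r0 = {r0:.1f} px, pitch = {pitch:.1f} deg")   # near 40 px, 15 deg
    ```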

  17. Modeling the Learner in Computer-Assisted Instruction

    ERIC Educational Resources Information Center

    Fletcher, J. D.

    1975-01-01

    This paper briefly reviews relevant work in four areas: 1) quantitative models of memory; 2) regression models of performance; 3) automation models of performance; and 4) artificial intelligence. (Author/HB)

  18. Automated tracking of whiskers in videos of head fixed rodents.

    PubMed

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.

  19. Automated Tracking of Whiskers in Videos of Head Fixed Rodents

    PubMed Central

    Clack, Nathan G.; O'Connor, Daniel H.; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W.

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception. PMID:22792058

  20. A multiparametric assay for quantitative nerve regeneration evaluation.

    PubMed

    Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G

    2005-08-01

    We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheath thickness). The assay's performance and the correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.

  1. Automated Protist Analysis of Complex Samples: Recent Investigations Using Motion and Thresholding

    DTIC Science & Technology

    2012-01-01

    Report No. CG-D-15-13, January 2012. Distribution Statement A: Approved for public release; distribution is unlimited. CG-926 R&DC, Chelsea Street, New London, CT 06320. B. Nelson, et al.

  2. Find Pairs: The Module for Protein Quantification of the PeakQuant Software Suite

    PubMed Central

    Eisenacher, Martin; Kohl, Michael; Wiese, Sebastian; Hebeler, Romano; Meyer, Helmut E.

    2012-01-01

    Accurate quantification of proteins is one of the major tasks in current proteomics research. To address this issue, a wide range of stable isotope labeling techniques have been developed, allowing one to quantitatively study thousands of proteins by means of mass spectrometry. In this article, the FindPairs module of the PeakQuant software suite is detailed. It facilitates the automatic determination of protein abundance ratios based on the automated analysis of stable isotope-coded mass spectrometric data. Furthermore, it implements statistical methods to determine outliers due to biological as well as technical variance of proteome data obtained in replicate experiments. This provides an important means to evaluate the significance in obtained protein expression data. For demonstrating the high applicability of FindPairs, we focused on the quantitative analysis of proteome data acquired in 14N/15N labeling experiments. We further provide a comprehensive overview of the features of the FindPairs software, and compare these with existing quantification packages. The software presented here supports a wide range of proteomics applications, allowing one to quantitatively assess data derived from different stable isotope labeling approaches, such as 14N/15N labeling, SILAC, and iTRAQ. The software is publicly available at http://www.medizinisches-proteom-center.de/software and free for academic use. PMID:22909347

  3. High pressure liquid chromatographic method for the separation and quantitation of water-soluble radiolabeled benzene metabolites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabourin, P.J.; Bechtold, W.E.; Henderson, R.F.

    1988-05-01

    The glucuronide and sulfate conjugates of benzene metabolites as well as muconic acid and pre-phenyl- and phenylmercapturic acids were separated by ion-pairing HPLC. The HPLC method developed was suitable for automated analysis of a large number of tissue or excreta samples. p-Nitrophenyl [14C]glucuronide was used as an internal standard for quantitation of these water-soluble metabolites. Quantitation was verified by spiking liver tissue with various amounts of phenylsulfate or glucuronides of phenol, catechol, or hydroquinone and analyzing by HPLC. Values determined by HPLC analysis were within 10% of the actual amount with which the liver was spiked. The amount of metabolite present in urine following exposure to [3H]benzene was determined using p-nitrophenyl [14C]glucuronide as an internal standard. Phenylsulfate was the major water-soluble metabolite in the urine of F344 rats exposed to 50 ppm [3H]benzene for 6 h. Muconic acid and an unknown metabolite which decomposed in acidic media to phenylmercapturic acid were also present. Liver, however, contained a different metabolic profile. This indicates that urinary metabolite profiles may not be a true reflection of what is seen in individual tissues.

  4. Clinical value of protein expression of kallikrein-related peptidase 7 (KLK7) in ovarian cancer.

    PubMed

    Dorn, Julia; Gkazepis, Apostolos; Kotzsch, Matthias; Kremer, Marcus; Propping, Corinna; Mayer, Katharina; Mengele, Karin; Diamandis, Eleftherios P; Kiechle, Marion; Magdolen, Viktor; Schmitt, Manfred

    2014-01-01

    Expression of the kallikrein-related peptidase 7 (KLK7) is dysregulated in ovarian cancer. We assessed KLK7 expression by ELISA and quantitative immunohistochemistry and analyzed its association with clinicopathological parameters and patients' outcome. KLK7 antigen concentrations were determined in tumor tissue extracts of 98 ovarian cancer patients by ELISA. For analysis of KLK7 immunoexpression in ovarian cancer tissue microarrays, a manual quantitative scoring system as well as a software tool for quantitative high-throughput automated image analysis was used. In immunohistochemical analyses, expression levels of KLK7 were not associated with patients' outcome. However, in multivariate analyses, KLK7 antigen levels in tumor tissue extracts were significantly associated with both overall and progression-free survival: ovarian cancer patients with high KLK7 levels had a significant, 2-fold lower risk of death [hazard ratio (HR)=0.51, 95% confidence interval (CI)=0.29-0.90, p=0.019] or relapse [HR=0.47, 95% CI=0.25-0.91, p=0.024], as compared with patients who displayed low KLK7 levels. Our results indicate that - in contrast to earlier findings - high KLK7 antigen levels in tumor tissue extracts may be associated with a better prognosis of ovarian cancer patients.

  5. ConfocalCheck - A Software Tool for the Automated Monitoring of Confocal Microscope Performance

    PubMed Central

    Hng, Keng Imm; Dormann, Dirk

    2013-01-01

    Laser scanning confocal microscopy has become an invaluable tool in biomedical research but regular quality testing is vital to maintain the system’s performance for diagnostic and research purposes. Although many methods have been devised over the years to characterise specific aspects of a confocal microscope like measuring the optical point spread function or the field illumination, only very few analysis tools are available. Our aim was to develop a comprehensive quality assurance framework ranging from image acquisition to automated analysis and documentation. We created standardised test data to assess the performance of the lasers, the objective lenses and other key components required for optimum confocal operation. The ConfocalCheck software presented here analyses the data fully automatically. It creates numerous visual outputs indicating potential issues requiring further investigation. By storing results in a web browser compatible file format the software greatly simplifies record keeping allowing the operator to quickly compare old and new data and to spot developing trends. We demonstrate that the systematic monitoring of confocal performance is essential in a core facility environment and how the quantitative measurements obtained can be used for the detailed characterisation of system components as well as for comparisons across multiple instruments. PMID:24224017

  6. Automated analysis of non-mass-enhancing lesions in breast MRI based on morphological, kinetic, and spatio-temporal moments and joint segmentation-motion compensation technique

    NASA Astrophysics Data System (ADS)

    Hoffmann, Sebastian; Shutler, Jamie D.; Lobbes, Marc; Burgeth, Bernhard; Meyer-Bäse, Anke

    2013-12-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) represents an established method for the detection and diagnosis of breast lesions. While mass-like enhancing lesions can be easily categorized according to the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon, a majority of diagnostically challenging lesions, the so-called non-mass-like enhancing lesions, remain both qualitatively as well as quantitatively difficult to analyze. Thus, the evaluation of kinetic and/or morphological characteristics of non-masses represents a challenging task for an automated analysis and is of crucial importance for advancing current computer-aided diagnosis (CAD) systems. Compared to the well-characterized mass-enhancing lesions, non-masses have ill-defined, blurred tumor borders and a kinetic behavior that is not easily generalizable and thus not readily discriminative between malignant and benign non-masses. To overcome these difficulties and pave the way for novel CAD systems for non-masses, we evaluate several kinetic and morphological descriptors separately, together with a novel technique, the Zernike velocity moments, to capture the joint spatio-temporal behavior of these lesions, and additionally consider the impact of non-rigid motion compensation on a correct diagnosis.

  7. What Happened, and Why: Toward an Understanding of Human Error Based on Automated Analyses of Incident Reports. Volume 2

    NASA Technical Reports Server (NTRS)

    Ferryman, Thomas A.; Posse, Christian; Rosenthal, Loren J.; Srivastava, Ashok N.; Statler, Irving C.

    2006-01-01

    The objective of the Aviation System Monitoring and Modeling project of NASA's Aviation Safety and Security Program was to develop technologies to enable proactive management of safety risk, which entails identifying the precursor events and conditions that foreshadow most accidents. Information about what happened can be extracted from quantitative data sources, but the experiential account of the incident reporter is the best available source of information about why an incident happened. In Volume I, the concept of the Scenario was introduced as a pragmatic guide for identifying similarities of what happened based on the objective parameters that define the Context and the Outcome of a Scenario. In this Volume II, that study continues into the analyses of the free narratives to gain understanding as to why the incident occurred from the reporter's perspective. While this is just the first experiment, the results of our approach are encouraging and indicate that it will be possible to design an automated analysis process guided by the structure of the Scenario that can achieve the level of consistency and reliability of human analysis of narrative reports.

  8. In vivo imaging and quantitative analysis of changes in axon length using transgenic zebrafish embryos.

    PubMed

    Kanungo, Jyotshnabala; Lantz, Susan; Paule, Merle G

    2011-01-01

    We describe an imaging procedure to measure axon length in zebrafish embryos in vivo. Automated fluorescent image acquisition was performed with the ImageXpress Micro high content screening reader and further analysis of axon lengths was performed on archived images using AcuityXpress software. We utilized the Neurite Outgrowth Application module with a customized protocol (journal) to measure the axons. Since higher doses of ethanol (2-2.5%, v/v) have been shown to deform motor neurons and axons during development, here we used ethanol to treat transgenic [hb9:GFP (green fluorescent protein)] zebrafish embryos at 28 hpf (hours post-fertilization). These embryos express GFP in the motor neurons and their axons. Embryos after ethanol treatment were arrayed in 384-well plates for automated fluorescent image acquisition in vivo. Average axon lengths of high dose ethanol-treated embryos were significantly lower than the control. Another experiment showed that there was no significant difference in the axon lengths between the embryos grown for 24h at 22°C and 28.5°C. These test experiments demonstrate that using axon development as an end-point, compound screening can be performed in a time-efficient manner. Published by Elsevier Inc.

  9. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
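
    A minimal sketch of the final PLS classification stage, under assumptions: per-segment feature vectors are already extracted, and the component count and 0.5 decision threshold are illustrative choices rather than the paper's settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def train_av_classifier(features, labels, n_components=3):
            """features: (n_segments, n_features); labels: 1 = artery, 0 = vein."""
            pls = PLSRegression(n_components=n_components)
            pls.fit(features, labels.astype(float))
            return pls

        def classify_segments(pls, features, threshold=0.5):
            scores = pls.predict(features).ravel()       # continuous PLS response
            return (scores >= threshold).astype(int), scores  # labels + scores for ROC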

  10. An automated smartphone-based diagnostic assay for point-of-care semen analysis

    PubMed Central

    Kanakasabapathy, Manoj Kumar; Sadasivam, Magesh; Singh, Anupriya; Preston, Collin; Thirumalaraju, Prudhvi; Venkataraman, Maanasa; Bormann, Charles L.; Draz, Mohamed Shehata; Petrozza, John C.; Shafiee, Hadi

    2017-01-01

    Male infertility affects up to 12% of the world’s male population and is linked to various environmental and medical conditions. Manual microscope-based testing and computer-assisted semen analysis (CASA) are the current standard methods to diagnose male infertility; however, these methods are labor-intensive, expensive, and laboratory-based. Cultural and socially dominated stigma against male infertility testing hinders a large number of men from getting tested for infertility, especially in resource-limited African countries. We describe the development and clinical testing of an automated smartphone-based semen analyzer designed for quantitative measurement of sperm concentration and motility for point-of-care male infertility screening. Using a total of 350 clinical semen specimens at a fertility clinic, we have shown that our assay can analyze an unwashed, unprocessed liquefied semen sample with <5-s mean processing time and provide the user a semen quality evaluation based on the World Health Organization (WHO) guidelines with ~98% accuracy. The work suggests that the integration of microfluidics, optical sensing accessories, and advances in consumer electronics, particularly smartphone capabilities, can make remote semen quality testing accessible to people in both developed and developing countries who have access to smartphones. PMID:28330865

  11. System Design and Development of a Robotic Device for Automated Venipuncture and Diagnostic Blood Cell Analysis.

    PubMed

    Balter, Max L; Chen, Alvin I; Fromholtz, Alex; Gorshkov, Alex; Maguire, Tim J; Yarmush, Martin L

    2016-10-01

    Diagnostic blood testing is the most prevalent medical procedure performed in the world and forms the cornerstone of modern health care delivery. Yet blood tests are still predominantly carried out in centralized labs using large-volume samples acquired by manual venipuncture, and no end-to-end solution from blood draw to sample analysis exists today. Our group is developing a platform device that merges robotic phlebotomy with automated diagnostics to rapidly deliver patient information at the site of the blood draw. The system couples an image-guided venipuncture robot, designed to address the challenges of routine venous access, with a centrifuge-based blood analyzer to obtain quantitative measurements of hematology. In this paper, we first present the system design and architecture of the integrated device. We then perform a series of in vitro experiments to evaluate the cannulation accuracy of the system on blood vessel phantoms. Next, we assess the effects of vessel diameter, needle gauge, flow rate, and viscosity on the rate of sample collection. Finally, we demonstrate proof-of-concept of a white cell assay on the blood analyzer using in vitro human samples spiked with fluorescently labeled microbeads.

  12. Rapid and automated processing of MALDI-FTICR/MS data for (15)N-metabolic labeling in a shotgun proteomics analysis.

    PubMed

    Jing, Li; Amster, I Jonathan

    2009-10-15

    Offline high performance liquid chromatography combined with matrix-assisted laser desorption and Fourier transform ion cyclotron resonance mass spectrometry (HPLC-MALDI-FTICR/MS) provides the means to rapidly analyze complex mixtures of peptides, such as those produced by proteolytic digestion of a proteome. This method is particularly useful for making quantitative measurements of changes in protein expression by using (15)N-metabolic labeling. Proteolytic digestion of combined labeled and unlabeled proteomes produces complex mixtures with many mass overlaps when analyzed by HPLC-MALDI-FTICR/MS. A significant challenge to data analysis is the matching of pairs of peaks which represent an unlabeled peptide and its labeled counterpart. We have developed an algorithm and incorporated it into a computer program which significantly accelerates the interpretation of (15)N metabolic labeling data by automating the process of identifying unlabeled/labeled peak pairs. The algorithm takes advantage of the high resolution and mass accuracy of FTICR mass spectrometry. The algorithm is shown to be able to successfully identify the (15)N/(14)N peptide pairs and calculate peptide relative abundance ratios in highly complex mixtures from the proteolytic digest of a whole organism protein extract.
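
    The pairing step can be sketched as follows: a labeled partner is expected at the unlabeled mass plus the number of nitrogen atoms times the 15N-14N mass difference (about 0.99703 Da), matched within a ppm tolerance that exploits FTICR mass accuracy. The tolerance and nitrogen-count range below are illustrative assumptions, not the published algorithm's parameters.

        from bisect import bisect_left

        DELTA_15N = 0.99703  # Da, mass difference between 15N and 14N

        def find_label_pairs(masses, max_n=30, tol_ppm=5.0):
            """Return (m_light, m_heavy, n_nitrogens) triples consistent with a pair."""
            masses = sorted(masses)
            pairs = []
            for m_light in masses:
                for n in range(1, max_n + 1):
                    target = m_light + n * DELTA_15N
                    tol = target * tol_ppm * 1e-6
                    i = bisect_left(masses, target - tol)   # first candidate match
                    while i < len(masses) and masses[i] <= target + tol:
                        pairs.append((m_light, masses[i], n))
                        i += 1
            return pairs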

  13. Report on Automated Semantic Analysis of Scientific and Engineering Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors-like the MCO error-are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, describes the approach, the tool's status, the challenges, related research, and a development strategy.

  14. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion following intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a single system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.
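
    A minimal sketch of the two output quantities, assuming a tracked front position along the vessel and a per-frame intensity trace have already been extracted from the stabilized sequence; the function names and peak-prominence value are illustrative.

        import numpy as np
        from scipy.signal import find_peaks

        def lymph_velocity(front_positions_mm, frame_rate_hz):
            """Mean front speed (mm/s) from per-frame positions along the vessel."""
            return float(np.mean(np.abs(np.diff(front_positions_mm))) * frame_rate_hz)

        def propulsion_frequency(intensity_trace, frame_rate_hz, prominence=0.1):
            """Propulsion events per second, counted as prominent intensity peaks."""
            peaks, _ = find_peaks(intensity_trace, prominence=prominence)
            return len(peaks) * frame_rate_hz / len(intensity_trace)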

  15. The quest for improved reproducibility in MALDI mass spectrometry.

    PubMed

    O'Rourke, Matthew B; Djordjevic, Steven P; Padula, Matthew P

    2018-03-01

    Reproducibility has been one of the biggest hurdles faced when attempting to develop quantitative protocols for MALDI mass spectrometry. The heterogeneous nature of sample recrystallization has made automated sample acquisition somewhat "hit and miss" with manual intervention needed to ensure that all sample spots have been analyzed. In this review, we explore the last 30 years of literature and anecdotal evidence that has attempted to address and improve reproducibility in MALDI MS. Though many methods have been attempted, we have discovered a significant publication history surrounding the use of nitrocellulose as a substrate to improve homogeneity of crystal formation and therefore reproducibility. We therefore propose that this is the most promising avenue of research for developing a comprehensive and universal preparation protocol for quantitative MALDI MS analysis. © 2016 Wiley Periodicals, Inc. Mass Spec Rev 37:217-228, 2018.

  16. Preprocessing film-copied MRI for studying morphological brain changes.

    PubMed

    Pham, Tuan D; Eisenblätter, Uwe; Baune, Bernhard T; Berger, Klaus

    2009-06-15

    Magnetic resonance imaging (MRI) of the brain is one of the important data items for studying memory and morbidity in the elderly, as these images can provide useful information through quantitative measures of various regions of interest of the brain. As part of an effort to fully automate biomedical analysis of the brain, which can then be combined with genetic data from the same human population even where the original MRI records are missing, this paper presents two effective methods for addressing this imaging problem. The first method handles the restoration of film-copied MRI. The second method involves the segmentation of the image data. Experimental results and comparisons with other methods suggest the usefulness of the proposed image analysis methodology.

  17. Application of automation and robotics to lunar surface human exploration operations

    NASA Technical Reports Server (NTRS)

    Woodcock, Gordon R.; Sherwood, Brent; Buddington, Patricia A.; Bares, Leona C.; Folsom, Rolfe; Mah, Robert; Lousma, Jack

    1990-01-01

    Major results of a study applying automation and robotics to lunar surface base buildup and operations concepts are reported. The study developed a reference base scenario with specific goals, equipment concepts, robot concepts, activity schedules and buildup manifests. It examined crew roles, contingency cases and system reliability, and proposed a set of technologies appropriate and necessary for effective lunar operations. This paper refers readers to four companion papers for quantitative details where appropriate.

  18. Adaptive Automation and Cue Invocation: The Effect of Cue Timing on Operator Error

    DTIC Science & Technology

    2013-05-01

    5. Parasuraman, R. (2000). Designing automation for human use: Empirical studies and quantitative models. Ergonomics, 43, 931-951. ... Prospective memory errors involve memory for intended actions that are planned to be performed at some designated point in the future [20]. ... The supervisory control task environment (RESCHU) [21] was used in this study; a Navy pilot who is familiar with supervisory control tasks designed the RESCHU task.

  19. Automation of the ELISpot assay for high-throughput detection of antigen-specific T-cell responses.

    PubMed

    Almeida, Coral-Ann M; Roberts, Steven G; Laird, Rebecca; McKinnon, Elizabeth; Ahmed, Imran; Pfafferott, Katja; Turley, Joanne; Keane, Niamh M; Lucas, Andrew; Rushton, Ben; Chopra, Abha; Mallal, Simon; John, Mina

    2009-05-15

    The enzyme-linked immunospot (ELISpot) assay is a fundamental tool in cellular immunology, providing both quantitative and qualitative information on cellular cytokine responses to defined antigens. It enables the comprehensive screening of patient-derived peripheral blood mononuclear cells to reveal the antigenic restriction of T-cell responses and is an emerging technique in clinical laboratory investigation of certain infectious diseases. As with all cellular-based assays, the final results of the assay depend on a number of technical variables that may impact precision if not highly standardised between operators. When large-scale studies or studies using multiple antigens are set up manually, these assays can be labour-intensive, involve many manual handling steps, are subject to data and sample integrity failures, and may show large inter-operator variability. Here we describe the successful automated performance of the interferon (IFN)-gamma ELISpot assay from cell counting through to electronic capture of cytokine quantitation and present the results of a comparison between automated and manual performance of the ELISpot assay. The mean numbers of spot-forming units enumerated by both methods for limiting dilutions of CMV, EBV and influenza (CEF)-derived peptides in six healthy individuals were highly correlated (r>0.83, p<0.05). The precision results from the automated system compared favourably with the manual ELISpot and further ensured electronic tracking, increased throughput and reduced turnaround time.

  20. Automated unsupervised multi-parametric classification of adipose tissue depots in skeletal muscle

    PubMed Central

    Valentinitsch, Alexander; Karampinos, Dimitrios C.; Alizai, Hamza; Subburaj, Karupppasamy; Kumar, Deepak; Link, Thomas M.; Majumdar, Sharmila

    2012-01-01

    Purpose To introduce and validate an automated unsupervised multi-parametric method for segmentation of the subcutaneous fat and muscle regions in order to determine subcutaneous adipose tissue (SAT) and intermuscular adipose tissue (IMAT) areas based on data from a quantitative chemical shift-based water-fat separation approach. Materials and Methods Unsupervised standard k-means clustering was employed to define sets of similar features (k = 2) within the whole multi-modal image after the water-fat separation. The automated image processing chain was composed of three primary stages including tissue, muscle and bone region segmentation. The algorithm was applied on calf and thigh datasets to compute SAT and IMAT areas and was compared to a manual segmentation. Results The IMAT area using the automatic segmentation had excellent agreement with the IMAT area using the manual segmentation for all the cases in the thigh (R2: 0.96) and for cases with up to moderate IMAT area in the calf (R2: 0.92). The group with the highest grade of muscle fat infiltration in the calf had the highest error in the inner SAT contour calculation. Conclusion The proposed multi-parametric segmentation approach combined with quantitative water-fat imaging provides an accurate and reliable method for an automated calculation of the SAT and IMAT areas reducing considerably the total post-processing time. PMID:23097409
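
    In sketch form, the clustering stage reduces to standard k-means with k = 2 on per-voxel feature vectors from the water-fat separation; treating the fat and water images as the two features is an assumption made for illustration.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_fat_water(fat_image, water_image):
            """fat_image, water_image: 2-D arrays from chemical-shift separation."""
            features = np.column_stack([fat_image.ravel(), water_image.ravel()])
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
            return labels.reshape(fat_image.shape)   # binary map feeding the
                                                     # tissue/muscle/bone stages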

  1. Fully automated, internally controlled quantification of hepatitis B Virus DNA by real-time PCR by use of the MagNA Pure LC and LightCycler instruments.

    PubMed

    Leb, Victoria; Stöcher, Markus; Valentine-Thon, Elizabeth; Hölzl, Gabriele; Kessler, Harald; Stekel, Herbert; Berg, Jörg

    2004-02-01

    We report on the development of a fully automated real-time PCR assay for the quantitative detection of hepatitis B virus (HBV) DNA in plasma with EDTA (EDTA plasma). The MagNA Pure LC instrument was used for automated DNA purification and automated preparation of PCR mixtures. Real-time PCR was performed on the LightCycler instrument. An internal amplification control was devised as a PCR competitor and was introduced into the assay at the stage of DNA purification to permit monitoring for sample adequacy. The detection limit of the assay was found to be 200 HBV DNA copies/ml, with a linear dynamic range of 8 orders of magnitude. When samples from the European Union Quality Control Concerted Action HBV Proficiency Panel 1999 were examined, the results were found to be in acceptable agreement with the HBV DNA concentrations of the panel members. In a clinical laboratory evaluation of 123 EDTA plasma samples, a significant correlation was found with the results obtained by the Roche HBV Monitor test on the Cobas Amplicor analyzer within the dynamic range of that system. In conclusion, the newly developed assay has a markedly reduced hands-on time, permits monitoring for sample adequacy, and is suitable for the quantitative detection of HBV DNA in plasma in a routine clinical laboratory.

  2. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that provides analysis of the health of skin on the face using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: those that are papules and those that are pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  3. Robotic voltammetry with carbon nanotube-based sensors: a superb blend for convenient high-quality antimicrobial trace analysis.

    PubMed

    Theanponkrang, Somjai; Suginta, Wipa; Weingart, Helge; Winterhalter, Mathias; Schulte, Albert

    2015-01-01

    A new automated pharmacoanalytical technique for convenient quantification of redox-active antibiotics has been established by combining the benefits of a carbon nanotube (CNT) sensor modification with electrocatalytic activity for analyte detection with the merits of a robotic electrochemical device that is capable of sequential nonmanual sample measurements in 24-well microtiter plates. Norfloxacin (NFX) and ciprofloxacin (CFX), two standard fluoroquinolone antibiotics, were used in automated calibration measurements by differential pulse voltammetry (DPV), yielding linear ranges of 1-10 μM and 2-100 μM for NFX and CFX, respectively. The lowest detectable levels were estimated to be 0.3±0.1 μM (n=7) for NFX and 1.6±0.1 μM (n=7) for CFX. In standard solutions or tablet samples of known content, both analytes could be quantified with the robotic DPV microtiter plate assay, with recoveries within ±4% of 100%. Recoveries were equally good when NFX was evaluated in human serum samples spiked with NFX. The use of simple instrumentation, convenience in execution, and high effectiveness in analyte quantitation suggest the merger of automated microtiter plate voltammetry and CNT-supported electrochemical drug detection as a novel methodology for antibiotic testing in pharmaceutical and clinical research and quality control laboratories.
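
    A generic calibration sketch (not the authors' code): fit peak current against concentration over the linear range and estimate the detection limit with the common 3.3 x sigma / slope convention; deriving sigma from blank replicates is an assumption about how the limits were obtained.

        import numpy as np

        def calibrate_dpv(conc_uM, peak_current, blank_currents):
            """Return slope, intercept, estimated LOD (uM), and an inverse predictor."""
            slope, intercept = np.polyfit(conc_uM, peak_current, 1)
            lod_uM = 3.3 * np.std(blank_currents, ddof=1) / slope
            def to_conc(current):                    # invert the line for unknowns
                return (current - intercept) / slope
            return slope, intercept, lod_uM, to_conc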

  4. [Impact of an automated dispensing system for medical devices in cardiac surgery department].

    PubMed

    Clou, E; Dompnier, M; Kably, B; Leplay, C; Poupon, E; Archer, V; Paul, M

    2018-01-01

    To secure the management of medical devices, an automated dispensing system was implemented in a surgical service. The objective of this study was to evaluate the security, organizational and economic impact of installing an automated dispensing system for medical devices (ASDM). The implementation took place in a cardiac surgery department. The security impact was assessed by comparing the traceability rate of implantable medical devices one year before and one year after installation. A questionnaire on nurses' perception and satisfaction completed this survey. Resupplying costs, stock evolution and the investments required for the implementation of the ASDM were the subject of a cost-benefit study. After one year, the traceability rate is excellent (100%). Nursing staff satisfaction with the new system was 87.5%. The introduction of the ASDM allowed a qualitative and quantitative decrease in stocks, with a reduction of 30% for purchased medical devices and 15% for implantable medical devices held in deposit-consignment. The cost-benefit analysis shows a rapid return on investment: the real stock decrease (purchased medical devices) is equivalent to 46.6% of the investment. Implementation of an ASDM makes it possible to secure the storage and dispensing of medical devices. The system also has a significant economic impact and is appreciated by users. Copyright © 2017 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.

  5. Fragmented red blood cells automated measurement is a useful parameter to exclude schistocytes on the blood film.

    PubMed

    Lesesve, J-F; Asnafi, V; Braun, F; Zini, G

    2012-12-01

    The diagnosis of thrombotic microangiopathies (TMA) or disorders that may mimic their features remains difficult. Mechanical hemolytic anemia with the detection of schistocytes on the blood smear is a cornerstone finding for the diagnosis, but microscopic evaluation of schistocytes is still problematic, with wide interobserver variations. Some of the latest generation of automated blood cell counters (ABCC) offer an original quantitative approach to fragmented red cells (FRC), aiming to be equivalent to the microscopic count. This parameter has been poorly evaluated. To assess the predictive value (PV) of this test, we conducted studies comparing automated and microscopic counts of FRC/schistocytes, based on the analysis of thousands of samples in four university hospitals and using the two ABCCs currently available (Siemens ADVIA series, Sysmex XE-2100). The reference range for FRC was <0.3% for the ADVIA and <0.5% for the XE-2100. The presence of FRC below a threshold of 1% (ADVIA and XE-2100) had a negative PV close to 100% for excluding the presence of schistocytes on the blood smear, although the positive PV was poor. Our study validated the utility of the immediately available FRC parameter on ABCCs to exclude schistocytes and the diagnosis of TMA. © 2012 Blackwell Publishing Ltd.

  6. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  7. Automated Control of the Organic and Inorganic Composition of Aloe vera Extracts Using (1)H NMR Spectroscopy.

    PubMed

    Monakhova, Yulia B; Randel, Gabriele; Diehl, Bernd W K

    2016-09-01

    Recent classification of Aloe vera whole-leaf extract by the International Agency for Research on Cancer as a possible carcinogen to humans, as well as the continuous adulteration of A. vera's authentic material, has generated renewed interest in controlling A. vera. The existing NMR spectroscopic method for the analysis of A. vera, which is based on a routine developed at Spectral Service, was extended. Apart from aloverose, glucose, malic acid, lactic acid, citric acid, whole-leaf material (WLM), acetic acid, fumaric acid, sodium benzoate, and potassium sorbate, the quantification of Mg(2+), Ca(2+), and fructose is possible with the addition of a Cs-EDTA solution to the sample. The proposed methodology was automated, including phasing, baseline correction, deconvolution (based on the Lorentzian function), integration, quantification, and reporting. The NMR method was applied to 41 A. vera preparations in the form of liquid A. vera juice and solid A. vera powder. The advantages of the new NMR methodology over the previous method were discussed. Correlation between the new and standard NMR methodologies was significant for aloverose, glucose, malic acid, lactic acid, citric acid, and WLM (P < 0.0001, R(2) = 0.99). NMR was found to be suitable for the automated simultaneous quantitative determination of 13 parameters in A. vera.
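
    The deconvolution step rests on the Lorentzian line shape; below is a minimal single-peak sketch, with the starting values and the one-peak restriction as simplifications of the multi-peak fitting an automated routine would perform.

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(x, area, x0, gamma):
            # gamma = half-width at half-maximum; integral over x equals `area`
            return area * gamma / (np.pi * ((x - x0) ** 2 + gamma ** 2))

        def fit_peak(ppm, intensity, x0_guess, gamma_guess=0.01):
            """Fit one 1H NMR peak; the fitted area feeds the quantification."""
            p0 = [abs(np.trapz(intensity, ppm)), x0_guess, gamma_guess]
            (area, x0, gamma), _ = curve_fit(lorentzian, ppm, intensity, p0=p0)
            return area, x0, gamma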

  8. Optimization strategies for a fluorescent dye with bimodal excitation spectra: application to semiautomated proteomics

    NASA Astrophysics Data System (ADS)

    Patton, Wayne F.; Berggren, Kiera N.; Lopez, Mary F.

    2001-04-01

    Facilities engaged in proteome analysis differ significantly in the degree that they implement automated systems for high-throughput protein characterization. Though automated workstation environments are becoming more routine in the biotechnology and pharmaceutical sectors of industry, university-based laboratories often perform these tasks manually, submitting protein spots excised from polyacrylamide gels to institutional core facilities for identification. For broad compatibility with imaging platforms, an optimized fluorescent dye developed for proteomics applications should be designed taking into account that laser scanners use visible light excitation and that charge-coupled device camera systems and gas discharge transilluminators rely upon UV excitation. The luminescent ruthenium metal complex, SYPRO Ruby protein gel stain, is compatible with a variety of excitation sources since it displays intense UV (280 nm) and visible (470 nm) absorption maxima. Localization is achieved by noncovalent, electrostatic and hydrophobic binding of dye to proteins, with signal being detected at 610 nm. Since proteins are not covalently modified by the dye, compatibility with downstream microchemical characterization techniques such as matrix-assisted laser desorption/ionization-mass spectrometry is assured. Protocols have been devised for optimizing fluorophore intensity. SYPRO Ruby dye outperforms alternatives such as silver staining in terms of quantitative capabilities, compatibility with mass spectrometry and ease of integration into automated work environments.

  9. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  10. Quantitative determination of atmospheric hydroperoxyl radical

    DOEpatents

    Springston, Stephen R.; Lloyd, Judith; Zheng, Jun

    2007-10-23

    A method for the quantitative determination of atmospheric hydroperoxyl radical comprising: (a) contacting a liquid phase atmospheric sample with a chemiluminescent compound which luminesces on contact with hydroperoxyl radical; (b) determining luminescence intensity from the liquid phase atmospheric sample; and (c) comparing said luminescence intensity from the liquid phase atmospheric sample to a standard luminescence intensity for hydroperoxyl radical. An apparatus for automating the method is also included.

  11. Sociolinguistically Informed Natural Language Processing: Automating Irony Detection

    DTIC Science & Technology

    2017-10-23

    Aim 2. To analyze when existing ML and NLP technologies fail to detect ironic intent empirically. We specifically proposed to assess this quantitatively using the collected dataset: labeled comments, together with the text of the embedding reddit thread and the other comments in that thread, organized per sub-reddit (URL, description, number of labeled comments).

  12. Advancing the Fork detector for quantitative spent nuclear fuel verification

    DOE PAGES

    Vaccaro, S.; Gauld, I. C.; Hu, J.; ...

    2018-01-31

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized, sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.
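
    A hedged sketch of a go/no-go check of the kind described above: compare the measured rate with the ORIGEN-based prediction and flag the assembly when the ratio leaves a tolerance band. The symmetric two-sigma band is an illustrative assumption, not the module's actual criterion.

        def go_no_go(measured_rate, predicted_rate, rel_uncertainty, n_sigma=2.0):
            """True if the measurement confirms the operator declaration."""
            ratio = measured_rate / predicted_rate
            return abs(ratio - 1.0) <= n_sigma * rel_uncertainty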

  13. Advancing the Fork detector for quantitative spent nuclear fuel verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccaro, S.; Gauld, I. C.; Hu, J.

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized, sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  14. Advancing the Fork detector for quantitative spent nuclear fuel verification

    NASA Astrophysics Data System (ADS)

    Vaccaro, S.; Gauld, I. C.; Hu, J.; De Baere, P.; Peterson, J.; Schwalbach, P.; Smejkal, A.; Tomanin, A.; Sjöland, A.; Tobin, S.; Wiarda, D.

    2018-04-01

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This paper describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. The results are summarized, and sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  15. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin-counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), as well as compared to the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity, and amenable to simple automation procedures. These characteristics are advantageous for both basic and clinical research in an unbiased, reproducible and high-throughput evaluation of IHC intensity. PMID:17326824
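
    In sketch form, the core colorspace step is the standard RGB-to-CMYK conversion restricted to the Yellow channel; the uint8 input convention is an assumption.

        import numpy as np

        def yellow_channel(rgb):
            """rgb: (h, w, 3) uint8 image; returns the CMYK Yellow channel in [0, 1]."""
            rgbn = rgb.astype(float) / 255.0
            k = 1.0 - rgbn.max(axis=2)               # Key (black) channel
            denom = np.where(k < 1.0, 1.0 - k, 1.0)  # avoid division by zero
            y = (1.0 - rgbn[..., 2] - k) / denom     # Yellow derives from the blue channel
            return np.where(k < 1.0, y, 0.0)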

  16. Preliminary evaluation of a fully automated quantitative framework for characterizing general breast tissue histology via color histogram and color texture analysis

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Gastounioti, Aimilia; Batiste, Rebecca C.; Kontos, Despina; Feldman, Michael D.

    2016-03-01

    Visual characterization of histologic specimens is known to suffer from intra- and inter-observer variability. To help address this, we developed an automated framework for characterizing digitized histology specimens based on a novel application of color histogram and color texture analysis. We perform a preliminary evaluation of this framework using a set of 73 trichrome-stained, digitized slides of normal breast tissue which were visually assessed by an expert pathologist in terms of the percentage of collagenous stroma, stromal collagen density, duct-lobular unit density and the presence of elastosis. For each slide, our algorithm automatically segments the tissue region based on the lightness channel in CIELAB colorspace. Within each tissue region, a color histogram feature vector is extracted using a common color palette for trichrome images generated with a previously described method. Then, using a whole-slide, lattice-based methodology, color texture maps are generated using a set of color co-occurrence matrix statistics: contrast, correlation, energy and homogeneity. The extracted features sets are compared to the visually assessed tissue characteristics. Overall, the extracted texture features have high correlations to both the percentage of collagenous stroma (r=0.95, p<0.001) and duct-lobular unit density (r=0.71, p<0.001) seen in the tissue samples, and several individual features were associated with either collagen density and/or the presence of elastosis (p<=0.05). This suggests that the proposed framework has promise as a means to quantitatively extract descriptors reflecting tissue-level characteristics and thus could be useful in detecting and characterizing histological processes in digitized histology specimens.
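
    A sketch of the per-window texture step using gray-level co-occurrence statistics via scikit-image; applying it to a single color channel of a lattice window, and the distance and angle choices, are assumptions about the workflow rather than the authors' exact settings.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(window_u8, distances=(1,), angles=(0.0, np.pi / 2)):
            """window_u8: 2-D uint8 patch (one color channel of a lattice window)."""
            glcm = graycomatrix(window_u8, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            return {prop: float(graycoprops(glcm, prop).mean())
                    for prop in ("contrast", "correlation", "energy", "homogeneity")}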

  17. Semi-automated method to measure pneumonia severity in mice through computed tomography (CT) scan analysis

    NASA Astrophysics Data System (ADS)

    Johri, Ansh; Schimel, Daniel; Noguchi, Audrey; Hsu, Lewis L.

    2010-03-01

    Imaging is a crucial clinical tool for diagnosis and assessment of pneumonia, but quantitative methods are lacking. Micro-computed tomography (micro CT), designed for lab animals, provides opportunities for non-invasive radiographic endpoints for pneumonia studies. HYPOTHESIS: In vivo micro CT scans of mice with early bacterial pneumonia can be scored quantitatively by semiautomated imaging methods, with good reproducibility and correlation with bacterial dose inoculated, pneumonia survival outcome, and radiologists' scores. METHODS: Healthy mice had intratracheal inoculation of E. coli bacteria (n=24) or saline control (n=11). In vivo micro CT scans were performed 24 hours later with microCAT II (Siemens). Two independent radiologists scored the extent of airspace abnormality, on a scale of 0 (normal) to 24 (completely abnormal). Using the Amira 5.2 software (Mercury Computer Systems), a histogram distribution of voxel counts within the Hounsfield range of -510 to 0 was created and analyzed, and a segmentation procedure was devised. RESULTS: A t-test was performed to determine whether there was a significant difference in the mean voxel value of each mouse in the three experimental groups: Saline Survivors, Pneumonia Survivors, and Pneumonia Non-survivors. The voxel count method statistically distinguished the Saline Survivors from the Pneumonia Survivors and from the Pneumonia Non-survivors, but not the Pneumonia Survivors from the Pneumonia Non-survivors. The segmentation method, however, successfully distinguished the two Pneumonia groups. CONCLUSION: We have pilot-tested an evaluation of early pneumonia in mice using micro CT with a semi-automated method for lung segmentation and scoring. Statistical analysis indicates that the system is reliable and merits further evaluation.
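    A minimal reconstruction of the histogram metric and group comparison, assuming per-mouse fractions of voxels in the stated -510 to 0 HU window; the example values are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

def airspace_metric(hu_volume, lo=-510, hi=0):
    """Fraction of voxels falling in the abnormal HU window used in the
    study (-510 to 0); hu_volume is a 3D array of Hounsfield units."""
    sel = (hu_volume >= lo) & (hu_volume <= hi)
    return sel.mean()

# Two-sample t-test between groups of mice, one metric value per animal
# (hypothetical numbers for illustration):
saline = np.array([0.11, 0.09, 0.12, 0.10])
pneumonia = np.array([0.18, 0.22, 0.20, 0.25])
t, p = stats.ttest_ind(saline, pneumonia)
```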

  18. Fully automated analytical procedure for propofol determination by sequential injection technique with spectrophotometric and fluorimetric detections.

    PubMed

    Šrámková, Ivana; Amorim, Célia G; Sklenářová, Hana; Montenegro, Maria C B M; Horstkotte, Burkhard; Araújo, Alberto N; Solich, Petr

    2014-01-01

    In this work, an application of an enzymatic reaction for the determination of the highly hydrophobic drug propofol in emulsion dosage form is presented. Emulsions represent a complex and therefore challenging matrix for analysis. Ethanol was used for breakage of a lipid emulsion, which enabled optical detection. A fully automated method based on Sequential Injection Analysis was developed, allowing propofol determination without the requirement of tedious sample pre-treatment. The method was based on spectrophotometric detection after the enzymatic oxidation catalysed by horseradish peroxidase and subsequent coupling with 4-aminoantipyrine leading to a coloured product with an absorbance maximum at 485 nm. This procedure was compared with a simple fluorimetric method, which was based on the direct selective fluorescence emission of propofol in ethanol at 347 nm. Both methods provide comparable validation parameters with linear working ranges of 0.005-0.100 mg mL⁻¹ and 0.004-0.243 mg mL⁻¹ for the spectrophotometric and fluorimetric methods, respectively. The detection and quantitation limits achieved with the spectrophotometric method were 0.0016 and 0.0053 mg mL⁻¹, respectively. The fluorimetric method provided a detection limit of 0.0013 mg mL⁻¹ and a limit of quantitation of 0.0043 mg mL⁻¹. The RSD did not exceed 5% and 2% (n=10), respectively. A sample throughput of approx. 14 h⁻¹ for the spectrophotometric and 68 h⁻¹ for the fluorimetric detection was achieved. Both methods proved to be suitable for the determination of propofol in pharmaceutical formulation with average recovery values of 98.1 and 98.5%. © 2013 Elsevier B.V. All rights reserved.
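    For readers wanting to reproduce this style of validation, here is a hedged sketch of the common ICH-style estimates LOD = 3.3·s/S and LOQ = 10·s/S from a linear calibration, with a hypothetical six-point calibration; the paper does not state which LOD convention it used:

```python
import numpy as np

def lod_loq(conc, signal):
    """Estimate LOD/LOQ from a linear calibration per the common ICH
    approach: LOD = 3.3*s/S and LOQ = 10*s/S, where S is the slope and s
    the residual standard deviation of the fit."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s = resid.std(ddof=2)                 # n-2 degrees of freedom
    return 3.3 * s / slope, 10.0 * s / slope

# Hypothetical six-point calibration (mg/mL vs. absorbance at 485 nm):
conc = np.array([0.005, 0.02, 0.04, 0.06, 0.08, 0.10])
absorb = np.array([0.031, 0.118, 0.236, 0.352, 0.471, 0.590])
lod, loq = lod_loq(conc, absorb)
```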

  19. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  20. An easy and inexpensive method for quantitative analysis of endothelial damage by using vital dye staining and Adobe Photoshop software.

    PubMed

    Saad, Hisham A; Terry, Mark A; Shamie, Neda; Chen, Edwin S; Friend, Daniel F; Holiman, Jeffrey D; Stoeger, Christopher

    2008-08-01

    We developed a simple, practical, and inexpensive technique to analyze areas of endothelial cell loss and/or damage over the entire corneal area after vital dye staining by using a readily available, off-the-shelf, consumer software program, Adobe Photoshop. The purpose of this article is to convey a method of quantifying areas of cell loss and/or damage. Descemet-stripping automated endothelial keratoplasty corneal transplant surgery was performed by using 5 precut corneas on a human cadaver eye. Corneas were removed and stained with trypan blue and alizarin red S and subsequently photographed. Quantitative assessment of endothelial damage was performed by using Adobe Photoshop 7.0 software. The average difference for cell area damage for analyses performed by 1 observer twice was 1.41%. For analyses performed by 2 observers, the average difference was 1.71%. Three masked observers were 100% successful in matching the randomized stained corneas to their randomized processed Adobe images. Vital dye staining of corneal endothelial cells can be combined with Adobe Photoshop software to yield a quantitative assessment of areas of acute endothelial cell loss and/or damage. This described technique holds promise for a more consistent and accurate method to evaluate the surgical trauma to the endothelial cell layer in laboratory models. This method of quantitative analysis can probably be generalized to any area of research that involves areas that are differentiated by color or contrast.
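    A numpy-only sketch of the underlying measurement, counting pixels inside a chosen RGB window over a cornea mask, much as a Photoshop color-range selection does; the blue window given is a placeholder, not the authors' calibrated values:

```python
import numpy as np

def damaged_area_percent(img, mask, rgb_lo=(60, 60, 120), rgb_hi=(160, 160, 255)):
    """Percent of the masked corneal area whose color falls inside an RGB
    window (here a hypothetical blue range for trypan-blue-positive pixels).
    img: HxWx3 uint8 array; mask: HxW boolean array outlining the cornea."""
    img = img[..., :3].astype(np.int32)
    lo, hi = np.array(rgb_lo), np.array(rgb_hi)
    stained = np.all((img >= lo) & (img <= hi), axis=-1) & mask
    return 100.0 * stained.sum() / mask.sum()
```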

  1. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease.

    PubMed

    Tuszynski, Tobias; Rullmann, Michael; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Gertz, Hermann-Josef; Hesse, Swen; Seese, Anita; Lobsien, Donald; Sabri, Osama; Barthel, Henryk

    2016-06-01

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures were developed. As the quality and performance of those tools have so far been poorly investigated for amyloid PET data, we compared four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. The best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has great potential to substitute for the current standard procedure of manually defining VOIs in β-amyloid PET data analysis.
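    The statistics pipeline (SUVR, Mann-Whitney U, Cohen's d) is straightforward to sketch; the group values below are hypothetical, chosen only to mirror the direction of the reported effect:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def suvr(target_mean, cerebellum_mean):
    """Standardized uptake value ratio with cerebellar cortex as reference."""
    return target_mean / cerebellum_mean

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                 / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / sp

# Hypothetical composite-neocortex SUVRs for AD patients vs. controls:
ad = np.array([1.52, 1.61, 1.44, 1.70, 1.38])
hc = np.array([1.10, 1.05, 1.18, 1.12, 1.08])
u, p = mannwhitneyu(ad, hc, alternative="two-sided")
d = cohens_d(ad, hc)
```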

  2. Quantitative Radiology: Automated CT Liver Volumetry Compared With Interactive Volumetry and Manual Volumetry

    PubMed Central

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Garg, Shailesh; Hori, Masatoshi; Oto, Aytekin; Baron, Richard L.

    2014-01-01

    OBJECTIVE The purpose of this study was to evaluate automated CT volumetry in the assessment of living-donor livers for transplant and to compare this technique with software-aided interactive volumetry and manual volumetry. MATERIALS AND METHODS Hepatic CT scans of 18 consecutively registered prospective liver donors were obtained under a liver transplant protocol. Automated liver volumetry was developed on the basis of 3D active-contour segmentation. To establish reference standard liver volumes, a radiologist manually traced the contour of the liver on each CT slice. We compared the results obtained with automated and interactive volumetry with those obtained with the reference standard for this study, manual volumetry. RESULTS The average interactive liver volume was 1553 ± 343 cm³, and the average automated liver volume was 1520 ± 378 cm³. The average manual volume was 1486 ± 343 cm³. Both interactive and automated volumetric results had excellent agreement with manual volumetric results (intraclass correlation coefficients, 0.96 and 0.94). The average user time for automated volumetry was 0.57 ± 0.06 min/case, whereas those for interactive and manual volumetry were 27.3 ± 4.6 and 39.4 ± 5.5 min/case, the difference being statistically significant (p < 0.05). CONCLUSION Both interactive and automated volumetry are accurate for measuring liver volume with CT, but automated volumetry is substantially more efficient. PMID:21940543

  3. Quantitative radiology: automated CT liver volumetry compared with interactive volumetry and manual volumetry.

    PubMed

    Suzuki, Kenji; Epstein, Mark L; Kohlbrenner, Ryan; Garg, Shailesh; Hori, Masatoshi; Oto, Aytekin; Baron, Richard L

    2011-10-01

    The purpose of this study was to evaluate automated CT volumetry in the assessment of living-donor livers for transplant and to compare this technique with software-aided interactive volumetry and manual volumetry. Hepatic CT scans of 18 consecutively registered prospective liver donors were obtained under a liver transplant protocol. Automated liver volumetry was developed on the basis of 3D active-contour segmentation. To establish reference standard liver volumes, a radiologist manually traced the contour of the liver on each CT slice. We compared the results obtained with automated and interactive volumetry with those obtained with the reference standard for this study, manual volumetry. The average interactive liver volume was 1553 ± 343 cm³, and the average automated liver volume was 1520 ± 378 cm³. The average manual volume was 1486 ± 343 cm³. Both interactive and automated volumetric results had excellent agreement with manual volumetric results (intraclass correlation coefficients, 0.96 and 0.94). The average user time for automated volumetry was 0.57 ± 0.06 min/case, whereas those for interactive and manual volumetry were 27.3 ± 4.6 and 39.4 ± 5.5 min/case, the difference being statistically significant (p < 0.05). Both interactive and automated volumetry are accurate for measuring liver volume with CT, but automated volumetry is substantially more efficient.
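    The agreement statistic used in both records above, the intraclass correlation coefficient, can be computed directly; the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measures), which is one common choice, though the papers do not state which ICC form they used:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    x is an (n cases x k methods) array, e.g. per-donor liver volumes from
    the automated, interactive and manual measurements."""
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-case SS
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-method SS
    sst = ((x - grand) ** 2).sum()
    sse = sst - ssr - ssc
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical volumes (cm^3) for 4 donors x 2 methods:
vols = np.array([[1520.0, 1486.0], [1610.0, 1590.0],
                 [1400.0, 1385.0], [1700.0, 1675.0]])
icc = icc_2_1(vols)
```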

  4. Application of automated image analysis to coal petrography

    USGS Publications Warehouse

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xᵢ), i = 1, 2, etc., in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral content M. The volume percentage of each component present is indicated by a subscript. For example, a lithologic unit was determined megascopically to have the composition (V)₁₃(I)₁(S)₁(X₁)₈₃(X₂)₂. After microscopic analysis of the mixed phases, this composition was expressed as (V)₁₃(I)₁(S)₁(V₆₃E₁₉I₁₄M₄)₈₃(V₆₇E₁₁I₁₃M₉)₂. Finally, these data were combined in a description of the bulk composition as V₆₇E₁₆I₁₃M₃S₁. An AIAS can also analyze textural characteristics and can be used for quick and reliable determination of rank (reflectance). Our AIAS is completely software based and incorporates a television (TV) camera that has optimum response characteristics in the range of reflectance less than 5%, making it particularly suitable for coal studies. Analysis of the digitized signal from the TV camera is controlled by a microprocessor having a resolution of 64 gray levels between full illumination and dark current. The processed image is reconverted for display on a TV monitor screen, on which selection of phases or features to be analyzed is readily controlled and edited by the operator through use of a lightpen. We expect that automated image analysis, because it can rapidly provide a large amount of pertinent information, will play a major role in the advancement of coal petrography. © 1982.

  5. Application of tissue mesodissection to molecular cancer diagnostics.

    PubMed

    Krizman, David; Adey, Nils; Parry, Robert

    2015-02-01

    To demonstrate clinical application of a mesodissection platform that was developed to combine the advantages of laser-based instrumentation with the speed/ease of manual dissection for automated dissection of tissue off standard glass slides. Genomic analysis for KRAS gene mutation was performed on formalin-fixed paraffin-embedded (FFPE) cancer patient tissue that was dissected using the mesodissection platform. Selected reaction monitoring proteomic analysis for quantitative Her2 protein expression was performed on FFPE patient tumour tissue dissected by a laser-based instrument and the MilliSect instrument. Genomic analysis demonstrates highly confident detection of KRAS mutation specifically in lung cancer cells and not the surrounding benign, non-tumour tissue. Proteomic analysis demonstrates Her2 quantitative protein expression in breast cancer cells dissected manually, by laser-based instrumentation and by MilliSect instrumentation (mesodissection). Slide-mounted tissue dissection is commonly performed using laser-based instruments or by manually scraping tissue with a scalpel. Here we demonstrate that the mesodissection platform, as performed by the MilliSect instrument, is cost-effective, functions comparably to laser-based dissection and can be adopted into a clinical diagnostic workflow. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Automated extraction of lysergic acid diethylamide (LSD) and N-demethyl-LSD from blood, serum, plasma, and urine samples using the Zymark RapidTrace with LC/MS/MS confirmation.

    PubMed

    de Kanel, J; Vickery, W E; Waldner, B; Monahan, R M; Diamond, F X

    1998-05-01

    A forensic procedure for the quantitative confirmation of lysergic acid diethylamide (LSD) and the qualitative confirmation of its metabolite, N-demethyl-LSD, in blood, serum, plasma, and urine samples is presented. The Zymark RapidTrace was used to perform fully automated solid-phase extractions of all specimen types. After extract evaporation, confirmations were performed using liquid chromatography (LC) followed by positive electrospray ionization (ESI+) mass spectrometry/mass spectrometry (MS/MS) without derivatization. Quantitation of LSD was accomplished using LSD-d3 as an internal standard. The limit of quantitation (LOQ) for LSD was 0.05 ng/mL. The limit of detection (LOD) for both LSD and N-demethyl-LSD was 0.025 ng/mL. The recovery of LSD was greater than 95% at levels of 0.1 ng/mL and 2.0 ng/mL. For LSD at 1.0 ng/mL, the within-run and between-run (different day) relative standard deviation (RSD) was 2.2% and 4.4%, respectively.
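    Quantitation against a deuterated internal standard reduces to a linear fit of analyte-to-IS response ratios; a hedged sketch with hypothetical calibrator data (not the paper's values):

```python
import numpy as np

def quantify_with_istd(area_analyte, area_istd, calib_conc, calib_ratio):
    """Quantify an analyte from peak-area ratios against a deuterated
    internal standard (here LSD vs. LSD-d3): fit ratio = m*conc + b on
    the calibrators, then invert for the unknown."""
    m, b = np.polyfit(calib_conc, calib_ratio, 1)
    return (area_analyte / area_istd - b) / m

# Hypothetical calibrators (ng/mL) and their analyte/IS area ratios:
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])
ratio = np.array([0.024, 0.049, 0.247, 0.498, 0.995])
c = quantify_with_istd(5230.0, 10500.0, conc, ratio)
```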

  7. Instrumentation Automation for Concrete Structures; Report 1: Instrumentation Automation Techniques

    DTIC Science & Technology

    1986-12-01

    The international measuring system sets up independent standards for these fundamental quantities. All other quantities (force, acceleration, ...) ... measurement systems are typically composed of several fundamental [elements], each performing a special function (Figure 1). ... The accuracy of a quantitative measurement is ... A fundamental function of an instrumentation system is to present desired measurement data to the user in a form that ...

  8. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    PubMed Central

    Xue, Yong; Chen, Shihui; Liu, Yong

    2017-01-01

    Molecular imaging enables the visualization and quantitative analysis of the alterations of biological procedures at the molecular and/or cellular level, which is of great significance for early detection of cancer. In recent years, deep learning has been widely used in medical image analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing rapidly. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in the applications in cancer molecular imaging. PMID:29114182

  9. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging

    PubMed Central

    Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.

    2014-01-01

    Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/ Contact: guy.hagen@lf1.cuni.cz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516

  10. Fully-automated, high-throughput micro-computed tomography analysis of body composition enables therapeutic efficacy monitoring in preclinical models.

    PubMed

    Wyatt, S K; Barck, K H; Kates, L; Zavala-Solorio, J; Ross, J; Kolumam, G; Sonoda, J; Carano, R A D

    2015-11-01

    The ability to non-invasively measure body composition in mouse models of obesity and obesity-related disorders is essential for elucidating mechanisms of metabolic regulation and monitoring the effects of novel treatments. These studies aimed to develop a fully automated, high-throughput micro-computed tomography (micro-CT)-based image analysis technique for longitudinal quantitation of adipose, non-adipose and lean tissue as well as bone, and to demonstrate its utility for assessing the effects of two distinct treatments. An initial validation study was performed in diet-induced obesity (DIO) and control mice on a vivaCT 75 micro-CT system. Subsequently, four groups of DIO mice were imaged pre- and post-treatment with an experimental agonistic antibody specific for fibroblast growth factor receptor 1 (anti-FGFR1, R1MAb1), a control immunoglobulin G antibody, a known anorectic antiobesity drug (rimonabant, SR141716), or solvent control. The body composition analysis technique was then ported to a faster micro-CT system (CT120) to markedly increase throughput as well as to evaluate the use of micro-CT image intensity for hepatic lipid content in DIO and control mice. Ex vivo chemical analysis and colorimetric analysis of the liver triglycerides were performed as the standard metrics for correlation with body composition and hepatic lipid status, respectively. Micro-CT-based body composition measures correlate with ex vivo chemical analysis metrics and enable distinction between DIO and control mice. R1MAb1 and rimonabant have differing effects on body composition as assessed by micro-CT. High-throughput body composition imaging is possible using a modified CT120 system. Micro-CT also provides a non-invasive assessment of hepatic lipid content. This work describes, validates and demonstrates the utility of a fully automated image analysis technique to quantify in vivo micro-CT-derived measures of adipose, non-adipose and lean tissue, as well as bone. These body composition metrics correlate highly with standard ex vivo chemical analysis and enable longitudinal evaluation of body composition and therapeutic efficacy monitoring.

  11. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing data sources from different services and on different abstraction levels together and implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single service boundaries and the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for MapReduce and external access, and will describe the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between CPU/wall fraction, latency/throughput constraints of network and disk and the effective job throughput. In this contribution we will first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.

  12. Additional band broadening of peptides in the first size-exclusion chromatographic dimension of an automated stop-flow two-dimensional high performance liquid chromatography.

    PubMed

    Xu, Jucai; Sun-Waterhouse, Dongxiao; Qiu, Chaoying; Zhao, Mouming; Sun, Baoguo; Lin, Lianzhu; Su, Guowan

    2017-10-27

    The need to improve the peak capacity of liquid chromatography motivates the development of two-dimensional analysis systems. This paper presented a fully automated stop-flow two-dimensional liquid chromatography system with size exclusion chromatography followed by reversed phase liquid chromatography (SEC×RPLC) to efficiently separate peptides. The effects of different stop-flow operational parameters (stop-flow time, peak parking position, number of stop-flow periods and column temperature) on band broadening in the first dimension (1st D) SEC column were quantitatively evaluated by using commercial small proteins and peptides. Results showed that the effects of peak parking position and the number of stop-flow periods on band broadening were relatively small. Unlike stop-flow analysis of large molecules with a long running time, additional band broadening was evident for small molecule analytes due to their relatively high effective diffusion coefficient (Deff). Therefore, shorter analysis time and lower 1st D column temperature are suggested for analyzing small molecules. The stop-flow two-dimensional liquid chromatography (2D-LC) system was further tested on peanut peptides, and a clearly improved resolution was observed for both stop-flow heart-cutting and comprehensive 2D-LC analysis (in spite of additional band broadening in SEC). The stop-flow SEC×RPLC, especially heart-cutting analysis with shorter analysis time and higher 1st D resolution for selected fractions, offers a promising approach for efficient analysis of complex samples. Copyright © 2017 Elsevier B.V. All rights reserved.
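    The physics behind the small-molecule penalty is longitudinal diffusion during parking, σ² = 2·Deff·t_stop; a back-of-the-envelope comparison with typical order-of-magnitude Deff values (not the paper's measurements):

```python
import numpy as np

def added_variance(d_eff, t_stop):
    """Extra longitudinal band variance (length^2) accrued while the analyte
    is parked in the 1st D column: sigma^2 = 2 * Deff * t_stop (Einstein
    relation, ignoring obstruction factors in the packed bed)."""
    return 2.0 * d_eff * t_stop

# A small molecule (Deff ~ 1e-9 m^2/s) parked for 30 min spreads about
# sqrt(10) ~ 3x more (in band length) than a protein (Deff ~ 1e-10 m^2/s):
sigma_small = np.sqrt(added_variance(1e-9, 1800.0))   # ~1.9 mm
sigma_large = np.sqrt(added_variance(1e-10, 1800.0))  # ~0.6 mm
```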

  13. Image-Based Quantification of Plant Immunity and Disease.

    PubMed

    Laflamme, Bradley; Middleton, Maggie; Lo, Timothy; Desveaux, Darrell; Guttman, David S

    2016-12-01

    Measuring the extent and severity of disease is a critical component of plant pathology research and crop breeding. Unfortunately, existing visual scoring systems are qualitative, subjective, and the results are difficult to transfer between research groups, while existing quantitative methods can be quite laborious. Here, we present plant immunity and disease image-based quantification (PIDIQ), a quantitative, semi-automated system to rapidly and objectively measure disease symptoms in a biologically relevant context. PIDIQ applies an ImageJ-based macro to plant photos in order to distinguish healthy tissue from tissue that has yellowed due to disease. It can process a directory of images in an automated manner and report the relative ratios of healthy to diseased leaf area, thereby providing a quantitative measure of plant health that can be statistically compared with appropriate controls. We used the Arabidopsis thaliana-Pseudomonas syringae model system to show that PIDIQ is able to identify both enhanced plant health associated with effector-triggered immunity as well as elevated disease symptoms associated with effector-triggered susceptibility. Finally, we show that the quantitative results provided by PIDIQ correspond to those obtained via traditional in planta pathogen growth assays. PIDIQ provides a simple and effective means to nondestructively quantify disease from whole plants and we believe it will be equally effective for monitoring disease on excised leaves and stems.
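    PIDIQ itself is an ImageJ macro; the Python sketch below conveys the same idea, classifying healthy versus chlorotic pixels by hue windows. The hue bounds are illustrative assumptions that would need tuning per camera and lighting, as would any macro thresholds:

```python
import numpy as np
from skimage.color import rgb2hsv

def healthy_to_diseased_ratio(rgb, plant_mask,
                              green=(0.18, 0.45), yellow=(0.08, 0.18)):
    """Ratio of green (healthy) to yellow (chlorotic) leaf pixels using hue
    windows in HSV space. rgb: HxWx3 image; plant_mask: boolean mask of
    plant pixels; hue values are on scikit-image's 0-1 scale."""
    hue = rgb2hsv(rgb)[..., 0]
    healthy = plant_mask & (hue >= green[0]) & (hue < green[1])
    diseased = plant_mask & (hue >= yellow[0]) & (hue < yellow[1])
    return healthy.sum() / max(diseased.sum(), 1)
```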

  14. Push-through Direct Injection NMR Automation

    EPA Science Inventory

    Nuclear magnetic resonance (NMR) and mass spectrometry (MS) are the two major spectroscopic techniques successfully used in metabolomics studies. The non-invasive, quantitative and reproducible characteristics make NMR spectroscopy an excellent technique for detection of endogeno...

  15. Quantification of vocal fold motion using echography: application to recurrent nerve paralysis detection

    NASA Astrophysics Data System (ADS)

    Cohen, Mike-Ely; Lefort, Muriel; Bergeret-Cassagne, Héloïse; Hachi, Siham; Li, Ang; Russ, Gilles; Lazard, Diane; Menegaux, Fabrice; Leenhardt, Laurence; Trésallet, Christophe; Frouin, Frédérique

    2015-03-01

    Recurrent nerve paralysis (RP) is one of the most frequent complications of thyroid surgery. It reduces vocal fold mobility. Nasal endoscopy, a mini-invasive procedure, is the conventional way to detect RP. We suggest a new approach based on laryngeal ultrasound, with a specific data analysis designed to help with the automated detection of RP. Ten subjects were enrolled for this feasibility study: four controls, three patients with RP and three patients without RP according to nasal endoscopy. The ultrasound protocol was based on a ten-second B-mode acquisition in a coronal plane during normal breathing. Image processing included three steps: 1) automated detection of two consecutive closing and opening images, corresponding to extreme positions of vocal folds in the sequence of B-mode images, using principal component analysis of the image sequence; 2) positioning of three landmarks and robust tracking of these points using a multi-pyramidal refined optical flow approach; 3) estimation of quantitative parameters indicating left and right fractions of mobility, and motion symmetry. Results provided by automated image processing were compared to those obtained by an expert. Detection of extreme images was accurate; tracking of landmarks was reliable in 80% of cases. Motion symmetry indices showed similar values for controls and patients without RP. Fraction of mobility was reduced in cases of RP. Thus, our CAD system helped in the detection of RP. Laryngeal ultrasound combined with appropriate image processing helped in the diagnosis of recurrent nerve paralysis and could be proposed as a first-line method.
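    Step 2 resembles pyramidal Lucas-Kanade tracking as implemented in OpenCV; the sketch below is a hedged stand-in for the authors' refined multi-pyramidal optical flow, with frames and landmark coordinates as assumed inputs:

```python
import cv2
import numpy as np

def track_landmarks(frames, pts0):
    """Track manually placed vocal-fold landmarks through a B-mode sequence
    with pyramidal Lucas-Kanade optical flow. frames: list of 8-bit grayscale
    images; pts0: (n_points, 2) float array of initial coordinates."""
    pts = pts0.reshape(-1, 1, 2).astype(np.float32)
    track = [pts.reshape(-1, 2).copy()]
    for prev, cur in zip(frames[:-1], frames[1:]):
        pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev, cur, pts, None, winSize=(21, 21), maxLevel=3)
        track.append(pts.reshape(-1, 2).copy())
    return np.stack(track)   # (n_frames, n_points, 2) trajectories

# Left/right excursion amplitudes along the trajectories could then feed a
# fraction-of-mobility or symmetry index as described in the abstract.
```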

  16. High-resolution monitoring of marine protists based on an observation strategy integrating automated on-board filtration and molecular analyses

    NASA Astrophysics Data System (ADS)

    Metfies, Katja; Schroeder, Friedhelm; Hessel, Johanna; Wollschläger, Jochen; Micheller, Sebastian; Wolf, Christian; Kilias, Estelle; Sprong, Pim; Neuhaus, Stefan; Frickenhaus, Stephan; Petersen, Wilhelm

    2016-11-01

    Information on recent biomass distribution and biogeography of photosynthetic marine protists with adequate temporal and spatial resolution is urgently needed to better understand the consequences of environmental change for marine ecosystems. Here we introduce and review a molecular-based observation strategy for high-resolution assessment of these protists in space and time. It is the result of extensive technology developments, adaptations and evaluations which are documented in a number of different publications, and the results of the recently completed field testing which are introduced in this paper. The observation strategy is organized at four different levels. At level 1, samples are collected at high spatiotemporal resolution using the remotely controlled automated filtration system AUTOFIM. Resulting samples can either be preserved for later laboratory analyses, or directly subjected to molecular surveillance of key species aboard the ship via an automated biosensor system or quantitative polymerase chain reaction (level 2). Preserved samples are analyzed at the next observational levels in the laboratory (levels 3 and 4). At level 3 this involves molecular fingerprinting methods for a quick and reliable overview of differences in protist community composition. Finally, selected samples can be used to generate a detailed analysis of taxonomic protist composition via the latest next generation sequencing technology (NGS) at level 4. An overall integrated dataset of the results based on the different analyses provides comprehensive information on the diversity and biogeography of protists, including all related size classes. At the same time the cost of the observation is optimized with respect to analysis effort and time.

  17. Planning bioinformatics workflows using an expert system.

    PubMed

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

    Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
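    A toy backward chainer conveys the core idea: rules declare which input data types an operation needs to produce an output type, and the engine recursively plans a workflow for a requested type. The rule and type names below are invented for illustration and are far simpler than BETSY's data model:

```python
# Each output type maps to (operation, required input types) alternatives.
RULES = {
    "aligned_reads": [("align", ["fastq", "reference_genome"])],
    "expression_matrix": [("count_features", ["aligned_reads", "annotation"])],
    "de_genes": [("differential_expression", ["expression_matrix", "groups"])],
}

def plan(goal, available, steps=None):
    """Return an ordered list of operations producing `goal` from the set of
    `available` data types, or None if no rule chain succeeds."""
    steps = [] if steps is None else steps
    if goal in available:
        return steps
    for op, needs in RULES.get(goal, []):
        sub, ok = steps, True
        for need in needs:
            sub = plan(need, available, sub)
            if sub is None:
                ok = False
                break
        if ok:
            return sub + [op]
    return None

print(plan("de_genes", {"fastq", "reference_genome", "annotation", "groups"}))
# -> ['align', 'count_features', 'differential_expression']
```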

  18. Cryo-imaging in a toxicological study on mouse fetuses

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Gargesha, Madhusudhana; Sloter, Eddie; Watanabe, Michiko; Wilson, David

    2010-03-01

    We applied the Case cryo-imaging system to detect signals of developmental toxicity in transgenic mouse fetuses resulting from maternal exposure to a developmental environmental toxicant (2,3,7,8-tetrachlorodibenzo-p-dioxin, TCDD). We utilized a fluorescent transgenic mouse model that expresses Green Fluorescent Protein (GFP) exclusively in smooth muscles under the control of the smooth muscle gamma actin (SMGA) promoter (SMGA/EGFP mice kindly provided by J. Lessard, U. Cincinnati). Analysis of cryo-image data volumes, comprising very high-resolution anatomical brightfield and molecular fluorescence block-face images, revealed qualitative and quantitative morphological differences in control versus exposed fetuses. Fetuses randomly chosen from pregnant females euthanized on gestation day (GD) 18 were either manually examined or cryo-imaged. For cryo-imaging, fetuses were embedded, frozen and cryo-sectioned at 20 μm thickness, and brightfield color and fluorescent block-face images were acquired with an in-plane resolution of ~15 μm. Automated 3D volume visualization schemes segmented out the black embedding medium and blended fluorescence and brightfield data to produce 3D reconstructions of all fetuses. Comparison of the treatment groups TCDD GD13, TCDD GD14 and control through automated analysis tools highlighted differences not observable by prosectors performing traditional fresh dissection. For example, severe hydronephrosis, suggestive of irreversible kidney damage, was detected by cryo-imaging in fetuses exposed to TCDD. Automated quantification of total fluorescence in smooth muscles revealed suppressed fluorescence in TCDD-exposed fetuses. This application demonstrated that cryo-imaging can be utilized as a routine high-throughput screening tool to assess the effects of potential toxins on the developmental biology of small animals.

  19. Planning bioinformatics workflows using an expert system

    PubMed Central

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928

  20. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    PubMed

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with accuracy superior to most state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  1. Reliability of Semi-Automated Segmentations in Glioblastoma.

    PubMed

    Huber, T; Alber, G; Bette, S; Boeckh-Behrens, T; Gempt, J; Ringel, F; Alberts, E; Zimmer, C; Bauer, J S

    2017-06-01

    In glioblastoma, quantitative volumetric measurements of contrast-enhancing or fluid-attenuated inversion recovery (FLAIR) hyperintense tumor compartments are needed for an objective assessment of therapy response. The aim of this study was to evaluate the reliability of a semi-automated, region-growing segmentation tool for determining tumor volume in patients with glioblastoma among different users of the software. A total of 320 segmentations of tumor-associated FLAIR changes and contrast-enhancing tumor tissue were performed by different raters (neuroradiologists, medical students, and volunteers). All patients underwent high-resolution magnetic resonance imaging including a 3D FLAIR and a 3D MPRAGE sequence. Segmentations were done using a semi-automated, region-growing segmentation tool. Intra- and inter-rater reliability were addressed by intraclass correlation (ICC). Root-mean-square error (RMSE) was used to determine the precision error. The Dice score was calculated to measure the overlap between segmentations. Semi-automated segmentation showed a high ICC (> 0.985) for all groups, indicating excellent intra- and inter-rater reliability. Significantly smaller precision errors and higher Dice scores were observed for FLAIR segmentations compared with segmentations of contrast enhancement. Single-rater segmentations showed the lowest RMSE for FLAIR of 3.3% (MPRAGE: 8.2%). Both single raters and neuroradiologists had the lowest precision error for longitudinal evaluation of FLAIR changes. Semi-automated volumetry of glioblastoma was reliably performed by all groups of raters, even without neuroradiologic expertise. Interestingly, segmentations of tumor-associated FLAIR changes were more reliable than segmentations of contrast enhancement. In longitudinal evaluations, an experienced rater can reliably detect progressive FLAIR changes of less than 15% in a quantitative way, which could help to detect progressive disease earlier.
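    The two overlap/precision metrics reported here are easy to state exactly; the sketch below uses the standard Dice definition and one common definition of the RMSE precision error (root-mean-square of per-lesion coefficients of variation), which may differ in detail from the study's formula:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def precision_error_rmse(volumes):
    """Root-mean-square precision error (%) across repeated segmentations:
    RMS of per-lesion coefficients of variation. `volumes` is an
    (n_lesions x n_repeats) array of segmented volumes."""
    cv = volumes.std(axis=1, ddof=1) / volumes.mean(axis=1) * 100.0
    return np.sqrt((cv ** 2).mean())
```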

  2. Headspace gas chromatographic method for the measurement of difluoroethane in blood.

    PubMed

    Broussard, L A; Broussard, A; Pittman, T; Lafferty, D; Presley, L

    2001-01-01

    To develop a gas chromatographic assay for the analysis of difluoroethane, a volatile substance, in blood and to determine assay characteristics including linearity, limit of quantitation, precision, and specificity. Setting: referral toxicology laboratory. Difluoroethane, a colorless, odorless, highly flammable gas used as a refrigerant blend component and aerosol propellant, may be abused via inhalation. A headspace gas chromatographic procedure for the identification and quantitation of difluoroethane in blood is presented. A methanolic stock standard prepared from pure gaseous difluoroethane was used to prepare whole blood calibrators. Quantitation of difluoroethane was performed using a six-point calibration curve and an internal standard of 1-propanol. The assay is linear from 0 to 115 mg/L including a low calibrator at 4 mg/L, the limit of quantitation. Within-run coefficients of variation at mean concentrations of 13.8 mg/L and 38.5 mg/L were 5.8% and 6.8%, respectively. Between-run coefficients of variation at mean concentrations of 15.9 mg/L and 45.7 mg/L were 13.4% and 9.8%, respectively. Several volatile substances were tested as potential interfering compounds, with propane having a retention time identical to that of difluoroethane. This method requires minimal sample preparation, is rapid and reproducible, can be modified for the quantitation of other volatiles, and could be automated using an automatic sampler/injector system.

  3. Improving membrane based multiplex immunoassays for semi-quantitative detection of multiple cytokines in a single sample

    PubMed Central

    2014-01-01

    Background Inflammatory mediators can serve as biomarkers for the monitoring of the disease progression or prognosis in many conditions. In the present study we introduce an adaptation of a membrane-based technique in which the level of up to 40 cytokines and chemokines can be determined in both human and rodent blood in a semi-quantitative way. The planar assay was modified using the LI-COR® detection system (fluorescence based) rather than chemiluminescence, and semi-quantitative outcomes were achieved by normalizing the outcomes using the automated exposure settings of the Odyssey readout device. The results were compared to the gold standard assay, namely ELISA. Results The improved planar assay allowed the detection of a considerably higher number of analytes (n = 30 and n = 5 for fluorescent and chemiluminescent detection, respectively). The improved planar method showed high sensitivity up to 17 pg/ml and a linear correlation of the normalized fluorescence intensity with the results from the ELISA (r = 0.91). Conclusions The results show that the membrane-based technique is a semi-quantitative assay that correlates satisfactorily to the gold standard when enhanced by the use of fluorescence and subsequent semi-quantitative analysis. This promising technique can be used to investigate inflammatory profiles in multiple conditions, particularly in studies with constraints in sample sizes and/or budget. PMID:25022797

  4. High-Precision Pinpointing of Luminescent Targets in Encoder-Assisted Scanning Microscopy Allowing High-Speed Quantitative Analysis.

    PubMed

    Zheng, Xianlin; Lu, Yiqing; Zhao, Jiangbo; Zhang, Yuhai; Ren, Wei; Liu, Deming; Lu, Jie; Piper, James A; Leif, Robert C; Liu, Xiaogang; Jin, Dayong

    2016-01-19

    Compared with routine microscopy imaging of a few analytes at a time, rapid scanning through the whole sample area of a microscope slide to locate every single target object offers many advantages in terms of simplicity, speed, throughput, and potential for robust quantitative analysis. Existing techniques that accommodate solid-phase samples incorporating individual micrometer-sized targets generally rely on digital microscopy and image analysis, with intrinsically low throughput and reliability. Here, we report an advanced on-the-fly stage scanning method to achieve high-precision target location across the whole slide. By integrating X- and Y-axis linear encoders to a motorized stage as the virtual "grids" that provide real-time positional references, we demonstrate an orthogonal scanning automated microscopy (OSAM) technique which can search a coverslip area of 50 × 24 mm² in just 5.3 min and locate individual 15 μm lanthanide luminescent microspheres with standard deviations of 1.38 and 1.75 μm in X and Y directions. Alongside implementation of an autofocus unit that compensates the tilt of a slide in the Z-axis in real time, we increase the luminescence detection efficiency by 35% with an improved coefficient of variation. We demonstrate the capability of advanced OSAM for robust quantification of luminescence intensities and lifetimes for a variety of micrometer-scale luminescent targets, specifically single down-shifting and upconversion microspheres, crystalline microplates, and color-barcoded microrods, as well as quantitative suspension array assays of biotinylated-DNA functionalized upconversion nanoparticles.

  5. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C-means (FCM) clustering and the wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by incorporating metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the positions of stent struts were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new, fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm, integrated in a GUI, represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
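    For reference, the FCM core is compact enough to sketch in numpy; this is a generic fuzzy C-means on a 1D intensity vector, not the paper's wavelet-augmented membership function:

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means on a 1D feature vector (e.g. OCT intensities).
    Returns cluster centers and the (c x N) membership matrix, which could
    be used, in the paper's spirit, to separate lumen from vessel wall."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        new_u = d ** (-2.0 / (m - 1))        # standard FCM membership update
        new_u /= new_u.sum(axis=0)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u
```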

  6. Feasibility of Developing a Protocol for Automated Protist Analysis

    DTIC Science & Technology

    2010-03-01

    Acquisition Directorate, Research & Development Center, Report No. CG-D-02-11, March 2010. Available through the National Technical Information Service, Springfield, VA 22161.

  7. Phasegram Analysis of Vocal Fold Vibration Documented With Laryngeal High-speed Video Endoscopy.

    PubMed

    Herbst, Christian T; Unger, Jakob; Herzel, Hanspeter; Švec, Jan G; Lohscheller, Jörg

    2016-11-01

    In a recent publication, the phasegram, a bifurcation diagram over time, was introduced as an intuitive visualization tool for assessing the vibratory states of oscillating systems. Here, this nonlinear dynamics approach is augmented with quantitative analysis parameters and applied to clinical laryngeal high-speed video (HSV) endoscopic recordings of healthy and pathological phonations. HSV data from a total of 73 females diagnosed as healthy (n = 42), with functional dysphonia (n = 15), or with unilateral vocal fold paralysis (n = 16) were quantitatively analyzed. Glottal area waveforms (GAW) and left and right hemi-GAWs (hGAW) were extracted from the HSV recordings. Based on Poincaré sections through phase space-embedded signals, two novel quantitative parameters were computed: the phasegram entropy (PE) and the phasegram complexity estimate (PCE), inspired by signal entropy and correlation dimension computation, respectively. Both PE and PCE assumed higher average values (suggesting more irregular vibrations) for the pathological as compared with the healthy participants, thus significantly discriminating the healthy group from the paralysis group (P = 0.02 for both PE and PCE). Comparisons of individual PE or PCE data for the left and the right hGAW within each subject resulted in asymmetry measures for the regularity of vocal fold vibration. The PCE-based asymmetry measure revealed significant differences between the healthy group and the paralysis group (P = 0.03). Quantitative phasegram analysis of GAW and hGAW data is a promising tool for the automated processing of HSV data in research and in clinical practice. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
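    A rough analogue of the phasegram-entropy idea can be sketched with a delay embedding and a Poincaré section at the signal mean; the parameters and the entropy estimator below are illustrative, not the authors' definitions:

```python
import numpy as np

def poincare_entropy(x, delay=10, bins=32):
    """Delay-embed a glottal area waveform in 2D, take a Poincare section
    where the trajectory crosses its mean going upward, and compute the
    Shannon entropy of the crossing values; periodic vibration concentrates
    the crossings (low entropy), irregular vibration spreads them out."""
    a, b = x[:-delay], x[delay:]             # 2D delay embedding
    up = (a[:-1] < a.mean()) & (a[1:] >= a.mean())
    section = b[1:][up]                      # embedded coordinate at crossings
    if section.size < 2:
        return 0.0
    p, _ = np.histogram(section, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())
```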

  8. Good practices for quantitative bias analysis.

    PubMed

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by the biases. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
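    One of the simplest quantitative bias analyses, back-correcting a 2×2 table for nondifferential exposure misclassification given assumed sensitivity and specificity, fits in a few lines; all counts and bias parameters below are hypothetical:

```python
def corrected_counts(exposed, unexposed, se, sp):
    """Invert the misclassification model: observed exposed = Se*E + (1-Sp)*(N-E),
    so the true exposed count is E = (observed - (1-Sp)*N) / (Se + Sp - 1)."""
    total = exposed + unexposed
    true_exposed = (exposed - (1 - sp) * total) / (se + sp - 1)
    return true_exposed, total - true_exposed

a_obs, b_obs = 45, 155      # cases: exposed / unexposed (observed)
c_obs, d_obs = 25, 175      # controls: exposed / unexposed (observed)
a, b = corrected_counts(a_obs, b_obs, se=0.85, sp=0.95)
c, d = corrected_counts(c_obs, d_obs, se=0.85, sp=0.95)
or_obs = (a_obs * d_obs) / (b_obs * c_obs)
or_corr = (a * d) / (b * c)
print(f"observed OR={or_obs:.2f}, bias-corrected OR={or_corr:.2f}")
```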

  9. Noninvasive Dry Eye Assessment Using High-Technology Ophthalmic Examination Devices.

    PubMed

    Yamaguchi, Masahiko; Sakane, Yuri; Kamao, Tomoyuki; Zheng, Xiaodong; Goto, Tomoko; Shiraishi, Atsushi; Ohashi, Yuichi

    2016-11-01

    Recently, the number of dry eye cases has dramatically increased. Thus, it is important that easy screening, exact diagnoses, and suitable treatments be available. We developed 3 original and noninvasive assessments for this disorder. First, a DR-1 dry eye monitor was used to determine the tear meniscus height quantitatively by capturing a tear meniscus digital image that was analyzed by Meniscus Processor software. The DR-1 meniscus height value significantly correlated with the fluorescein meniscus height (r = 0.06, Bland-Altman analysis). At a cutoff value of 0.22 mm, sensitivity of the dry eye diagnosis was 84.1% with 90.9% specificity. Second, the Tear Stability Analysis System was used to quantitatively measure tear film stability using a topographic modeling system corneal shape analysis device. Tear film stability was objectively and quantitatively evaluated every second during sustained eye openings. The Tear Stability Analysis System is currently installed in an RT-7000 autorefractometer and topographer to automate the diagnosis of dry eye. Third, the Ocular Surface Thermographer uses ophthalmic thermography for diagnosis. The decrease in ocular surface temperature in dry eyes was significantly greater than that in normal eyes (P < 0.001) at 10 seconds after eye opening. Decreased corneal temperature correlated significantly with the tear film breakup time (r = 0.572; P < 0.001). When changes in the ocular surface temperature of the cornea were used as indicators for dry eye, sensitivity was 0.83 and specificity was 0.80 after 10 seconds. This article describes the details and potential of these 3 noninvasive dry eye assessment systems.
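
    For readers who want to reproduce the cutoff-based diagnostic statistics reported above, here is a minimal sketch of computing sensitivity and specificity for a "meniscus height below cutoff" rule; the heights and diagnoses are toy values, not DR-1 data.

```python
import numpy as np

def sens_spec(heights_mm, is_dry_eye, cutoff=0.22):
    """Sensitivity/specificity of a 'tear meniscus height below cutoff'
    rule for dry eye; cutoff in mm, as in the DR-1 example above."""
    pred_dry = heights_mm < cutoff
    sens = np.sum(pred_dry & is_dry_eye) / np.sum(is_dry_eye)
    spec = np.sum(~pred_dry & ~is_dry_eye) / np.sum(~is_dry_eye)
    return sens, spec

# Toy data: meniscus heights (mm) and clinical diagnoses
h = np.array([0.12, 0.18, 0.25, 0.30, 0.15, 0.28])
dry = np.array([True, True, False, False, True, False])
print(sens_spec(h, dry))  # perfectly separated toy data: (1.0, 1.0)
```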

  10. Quantitation of heavy ion damage to the mammalian brain - Some preliminary findings

    NASA Technical Reports Server (NTRS)

    Cox, A. B.; Kraft, L. M.

    1984-01-01

    For several years, studies have been conducted regarding the late effects of particulate radiations in mammalian tissues, with emphasis on the brains of rodents and lagomorphs. Recently, it has become feasible to quantify pathological damage and morpho-physiologic alterations accurately in large numbers of histological specimens. New investigative procedures make use of computer-assisted automated image analysis systems. Details of the methodology employed are discussed along with the results obtained. High linear energy transfer (LET) radiations appear to cause earlier and more dramatic shrinkage of olfactory glomeruli in exposed rabbit brains than comparable doses of Co-60 gamma photons.

  11. Estimating ankle rotational constraints from anatomic structure

    NASA Astrophysics Data System (ADS)

    Baker, H. H.; Bruckner, Janice S.; Langdon, John H.

    1992-09-01

    Three-dimensional biomedical data obtained through tomography provide exceptional views of biological anatomy. While visualization is one of the primary purposes for obtaining these data, other more quantitative and analytic uses are possible. These include modeling of tissue properties and interrelationships, simulation of physical processes, interactive surgical investigation, and analysis of kinematics and dynamics. As an application of our research in modeling tissue structure and function, we have been developing interactive and automated tools for studying joint geometry and kinematics. We focus here on discriminating morphological variations in the foot and determining their implications for both hominid bipedal evolution and physical therapy treatment of foot disorders.

  12. Objective comparison of particle tracking methods.

    PubMed

    Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R; Godinez, William J; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E G; Jaldén, Joakim; Blau, Helen M; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P; Dan, Han-Wei; Tsai, Yuh-Show; Ortiz de Solórzano, Carlos; Olivo-Marin, Jean-Christophe; Meijering, Erik

    2014-03-01

    Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.
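
    For orientation, the simplest baseline in this family of methods is greedy nearest-neighbour frame-to-frame linking; the sketch below is a generic illustration of the linking task, not any competing team's method, and the distance gate is a hypothetical parameter.

```python
import numpy as np

def link_frames(prev_pts, next_pts, max_dist=5.0):
    """Greedy nearest-neighbour linking of detections between two frames,
    the simplest baseline among particle-tracking approaches."""
    links, taken = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        if taken:
            d[list(taken)] = np.inf  # each detection may be claimed once
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
            taken.add(j)
    return links

prev_pts = np.array([[0.0, 0.0], [10.0, 10.0]])
next_pts = np.array([[1.0, 0.5], [10.5, 9.0]])
print(link_frames(prev_pts, next_pts))  # [(0, 0), (1, 1)]
```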

  13. Hepatic fat quantification using automated six-point Dixon: Comparison with conventional chemical shift based sequences and computed tomography.

    PubMed

    Shimizu, Kie; Namimoto, Tomohiro; Nakagawa, Masataka; Morita, Kosuke; Oda, Seitaro; Nakaura, Takeshi; Utsunomiya, Daisuke; Yamashita, Yasuyuki

    To compare automated six-point Dixon (6-p-Dixon) MRI with dual-echo chemical-shift imaging (CSI) and CT for hepatic fat fraction quantification in phantom and clinical studies. Phantoms and fifty-nine patients were examined with both MRI and CT for quantitative fat measurements. In the phantom study, linear regression between fat concentration and 6-p-Dixon showed good agreement. In the clinical study, linear regression between 6-p-Dixon and dual-echo CSI showed good agreement. CT attenuation values were strongly correlated with 6-p-Dixon (R² = 0.852; P < 0.001) and dual-echo CSI (R² = 0.812; P < 0.001). Both automated 6-p-Dixon and dual-echo CSI correlated accurately with the CT attenuation value of liver parenchyma. 6-p-Dixon has the potential for automated hepatic fat quantification. Copyright © 2017 Elsevier Inc. All rights reserved.
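
    The underlying arithmetic is compact. Below is a sketch of voxel-wise fat-fraction computation for both approaches, assuming separated fat/water images from a Dixon reconstruction and in-phase/opposed-phase images from dual-echo CSI; the function names and epsilon guard are illustrative.

```python
import numpy as np

def dixon_fat_fraction(fat, water, eps=1e-6):
    """Voxel-wise fat fraction from separated fat/water images, as
    produced by a multi-point Dixon reconstruction: FF = F / (F + W)."""
    return fat / (fat + water + eps)

def dual_echo_fat_fraction(in_phase, opposed_phase, eps=1e-6):
    """Fat fraction from dual-echo chemical-shift imaging:
    FF = (IP - OP) / (2 * IP), with IP = W + F and OP = W - F."""
    return (in_phase - opposed_phase) / (2 * in_phase + eps)

# Hypothetical single-voxel signal magnitudes (both give FF = 0.20)
print(dixon_fat_fraction(np.array(120.0), np.array(480.0)))
print(dual_echo_fat_fraction(np.array(600.0), np.array(360.0)))
```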

  14. Comparison of Two Commercial Automated Nucleic Acid Extraction and Integrated Quantitation Real-Time PCR Platforms for the Detection of Cytomegalovirus in Plasma

    PubMed Central

    Tsai, Huey-Pin; Tsai, You-Yuan; Lin, I-Ting; Kuo, Pin-Hwa; Chen, Tsai-Yun; Chang, Kung-Chao; Wang, Jen-Ren

    2016-01-01

    Quantitation of cytomegalovirus (CMV) viral load in transplant patients has become standard practice for monitoring the response to antiviral therapy. The cut-off values of CMV viral load assays for preemptive therapy differ because of the various assay designs employed. To establish a sensitive and reliable diagnostic assay for preemptive therapy of CMV infection, two commercial automated platforms, the m2000sp extraction system integrated with the Abbott RealTime (m2000rt) and the Roche COBAS AmpliPrep extraction system integrated with the COBAS TaqMan (CAP/CTM), were evaluated using WHO international CMV standards and 110 plasma specimens from transplant patients. The performance characteristics, correlation, and workflow of the two platforms were investigated. The Abbott RealTime assay correlated well with the Roche CAP/CTM assay (R² = 0.9379, P < 0.01). The Abbott RealTime assay exhibited higher sensitivity for the detection of CMV viral load, and viral load values measured with the Abbott RealTime assay were on average 0.76 log10 IU/mL higher than those measured with the Roche CAP/CTM assay (P < 0.0001). In a workflow analysis of a small batch run at one time, the Roche CAP/CTM platform had a shorter hands-on time than the Abbott RealTime platform. In conclusion, these two assays can provide reliable data for different purposes in a clinical virology laboratory setting. PMID:27494707
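
    A minimal sketch of how a between-platform offset and correlation of the kind reported above are typically derived from paired measurements; the viral loads below are hypothetical, not the study data.

```python
import numpy as np

# Hypothetical paired viral loads (IU/mL) from the two platforms
abbott = np.array([1.2e3, 5.4e3, 2.0e4, 8.8e4, 3.1e5])
roche = np.array([2.4e2, 1.1e3, 3.5e3, 1.6e4, 5.0e4])

log_a, log_r = np.log10(abbott), np.log10(roche)
offset = np.mean(log_a - log_r)            # mean log10 between-assay offset
r2 = np.corrcoef(log_a, log_r)[0, 1] ** 2  # agreement on the log scale
print(f"mean offset {offset:.2f} log10 IU/mL, R^2 = {r2:.4f}")
```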

  15. An automated voxelized dosimetry tool for radionuclide therapy based on serial quantitative SPECT/CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Price A.; Kron, Tomas; Beauregard, Jean-Mathieu

    2013-11-15

    Purpose: To create an accurate map of the distribution of radiation dose deposition in healthy and target tissues during radionuclide therapy. Methods: Serial quantitative SPECT/CT images were acquired at 4, 24, and 72 h for 28 ¹⁷⁷Lu-octreotate peptide receptor radionuclide therapy (PRRT) administrations in 17 patients with advanced neuroendocrine tumors. Deformable image registration was combined with an in-house programming algorithm to interpolate pharmacokinetic uptake and clearance at a voxel level. The resultant cumulated activity image series comprises values representing the total number of decays within each voxel's volume. For PRRT, cumulated activity was translated to absorbed dose based on Monte Carlo-determined voxel S-values at a combination of long and short ranges. These dosimetric image sets were compared for mean radiation absorbed dose to at-risk organs using a conventional MIRD protocol (OLINDA 1.1). Results: Absorbed dose values to solid organs (liver, kidneys, and spleen) were within 10% using both techniques. Dose estimates to marrow were greater using the voxelized protocol, attributed to the software incorporating the crossfire effect from nearby tumor volumes. Conclusions: The technique presented offers an efficient, automated tool for PRRT dosimetry based on serial post-therapy imaging. Following retrospective analysis, this method of high-resolution dosimetry may allow physicians to prescribe activity based on required dose to tumor volume or radiation limits to healthy tissue in individual patients.
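
    A schematic sketch of the voxelized pipeline described above: trapezoidal time-integration of serial activity maps with a physical-decay tail, followed by convolution with a voxel S-value kernel. The kernel values, volume size, and single-exponential tail are simplifying assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import convolve

def cumulated_activity(activity_series, times_h, half_life_h=159.5):
    """Voxel-wise time-integrated activity (Bq*h): trapezoid over the
    imaging time points plus a physical-decay tail after the last scan
    (177Lu half-life ~159.5 h; full pharmacokinetic fitting omitted)."""
    integral = np.trapz(activity_series, x=times_h, axis=0)
    tail = activity_series[-1] * half_life_h / np.log(2)
    return integral + tail

# Hypothetical registered activity maps at 4, 24, and 72 h (Bq per voxel)
acts = np.random.rand(3, 8, 8, 8) * 1e4
a_cum = cumulated_activity(acts, times_h=np.array([4.0, 24.0, 72.0]))

# Hypothetical 3x3x3 voxel S-value kernel (Gy per Bq*h), center-weighted
s_kernel = np.full((3, 3, 3), 1e-9)
s_kernel[1, 1, 1] = 1e-7
dose = convolve(a_cum, s_kernel, mode="constant")
print(dose.shape, f"mean voxel dose {dose.mean():.3g} Gy")
```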

  16. Automated Tracking of Animal Posture and Movement during Exploration and Sensory Orientation Behaviors

    PubMed Central

    Gomez-Marin, Alex; Partoune, Nicolas; Stephens, Greg J.; Louis, Matthieu

    2012-01-01

    Background The nervous functions of an organism are primarily reflected in the behavior it is capable of. Measuring behavior quantitatively, at high resolution and in an automated fashion, provides valuable information about the underlying neural circuit computation. Accordingly, computer-vision applications for animal tracking are becoming a key complementary toolkit to genetic, molecular and electrophysiological characterization in systems neuroscience. Methodology/Principal Findings We present the Sensory Orientation Software (SOS) to measure behavior and infer sensory experience correlates. SOS is a simple and versatile system to track body posture and motion of single animals in two-dimensional environments. In the presence of a sensory landscape, tracking the trajectory of the animal's sensors and its postural evolution provides a quantitative framework to study sensorimotor integration. To illustrate the utility of SOS, we examine the orientation behavior of fruit fly larvae in response to odor, temperature and light gradients. We show that SOS is suitable to carry out high-resolution behavioral tracking for a wide range of organisms including flatworms, fishes and mice. Conclusions/Significance Our work contributes to the growing repertoire of behavioral analysis tools for collecting rich and fine-grained data to draw and test hypotheses about the functioning of the nervous system. By providing open access to our code and documenting the software design, we aim to encourage the adaptation of SOS by a wide community of non-specialists to their particular model organism and questions of interest. PMID:22912674
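
    As an illustration of the kind of posture measurement such trackers perform, here is a minimal moment-based centroid and body-axis estimate from a binary animal mask; this is a generic sketch, not the SOS implementation.

```python
import numpy as np

def posture_from_mask(mask):
    """Centroid and body-axis orientation of a single animal from a
    binary mask, via image moments (a common basis for posture tracking)."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # central second moments give the principal (body) axis angle
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

mask = np.zeros((20, 20), dtype=bool)
mask[9:11, 3:17] = True  # horizontal "body"
centroid, angle = posture_from_mask(mask)
print(centroid, np.degrees(angle))  # centroid ~(9.5, 9.5), angle ~0 deg
```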

  17. A Depression Prevention Intervention for Adolescents in the Emergency Department.

    PubMed

    Ranney, Megan L; Freeman, Joshua R; Connell, Gerianne; Spirito, Anthony; Boyer, Edward; Walton, Maureen; Guthrie, Kate Morrow; Cunningham, Rebecca M

    2016-10-01

    To evaluate the acceptability and feasibility of a theoretically based two-part (brief in-person + 8-week automated text message) depression prevention program, "intervention for DepressiOn and Violence prevention in the Emergency department" (iDOVE), for high-risk adolescents. English-speaking emergency department (ED) patients (age 13-17, any chief complaint) were sequentially approached for consent on a convenience sample of shifts and screened for inclusion based on current depressive symptoms and past-year violence. After consent, baseline assessments were obtained; all participants were enrolled in the two-part intervention (brief in-ED + 8-week two-way text messaging). At 8 weeks, quantitative and qualitative follow-up assessments were obtained. Measures included feasibility, acceptability, and preliminary data on efficacy. Qualitative data were transcribed verbatim, double coded, and interpreted using thematic analysis. Quantitative results were analyzed descriptively and with paired t tests. As planned, 16 participants (eight of each gender) were recruited (75% of those eligible; 66% nonwhite, 63% low income, mean age 15.4). The intervention had high feasibility and acceptability: 93.8% completed the 8-week follow-up; 80% of daily text messages received responses; 31% of participants requested ≥1 "on-demand" text message. The in-person and text message portions were rated as good/excellent by 87%. Qualitatively, participants articulated: (1) iDOVE was welcome and helpful, if unexpected in the ED; (2) the daily text message mood assessment was "most important"; (3) content was "uplifting"; and (4) balancing intervention "relatability" and automation was challenging. Participants' mean ΔBDI-2 (Beck Depression Inventory) from baseline to 8-week follow-up was -4.9 (p = .02). This automated preventive text message intervention is acceptable and feasible. Qualitative data emphasize the importance of creating positive, relevant, and interactive digital health tools for adolescents. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  18. Changes to Serum Sample Tube and Processing Methodology Does Not Cause Inter-Individual Variation in Automated Whole Serum N-Glycan Profiling in Health and Disease

    PubMed Central

    Shubhakar, Archana; Kalla, Rahul; Nimmo, Elaine R.; Fernandes, Daryl L.; Satsangi, Jack; Spencer, Daniel I. R.

    2015-01-01

    Introduction Serum N-glycans have been identified as putative biomarkers for numerous diseases. The impact of different serum sample tubes and processing methods on N-glycan analysis has received relatively little attention. This study aimed to determine the effect of different sample tubes and processing methods on the whole serum N-glycan profile in both health and disease. A secondary objective was to describe a robot-automated N-glycan release, labeling and cleanup process for use in a biomarker discovery system. Methods 25 patients with active and quiescent inflammatory bowel disease and controls had three different serum sample tubes taken at the same draw. Two different processing methods were used for three types of tube (with and without gel-separation medium). Samples were randomised and processed in a blinded fashion. Whole serum N-glycan release, 2-aminobenzamide labeling and cleanup was automated using a Hamilton Microlab STARlet Liquid Handling robot. Samples were analysed using a hydrophilic interaction liquid chromatography/ethylene bridged hybrid (BEH) column on an ultra-high performance liquid chromatography instrument. Data were analysed quantitatively by pairwise correlation and hierarchical clustering using the area under each chromatogram peak. Qualitatively, a blinded assessor attempted to match chromatograms to each individual. Results There was small intra-individual variation in serum N-glycan profiles from samples collected using different sample processing methods. Intra-individual correlation coefficients were between 0.99 and 1. Unsupervised hierarchical clustering and principal coordinate analyses accurately matched samples from the same individual. Qualitative analysis demonstrated good chromatogram overlay and a blinded assessor was able to accurately match individuals based on chromatogram profile, regardless of disease status. Conclusions The three different serum sample tubes processed using the described methods cause minimal intra-individual variation in the whole serum N-glycan profile when processed using an automated workstream. This has important implications for N-glycan biomarker discovery studies using different serum processing standard operating procedures. PMID:25831126

  19. Changes to serum sample tube and processing methodology does not cause Intra-Individual [corrected] variation in automated whole serum N-glycan profiling in health and disease.

    PubMed

    Ventham, Nicholas T; Gardner, Richard A; Kennedy, Nicholas A; Shubhakar, Archana; Kalla, Rahul; Nimmo, Elaine R; Fernandes, Daryl L; Satsangi, Jack; Spencer, Daniel I R

    2015-01-01

    Serum N-glycans have been identified as putative biomarkers for numerous diseases. The impact of different serum sample tubes and processing methods on N-glycan analysis has received relatively little attention. This study aimed to determine the effect of different sample tubes and processing methods on the whole serum N-glycan profile in both health and disease. A secondary objective was to describe a robot-automated N-glycan release, labeling and cleanup process for use in a biomarker discovery system. 25 patients with active and quiescent inflammatory bowel disease and controls had three different serum sample tubes taken at the same draw. Two different processing methods were used for three types of tube (with and without gel-separation medium). Samples were randomised and processed in a blinded fashion. Whole serum N-glycan release, 2-aminobenzamide labeling and cleanup was automated using a Hamilton Microlab STARlet Liquid Handling robot. Samples were analysed using a hydrophilic interaction liquid chromatography/ethylene bridged hybrid (BEH) column on an ultra-high performance liquid chromatography instrument. Data were analysed quantitatively by pairwise correlation and hierarchical clustering using the area under each chromatogram peak. Qualitatively, a blinded assessor attempted to match chromatograms to each individual. There was small intra-individual variation in serum N-glycan profiles from samples collected using different sample processing methods. Intra-individual correlation coefficients were between 0.99 and 1. Unsupervised hierarchical clustering and principal coordinate analyses accurately matched samples from the same individual. Qualitative analysis demonstrated good chromatogram overlay and a blinded assessor was able to accurately match individuals based on chromatogram profile, regardless of disease status. The three different serum sample tubes processed using the described methods cause minimal intra-individual variation in the whole serum N-glycan profile when processed using an automated workstream. This has important implications for N-glycan biomarker discovery studies using different serum processing standard operating procedures.
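
    A minimal sketch of the quantitative analysis described in records 18 and 19: pairwise Pearson correlation and average-linkage hierarchical clustering over normalised peak-area vectors. The data are synthetic stand-ins for three tubes per subject, not the study's chromatograms.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Synthetic peak-area matrix: 5 subjects x 3 tubes, 12 peaks per sample
rng = np.random.default_rng(0)
subjects = rng.random((5, 12))
peaks = np.repeat(subjects, 3, axis=0) + 0.01 * rng.standard_normal((15, 12))
peaks /= peaks.sum(axis=1, keepdims=True)  # normalise areas per sample

# Pairwise Pearson correlation between all samples
corr = np.corrcoef(peaks)
print("min within-subject r:", min(corr[i, i + 1] for i in range(0, 15, 3)))

# Average-linkage clustering on correlation distance: replicates of one
# subject should group together regardless of tube type
z = linkage(pdist(peaks, metric="correlation"), method="average")
print(fcluster(z, t=5, criterion="maxclust"))
```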

  20. A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology.

    PubMed

    Al-Fahdawi, Shumoos; Qahwaji, Rami; Al-Waisy, Alaa S; Ipson, Stanley; Ferdousi, Maryam; Malik, Rayaz A; Brahma, Arun

    2018-07-01

    Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been performed manually by ophthalmologists using time-consuming and highly subjective semi-automatic tools that require operator interaction. We developed and applied a fully automated, real-time system, termed the Corneal Endothelium Analysis System (CEAS), for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. First, a Fast Fourier Transform (FFT) band-pass filter is applied to reduce noise and enhance image quality, making the cells more visible. Second, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images, based on a database of 40 corneal confocal endothelial cell images, in terms of segmentation accuracy and obtained clinical features. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). The Pearson correlation coefficient between automated and manual endothelial cell densities was 0.9 (p < 0.0001), and a Bland-Altman plot showed that 95% of the data lay between the 2SD agreement lines. We demonstrate the effectiveness and robustness of the CEAS system and the possibility of utilizing it in a real-world clinical setting to enable rapid diagnosis and patient follow-up, with an execution time of only 6 seconds per image. Copyright © 2018 Elsevier B.V. All rights reserved.
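
    A condensed sketch of the segmentation stages named above, using a difference-of-Gaussians filter as a stand-in for the FFT band-pass step and a marker-based watershed for cell delineation. All parameters (sigmas, minimum peak distance, pixel size) are illustrative assumptions, not the CEAS values.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import difference_of_gaussians, threshold_otsu
from skimage.segmentation import watershed

def segment_endothelium(image):
    """Band-pass filter (difference-of-Gaussians standing in for the FFT
    band-pass), then marker-based watershed to delineate cells."""
    bp = difference_of_gaussians(image, low_sigma=1, high_sigma=12)
    mask = bp > threshold_otsu(bp)
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=7, labels=mask)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)

def cell_density(labels, um_per_px=0.65):
    """Cells per mm^2 for a hypothetical isotropic pixel size."""
    area_mm2 = labels.size * (um_per_px / 1000.0) ** 2
    return labels.max() / area_mm2
```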
