Automated Morphological Analysis of Microglia After Stroke.
Heindl, Steffanie; Gesierich, Benno; Benakis, Corinne; Llovera, Gemma; Duering, Marco; Liesz, Arthur
2018-01-01
Microglia are the resident immune cells of the brain and react quickly to changes in their environment with transcriptional regulation and morphological changes. Brain tissue injury such as ischemic stroke induces a local inflammatory response encompassing microglial activation. The change in activation status of a microglia is reflected in its gradual morphological transformation from a highly ramified into a less ramified or amoeboid cell shape. For this reason, the morphological changes of microglia are widely utilized to quantify microglial activation and to study their involvement in virtually all brain diseases. However, the currently available methods, which are mainly based on manual rating of immunofluorescent microscopic images, are often inaccurate, rater biased, and highly time consuming. To address these issues, we created a fully automated image analysis tool, which enables the analysis of microglia morphology from a confocal Z-stack and provides up to 59 morphological features. We developed the algorithm on an exploratory dataset of microglial cells from a stroke mouse model and validated the findings on an independent dataset. In both datasets, we could demonstrate the ability of the algorithm to sensitively discriminate between the microglia morphology in the peri-infarct and the contralateral, unaffected cortex. Dimensionality reduction by principal component analysis allowed us to generate a highly sensitive compound score for microglial shape analysis. Finally, we tested for concordance of results between the novel automated analysis tool and the conventional manual analysis and found a high degree of correlation. In conclusion, our novel method for the fully automated analysis of microglia morphology shows excellent accuracy and time efficiency compared to traditional analysis methods. This tool, which we make openly available, could find application in studying microglia morphology using fluorescence imaging in a wide range of brain disease models.
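As an illustration of the dimensionality-reduction step described above, the following Python sketch derives a single compound score from a table of per-cell morphological features by projecting standardized features onto the first principal component. The feature matrix, cell count, and library choices are assumptions for demonstration, not the authors' implementation.

```python
# Hypothetical sketch: deriving a PCA-based compound score from per-cell
# morphological features, assuming one row per microglial cell.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder feature matrix: 200 cells x 59 morphological features.
features = rng.normal(size=(200, 59))

# Standardize features so each contributes comparably to the components.
scaled = StandardScaler().fit_transform(features)

# Project onto the first principal component and use it as a compound score.
pca = PCA(n_components=1)
compound_score = pca.fit_transform(scaled).ravel()

print("explained variance ratio:", pca.explained_variance_ratio_[0])
print("compound score for first 5 cells:", compound_score[:5])
```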
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Benz, Michaela; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2015-03-01
The morphological analysis of bone marrow smears is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using a bright-field microscope. This is a time-consuming, partly subjective and tedious process. Furthermore, repeated examinations of a slide yield intra- and inter-observer variances. For this reason, automation of morphological bone marrow analysis is being pursued. This analysis comprises several steps: image acquisition and smear detection, cell localization and segmentation, feature extraction and cell classification. The automated classification of bone marrow cells depends on the automated cell segmentation and the choice of adequate features extracted from different parts of the cell. In this work we focus on the evaluation of support vector machines (SVMs) and random forests (RFs) for the differentiation of bone marrow cells into 16 different classes, including immature and abnormal cell classes. Data sets of different segmentation quality are used to test the two approaches. Automated solutions for the morphological analysis of bone marrow smears could use such a classifier to pre-classify bone marrow cells and thereby shorten the examination duration.
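A minimal sketch of the kind of classifier comparison described above, using scikit-learn; the synthetic 16-class dataset and all parameter values are assumptions standing in for real bone marrow cell features, not the authors' setup.

```python
# Illustrative comparison of an SVM and a random forest on a synthetic
# 16-class problem standing in for bone marrow cell features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1600, n_features=40, n_informative=20,
                           n_classes=16, n_clusters_per_class=1, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
rf = RandomForestClassifier(n_estimators=300, random_state=0)

for name, clf in [("SVM", svm), ("Random forest", rf)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```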
Automated kidney morphology measurements from ultrasound images using texture and edge analysis
NASA Astrophysics Data System (ADS)
Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin
2016-04-01
In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution would help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework based on Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of cases.
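A hedged Python sketch of the texture-classification step: Haar-like features computed on small image patches feed a gradient boosting classifier that separates "kidney" from "background" patches. The synthetic patches, patch size, and feature types are assumptions for illustration, not the paper's configuration.

```python
# Haar-like features on small patches + gradient boosting (toy example).
import numpy as np
from skimage.feature import haar_like_feature
from skimage.transform import integral_image
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def patch_features(patch):
    """Compute a set of Haar-like features over a square patch."""
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                             feature_type=['type-2-x', 'type-2-y'])

# Brighter, smoother "kidney" patches versus darker, noisier "background" patches.
kidney = [rng.normal(0.6, 0.05, (8, 8)) for _ in range(60)]
background = [rng.normal(0.3, 0.15, (8, 8)) for _ in range(60)]

X = np.array([patch_features(p) for p in kidney + background])
y = np.array([1] * 60 + [0] * 60)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```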
Shibuta, Mayu; Tamura, Masato; Kanie, Kei; Yanagisawa, Masumi; Matsui, Hirofumi; Satoh, Taku; Takagi, Toshiyuki; Kanamori, Toshiyuki; Sugiura, Shinji; Kato, Ryuji
2018-06-09
Cellular morphology on and in a scaffold composed of extracellular matrix generally represents the cellular phenotype. Therefore, morphology-based cell separation is an attractive approach, because it can separate cells without staining of surface markers, in contrast to conventional cell separation methods (e.g., fluorescence-activated cell sorting and magnetic-activated cell sorting). In a previous study, we proposed a cloning technology using a photodegradable gelatin hydrogel to separate individual cells on and in hydrogels. To further expand the applicability of this photodegradable hydrogel culture platform, we here report an image-based cell separation system, the imaging cell picker, for morphology-based cell separation on a photodegradable hydrogel. The platform enables an automated workflow of image acquisition, image processing and morphology analysis, and collection of target cells. We demonstrate the performance of morphology-based cell separation through optimization of the critical parameters that determine the system's performance, namely (i) culture conditions, (ii) imaging conditions, and (iii) the image analysis scheme, in order to clone the cells of interest. Furthermore, we demonstrate morphology-based cloning of cancer cells from a cell mixture by automated hydrogel degradation with light irradiation followed by pipetting. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Nikolaisen, Julie; Nilsson, Linn I. H.; Pettersen, Ina K. N.; Willems, Peter H. G. M.; Lorens, James B.; Koopman, Werner J. H.; Tronstad, Karl J.
2014-01-01
Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions, and in (adaptation to) endogenous and exogenous stress. Mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. “mitochondrial dynamics”) are linked to cellular (patho)physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology, they are not applicable to thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human umbilical vein endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescent protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate that 3D imaging and quantification are crucial for proper understanding of mitochondrial shape and topology in non-flat cells. In summary, we here present an integrative method for unbiased 3D quantification of mitochondrial shape and network properties in mammalian cells. PMID:24988307
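A minimal Python sketch, on assumed synthetic data, of the kind of 3D quantification described above: a fluorescence stack is thresholded, connected components are labelled in 3D, and basic per-object descriptors are reported. It is not the authors' algorithm.

```python
# Minimal 3D quantification sketch on a synthetic stack: threshold a
# mitochondria-like fluorescence signal, label connected components in 3D,
# and report simple per-object descriptors (voxel count and bounding box).
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(0)
stack = rng.normal(0.2, 0.05, size=(30, 128, 128))   # background noise
stack[10:20, 40:90, 40:50] += 1.0                     # synthetic filamentous object
stack[5:9, 100:110, 100:110] += 1.0                   # synthetic globular object

binary = stack > filters.threshold_otsu(stack)
labels = measure.label(binary, connectivity=1)

for region in measure.regionprops(labels):
    print(f"object {region.label}: {region.area} voxels, bounding box {region.bbox}")
```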
[The actual possibilities of robotic microscopy in analysis automation and laboratory telemedicine].
Medovyĭ, V S; Piatnitskiĭ, A M; Sokolinskiĭ, B Z; Balugian, R Sh
2012-10-01
The article discusses the capabilities of automated microscopy complexes manufactured by Cellavision and MEKOS for performing medical analyses of blood films and other biomaterials. Joint operation of the complex and the physician, in a regimen of automated slide loading, screening, sampling and sorting of cell types with simple morphology, followed by visual sorting of the sub-sample with complex morphology, significantly increases the sensitivity of the method, decreases the physician's workload and improves working conditions. The associated information technologies, including virtual slides and laboratory telemedicine, make it possible to develop representative samples of rare cell types and pathologies, advancing both automation methods and medical research goals.
2011-01-01
Uncovering the mechanisms that regulate dendritic spine morphology has been limited, in part, by the lack of efficient and unbiased methods for analyzing spines. Here, we describe an automated 3D spine morphometry method and its application to spine remodeling in live neurons and spine abnormalities in a disease model. We anticipate that this approach will advance studies of synapse structure and function in brain development, plasticity, and disease. PMID:21982080
2018-01-01
ARL-TR-8270 ● JAN 2018 ● US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom. Reporting period: 1 October 2016–30 September 2017.
Larrabide, Ignacio; Cruz Villa-Uriol, Maria; Cárdenes, Rubén; Pozo, Jose Maria; Macho, Juan; San Roman, Luis; Blasco, Jordi; Vivas, Elio; Marzo, Alberto; Hose, D Rod; Frangi, Alejandro F
2011-05-01
Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection for intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, to automating the analysis, and hence to reducing the inherent intra- and interobserver variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy. This methodology is based on the analysis of the topology of the vasculature's skeleton and the subsequent application of concepts from deformable cylinders. These are expanded inside the parent vessel to identify different regions and discriminate the aneurysm sac from the parent vessel wall. The method renders as output the surface representation of the isolated aneurysm sac, which can then be quantified automatically. The proposed method provides the means for identifying the aneurysm neck in a deterministic way. The results obtained by the method were assessed in two ways: they were compared to manual measurements obtained by three independent clinicians as normally done during diagnosis, and to automated measurements from manually isolated aneurysms by three independent operators, nonclinicians, experts in vascular image analysis. All the measurements were obtained using in-house tools. The results were qualitatively and quantitatively compared for a set of saccular intracranial aneurysms (n = 26). Measurements performed on a synthetic phantom showed that the automated measurements obtained from manually isolated aneurysms were the most accurate. The differences between the measurements obtained by the clinicians and the manually isolated sacs were statistically significant (neck width: p < 0.001, sac height: p = 0.002). When comparing clinicians' measurements to automatically isolated sacs, only the differences for the neck width were significant (neck width: p < 0.001, sac height: p = 0.95). However, correlation and agreement were found between the measurements obtained from manually and automatically isolated aneurysms (neck width: p = 0.43, sac height: p = 0.95). The proposed method allows the automated isolation of intracranial aneurysms, eliminating the interobserver variability. On average, the computational cost of the automated method (2 min 36 s) was similar to the time required by a manual operator (measurement by clinicians: 2 min 51 s, manual isolation: 2 min 21 s), but eliminates human interaction. The automated measurements are independent of the viewing angle, eliminating any bias or difference between observer criteria. Finally, the qualitative assessment of the results showed acceptable agreement between manually and automatically isolated aneurysms.
Measurements of Cuspal Slope Inclination Angles in Palaeoanthropological Applications
NASA Astrophysics Data System (ADS)
Gaboutchian, A. V.; Knyaz, V. A.; Leybova, N. A.
2017-05-01
Tooth crown morphological features, studied in palaeoanthropology, provide valuable information about human evolution and the development of civilization. Tooth crown morphology represents biological and historical data of high taxonomical value, as it characterizes genetically conditioned tooth relief features that resist substantial change under environmental factors during an individual's lifetime. Palaeoanthropological studies are still based mainly on descriptive techniques and manual measurements of a limited number of morphological parameters. Feature evaluation and measurement result analysis are expert-based. The development of new methods and techniques in 3D imaging creates a basis for improved palaeoanthropological data processing, analysis and distribution. The goals of the presented research are to propose new features for automated odontometry and to explore their applicability to palaeoanthropological studies. A technique for automated measurement of the morphological tooth parameters needed for anthropological study is developed. It is based on an original photogrammetric system used as a device for acquiring 3D tooth models, and on a set of algorithms for estimating the given tooth parameters.
An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques
2018-01-09
ARL-TR-8272 ● JAN 2018 ● US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.
Discrimination between smiling faces: Human observers vs. automated face analysis.
Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo
2018-05-11
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.
Spectral Analysis of Breast Cancer on Tissue Microarrays: Seeing Beyond Morphology
2005-04-01
Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng
2015-12-01
We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacteria cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod-shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.
Automated Image Registration Using Morphological Region of Interest Feature Extraction
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2005-01-01
With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
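To make the chip-selection metric above concrete, here is a short Python sketch of the spectral information divergence between two pixel spectra; the example spectra and the normalization details are illustrative assumptions, not the paper's code.

```python
# Spectral information divergence (SID) between two spectra, each normalized
# so it can be treated as a probability distribution over bands.
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    # Symmetric sum of the two relative entropies.
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

spectrum_a = np.array([0.10, 0.35, 0.30, 0.25])
spectrum_b = np.array([0.12, 0.33, 0.28, 0.27])
print("SID:", spectral_information_divergence(spectrum_a, spectrum_b))
```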
Härmä, Ville; Schukov, Hannu-Pekka; Happonen, Antti; Ahonen, Ilmari; Virtanen, Johannes; Siitari, Harri; Åkerfelt, Malin; Lötjönen, Jyrki; Nees, Matthias
2014-01-01
Glandular epithelial cells differentiate into complex multicellular or acinar structures, when embedded in three-dimensional (3D) extracellular matrix. The spectrum of different multicellular morphologies formed in 3D is a sensitive indicator for the differentiation potential of normal, non-transformed cells compared to different stages of malignant progression. In addition, single cells or cell aggregates may actively invade the matrix, utilizing epithelial, mesenchymal or mixed modes of motility. Dynamic phenotypic changes involved in 3D tumor cell invasion are sensitive to specific small-molecule inhibitors that target the actin cytoskeleton. We have used a panel of inhibitors to demonstrate the power of automated image analysis as a phenotypic or morphometric readout in cell-based assays. We introduce a streamlined stand-alone software solution that supports large-scale high-content screens, based on complex and organotypic cultures. AMIDA (Automated Morphometric Image Data Analysis) allows quantitative measurements of large numbers of images and structures, with a multitude of different spheroid shapes, sizes, and textures. AMIDA supports an automated workflow, and can be combined with quality control and statistical tools for data interpretation and visualization. We have used a representative panel of 12 prostate and breast cancer lines that display a broad spectrum of different spheroid morphologies and modes of invasion, challenged by a library of 19 direct or indirect modulators of the actin cytoskeleton which induce systematic changes in spheroid morphology and differentiation versus invasion. These results were independently validated by 2D proliferation, apoptosis and cell motility assays. We identified three drugs that primarily attenuated the invasion and formation of invasive processes in 3D, without affecting proliferation or apoptosis. Two of these compounds block Rac signalling, one affects cellular cAMP/cGMP accumulation. Our approach supports the growing needs for user-friendly, straightforward solutions that facilitate large-scale, cell-based 3D assays in basic research, drug discovery, and target validation. PMID:24810913
Merouane, Amine; Rey-Villamizar, Nicolas; Lu, Yanbin; Liadi, Ivan; Romain, Gabrielle; Lu, Jennifer; Singh, Harjeet; Cooper, Laurence J N; Varadarajan, Navin; Roysam, Badrinath
2015-10-01
There is a need for effective automated methods for profiling dynamic cell-cell interactions with single-cell resolution from high-throughput time-lapse imaging data, especially, the interactions between immune effector cells and tumor cells in adoptive immunotherapy. Fluorescently labeled human T cells, natural killer cells (NK), and various target cells (NALM6, K562, EL4) were co-incubated on polydimethylsiloxane arrays of sub-nanoliter wells (nanowells), and imaged using multi-channel time-lapse microscopy. The proposed cell segmentation and tracking algorithms account for cell variability and exploit the nanowell confinement property to increase the yield of correctly analyzed nanowells from 45% (existing algorithms) to 98% for wells containing one effector and a single target, enabling automated quantification of cell locations, morphologies, movements, interactions, and deaths without the need for manual proofreading. Automated analysis of recordings from 12 different experiments demonstrated automated nanowell delineation accuracy >99%, automated cell segmentation accuracy >95%, and automated cell tracking accuracy of 90%, with default parameters, despite variations in illumination, staining, imaging noise, cell morphology, and cell clustering. An example analysis revealed that NK cells efficiently discriminate between live and dead targets by altering the duration of conjugation. The data also demonstrated that cytotoxic cells display higher motility than non-killers, both before and during contact. broysam@central.uh.edu or nvaradar@central.uh.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Collins, Adam; Huett, Alan
2018-05-15
We present a high-content screen (HCS) for the simultaneous analysis of multiple phenotypes in HeLa cells expressing an autophagy reporter (mcherry-LC3) and one of 224 GFP-fused proteins from the Crohn's Disease (CD)-associated bacterium, Adherent Invasive E. coli (AIEC) strain LF82. Using automated confocal microscopy and image analysis (CellProfiler), we localised GFP fusions within cells, and monitored their effects upon autophagy (an important innate cellular defence mechanism), cellular and nuclear morphology, and the actin cytoskeleton. This data will provide an atlas for the localisation of 224 AIEC proteins within human cells, as well as a dataset to analyse their effects upon many aspects of host cell morphology. We also describe an open-source, automated, image-analysis workflow to identify bacterial effectors and their roles via the perturbations induced in reporter cell lines when candidate effectors are exogenously expressed.
Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Spore, N.; Brodie, K. L.; Swann, C.
2014-12-01
Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
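The following Python sketch illustrates, on a synthetic cross-shore elevation profile, the kind of curvature-based dune toe identification described above; the profile shape and the curvature criterion are assumptions for demonstration, not the authors' tools.

```python
# Locate a dune toe on a single cross-shore profile as the point of maximum
# concave-up curvature seaward of the highest point. Profile is synthetic.
import numpy as np

x = np.linspace(0, 100, 201)                              # cross-shore distance (m)
z = 0.01 * x + 3.0 / (1 + np.exp(-(x - 60) / 4.0))        # gentle beach + dune face

dz = np.gradient(z, x)
curvature = np.gradient(dz, x) / (1 + dz**2) ** 1.5

crest_idx = int(np.argmax(z))                             # highest point (landward end here)
toe_idx = int(np.argmax(curvature[:crest_idx]))           # strongest concave-up bend seaward of it
print(f"dune crest at x={x[crest_idx]:.1f} m, dune toe at x={x[toe_idx]:.1f} m")
```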
NASA Astrophysics Data System (ADS)
Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar
2018-04-01
Manual segmentation and analysis of lesions in medical images is time consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining a median filter, k-means clustering, Sobel edge detection and morphological operations. Median filtering is an essential pre-processing step and is used to remove impulsive noise from the acquired brain images, followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical comparison between lesions segmented by the automated approach and expert delineation, using ANOVA and the correlation coefficient, yielded high values of 0.986 and 1, respectively. The experimental results obtained are discussed in light of some recently reported studies.
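A minimal Python sketch of such a hybrid pipeline on a placeholder grayscale image; the test image, cluster count, structuring element and size threshold are all assumptions, not the article's parameters.

```python
# Median filter -> k-means intensity clustering -> Sobel edges -> morphological
# clean-up of a candidate mask, on a stand-in grayscale image.
import numpy as np
from scipy import ndimage
from skimage import data, filters, morphology
from sklearn.cluster import KMeans

image = data.camera().astype(float) / 255.0           # placeholder image
denoised = ndimage.median_filter(image, size=3)

# k-means on pixel intensities (k=3); take the brightest cluster as the candidate region.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    denoised.reshape(-1, 1)).reshape(denoised.shape)
brightest = np.argmax([denoised[labels == k].mean() for k in range(3)])
candidate = labels == brightest

edges = filters.sobel(denoised)                        # edge map for boundary inspection
cleaned = morphology.remove_small_objects(
    morphology.binary_closing(candidate, morphology.disk(2)), min_size=64)

print("candidate pixels:", int(cleaned.sum()),
      "mean edge strength inside mask:", float(edges[cleaned].mean()))
```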
An Automated Solar Synoptic Analysis Software System
NASA Astrophysics Data System (ADS)
Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.
2012-12-01
We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, the three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. In an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygrams and magnetograms as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images from the global H-alpha network and SDO AIA 193 are used for morphological identification, and SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are to be presented using available output results. ASSA will be deployed at the Korean Space Weather Center and serve its customers in an operational status by the end of 2012.
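As a rough illustration of the thresholding-plus-morphology step mentioned above, the Python sketch below flags strong-field patches in a synthetic magnetogram, bridges nearby patches with a morphological closing, and keeps the large connected regions; all thresholds and sizes are made-up values, not ASSA's.

```python
# Toy active-region detection: threshold |B|, close gaps, keep large regions.
import numpy as np
from skimage import measure, morphology

rng = np.random.default_rng(0)
magnetogram = rng.normal(0.0, 20.0, size=(512, 512))      # quiet-Sun noise (gauss)
magnetogram[200:240, 300:360] += 800.0                     # synthetic strong-field patch
magnetogram[205:235, 150:180] -= 700.0                     # opposite-polarity patch

strong = np.abs(magnetogram) > 150.0                       # field-strength threshold
merged = morphology.binary_closing(strong, morphology.disk(5))
labels = measure.label(merged)

for region in measure.regionprops(labels):
    if region.area > 500:                                  # drop small residual patches
        print(f"candidate region at {region.centroid}, area {region.area} px")
```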
Takemura, Hiroyuki; Ai, Tomohiko; Kimura, Konobu; Nagasaka, Kaori; Takahashi, Toshihiro; Tsuchiya, Koji; Yang, Haeun; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Tabe, Yoko; Ohsaka, Akimichi
2018-01-01
The XN series automated hematology analyzer has been equipped with a body fluid (BF) mode to count and differentiate leukocytes in BF samples including cerebrospinal fluid (CSF). However, its diagnostic accuracy is not reliable for CSF samples with low cell concentrations at the border between normal and pathologic levels. To overcome this limitation, a new flow cytometry-based technology, termed the "high sensitive analysis (hsA) mode," has been developed. In addition, the XN series analyzer has been equipped with the automated digital cell imaging analyzer DI-60 to classify cell morphology, including normal leukocyte differentials and detection of abnormal malignant cells. Using various BF samples, we evaluated the performance of the XN-hsA mode and DI-60 compared to manual microscopic examination. The reproducibility of the XN-hsA mode was good in samples with low cell densities (coefficient of variation: 7.8% for 6 cells/μL). The linearity of the XN-hsA mode was established up to 938 cells/μL. The cell number obtained using the XN-hsA mode correlated highly with the corresponding microscopic examination. Good correlation was also observed between the DI-60 analyses and manual microscopic classification for all leukocyte types, except monocytes. In conclusion, the combined use of cell counting with the XN-hsA mode and automated morphological analyses using the DI-60 mode is potentially useful for the automated analysis of BF cells.
Muto, Satoru; Sugiura, Syo-Ichiro; Nakajima, Akiko; Horiuchi, Akira; Inoue, Masahiro; Saito, Keisuke; Isotani, Shuji; Yamaguchi, Raizo; Ide, Hisamitsu; Horie, Shigeo
2014-10-01
We aimed to identify patients with a chief complaint of hematuria who could safely avoid unnecessary radiation and instrumentation in the diagnosis of bladder cancer (BC), using automated urine flow cytometry to detect isomorphic red blood cells (RBCs) in urine. We acquired urine samples from 134 patients over the age of 35 years with a chief complaint of hematuria and a positive urine occult blood test or microhematuria. The data were analyzed using the UF-1000i (®) (Sysmex Co., Ltd., Kobe, Japan) automated urine flow cytometer to determine RBC morphology, which was classified as isomorphic or dysmorphic. The patients were divided into two groups (BC versus non-BC) for statistical analysis. Multivariate logistic regression analysis was used to determine the predictive value of flow cytometry versus urine cytology, the bladder tumor antigen test, occult blood in urine test, and microhematuria test. BC was confirmed in 26 of 134 patients (19.4 %). The area under the curve for RBC count using the automated urine flow cytometer was 0.94, representing the highest reference value obtained in this study. Isomorphic RBCs were detected in all patients in the BC group. On multivariate logistic regression analysis, only isomorphic RBC morphology was significantly predictive for BC (p < 0.001). Analytical parameters such as sensitivity, specificity, positive predictive value, and negative predictive value of isomorphic RBCs in urine were 100.0, 91.7, 74.3, and 100.0 %, respectively. Detection of urinary isomorphic RBCs using automated urine flow cytometry is a reliable method in the diagnosis of BC with hematuria.
Quantitative analysis of cardiovascular MR images.
van der Geest, R J; de Roos, A; van der Wall, E E; Reiber, J H
1997-06-01
The diagnosis of cardiovascular disease requires the precise assessment of both morphology and function. Nearly all aspects of cardiovascular function and flow can be quantified nowadays with fast magnetic resonance (MR) imaging techniques. Conventional and breath-hold cine MR imaging allow the precise and highly reproducible assessment of global and regional left ventricular function. During the same examination, velocity encoded cine (VEC) MR imaging provides measurements of blood flow in the heart and great vessels. Quantitative image analysis often still relies on manual tracing of contours in the images. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with the manual and tedious processing of the images. Recent progress in MR imaging of the coronary arteries and myocardial perfusion imaging with contrast media, along with the further development of faster imaging sequences, suggest that MR imaging could evolve into a single technique ('one stop shop') for the evaluation of many aspects of heart disease. As a result, it is very likely that the need for automated image segmentation and analysis software algorithms will further increase. In this paper the developments directed towards the automated image analysis and semi-automated contour detection for cardiovascular MR imaging are presented.
Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T
2017-09-01
Currently, the delicate process of selecting sperm cells to be used for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on quantitative phase maps acquired through interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps, taking into consideration both the 3D morphology and the contents of the cell, and to acquire features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF. © 2017 International Society for Advancement of Cytometry.
Mathematical morphology for automated analysis of remotely sensed objects in radar images
NASA Technical Reports Server (NTRS)
Daida, Jason M.; Vesecky, John F.
1991-01-01
A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis has resulted in a low (2.6 percent) misclassification error rate for a one-look simulation. Other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion has resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images: in this case, a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation seem well suited for unsupervised geophysical analysis.
Kozlowski, Cleopatra; Jeet, Surinder; Beyer, Joseph; Guerrero, Steve; Lesch, Justin; Wang, Xiaoting; DeVoss, Jason; Diehl, Lauri
2013-01-01
The DSS (dextran sulfate sodium) model of colitis is a mouse model of inflammatory bowel disease. Microscopic symptoms include loss of crypt cells from the gut lining and infiltration of inflammatory cells into the colon. An experienced pathologist requires several hours per study to score histological changes in selected regions of the mouse gut. In order to increase the efficiency of scoring, Definiens Developer software was used to devise an entirely automated method to quantify histological changes in the whole H&E slide. When the algorithm was applied to slides from historical drug-discovery studies, automated scores classified 88% of drug candidates in the same way as pathologists’ scores. In addition, another automated image analysis method was developed to quantify colon-infiltrating macrophages, neutrophils, B cells and T cells in immunohistochemical stains of serial sections of the H&E slides. The timing of neutrophil and macrophage infiltration had the highest correlation to pathological changes, whereas T and B cell infiltration occurred later. Thus, automated image analysis enables quantitative comparisons between tissue morphology changes and cell-infiltration dynamics. PMID:23580198
Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.
2015-01-01
Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over and under segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques by improving the robustness and accuracy by: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609
NASA Astrophysics Data System (ADS)
Dong, Di; Li, Ziwei; Liu, Zhaoqin; Yu, Yang
2014-03-01
This paper focuses on automated extraction and monitoring of coastlines by remote sensing techniques using multi-temporal Landsat imagery along Caofeidian, China. Caofeidian, as one of the active economic regions in China, has experienced dramatic change due to intensified human activities, such as land reclamation. These processes have caused morphological changes of the Caofeidian shoreline. In this study, shoreline extraction and change analysis are investigated. An algorithm based on image texture and mathematical morphology is proposed to automate coastline extraction. We tested this approach and found that it is capable of extracting coastlines from TM and ETM+ images with little human modification. The detected coastline vectors are then imported into ArcGIS software, and the Digital Shoreline Analysis System (DSAS) is used to calculate the change rates (the end point rate and linear regression rate). The results show that in some parts of the study area, remarkable coastline changes are observed, especially in the accretion rate. The abnormal accretion is mostly attributed to the large-scale land reclamation during 2003 and 2004 in Caofeidian. We conclude that various construction projects, especially the land reclamation project, have changed the Caofeidian shoreline greatly, far beyond its natural rate of change.
Sun, Wanxin; Chang, Shi; Tai, Dean C S; Tan, Nancy; Xiao, Guangfa; Tang, Huihuan; Yu, Hanry
2008-01-01
Liver fibrosis is associated with an abnormal increase in extracellular matrix in chronic liver diseases. Quantitative characterization of fibrillar collagen in intact tissue is essential for both fibrosis studies and clinical applications. Commonly used methods, histological staining followed by either semiquantitative or computerized image analysis, have limited sensitivity and accuracy, and suffer from operator-dependent variation. The fibrillar collagen in the sinusoids of normal livers can be observed through second-harmonic generation (SHG) microscopy. The two-photon excited fluorescence (TPEF) images, recorded simultaneously with SHG, clearly reveal the hepatocyte morphology. We have systematically optimized the parameters for quantitative SHG/TPEF imaging of liver tissue and developed fully automated image analysis algorithms to extract information on collagen changes and cell necrosis. Subtle changes in the distribution and amount of collagen and in cell morphology are quantitatively characterized in SHG/TPEF images. Compared to traditional staining, such as Masson's trichrome and Sirius red, SHG/TPEF is a sensitive quantitative tool for automated collagen characterization in liver tissue. Our system allows for enhanced detection and quantification of sinusoidal collagen fibers in fibrosis research and clinical diagnostics.
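As a toy illustration of one building block of such automated collagen quantification, the Python sketch below thresholds a synthetic SHG channel and reports a collagen area fraction; the data, the Otsu threshold and the area-fraction readout are assumptions, not the authors' pipeline.

```python
# Estimate a collagen area fraction from a synthetic SHG channel by thresholding.
import numpy as np
from skimage import filters

rng = np.random.default_rng(0)
shg = rng.gamma(shape=1.5, scale=10.0, size=(512, 512))    # synthetic SHG intensities
shg[100:120, :] += 200.0                                   # synthetic collagen fibers

collagen_mask = shg > filters.threshold_otsu(shg)
area_fraction = collagen_mask.mean() * 100.0
print(f"collagen area fraction: {area_fraction:.2f}%")
```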
T-wave morphology can distinguish healthy controls from LQTS patients.
Immanuel, S A; Sadrieh, A; Baumert, M; Couderc, J P; Zareba, W; Hill, A P; Vandenberg, J I
2016-09-01
Long QT syndrome (LQTS) is an inherited disorder associated with prolongation of the QT/QTc interval on the surface electrocardiogram (ECG) and a markedly increased risk of sudden cardiac death due to cardiac arrhythmias. Up to 25% of genotype-positive LQTS patients have QT/QTc intervals in the normal range. These patients are, however, still at increased risk of life-threatening events compared to their genotype-negative siblings. Previous studies have shown that analysis of T-wave morphology may enhance discrimination between control and LQTS patients. In this study we tested the hypothesis that automated analysis of T-wave morphology from Holter ECG recordings could distinguish between control and LQTS patients with QTc values in the range 400-450 ms. Holter ECGs were obtained from the Telemetric and Holter ECG Warehouse (THEW) database. Frequency binned averaged ECG waveforms were obtained and extracted T-waves were fitted with a combination of 3 sigmoid functions (upslope, downslope and switch) or two 9th order polynomial functions (upslope and downslope). Neural network classifiers, based on parameters obtained from the sigmoid or polynomial fits to the 1 Hz and 1.3 Hz ECG waveforms, were able to achieve up to 92% discrimination between control and LQTS patients and 88% discrimination between LQTS1 and LQTS2 patients. When we analysed a subgroup of subjects with normal QT intervals (400-450 ms, 67 controls and 61 LQTS), T-wave morphology based parameters enabled 90% discrimination between control and LQTS patients, compared to only 71% when the groups were classified based on QTc alone. In summary, our Holter ECG analysis algorithms demonstrate the feasibility of using automated analysis of T-wave morphology to distinguish LQTS patients, even those with normal QTc, from healthy controls.
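The sketch below illustrates, in Python, fitting a synthetic T-wave with a combination of sigmoid functions in the spirit of the upslope/downslope description above; the exact model form, parameter values and noise level are assumptions rather than the study's implementation.

```python
# Fit a synthetic T-wave with a rising-minus-falling pair of sigmoids.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, t0, k):
    return a / (1.0 + np.exp(-(t - t0) / k))

def t_wave_model(t, a1, t1, k1, a2, t2, k2):
    # upslope modelled by a rising sigmoid, downslope by a falling one
    return sigmoid(t, a1, t1, k1) - sigmoid(t, a2, t2, k2)

t = np.linspace(0.0, 0.4, 400)                      # seconds relative to T-wave onset
true = t_wave_model(t, 0.3, 0.12, 0.02, 0.3, 0.25, 0.03)
noisy = true + np.random.default_rng(0).normal(0, 0.005, t.size)

p0 = [0.3, 0.1, 0.02, 0.3, 0.25, 0.03]              # rough initial guess
params, _ = curve_fit(t_wave_model, t, noisy, p0=p0)
print("fitted parameters:", np.round(params, 3))
```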
NASA Astrophysics Data System (ADS)
Wollman, Adam J. M.; Miller, Helen; Foster, Simon; Leake, Mark C.
2016-10-01
Staphylococcus aureus is an important pathogen, giving rise to antimicrobial resistance in cell strains such as Methicillin Resistant S. aureus (MRSA). Here we report an image analysis framework for automated detection and image segmentation of cells in S. aureus cell clusters, and explicit identification of their cell division planes. We use a new combination of several existing analytical tools of image analysis to detect cellular and subcellular morphological features relevant to cell division from millisecond time scale sampled images of live pathogens at a detection precision of single molecules. We demonstrate this approach using a fluorescent reporter GFP fused to the protein EzrA that localises to a mid-cell plane during division and is involved in regulation of cell size and division. This image analysis framework presents a valuable platform from which to study candidate new antimicrobials which target the cell division machinery, but may also have more general application in detecting morphologically complex structures of fluorescently labelled proteins present in clusters of other types of cells.
Grimes, Carolyn N; Fry, Michael M
2014-12-01
This study sought to develop customized morphology flagging thresholds for canine erythrocyte volume and hemoglobin concentration [Hgb] on the ADVIA 120 hematology analyzer; compare automated morphology flagging with results of microscopic blood smear evaluation; and examine effects of customized thresholds on morphology flagging results. Customized thresholds were determined using data from 52 clinically healthy dogs. Blood smear evaluation and automated morphology flagging results were correlated with mean cell volume (MCV) and cellular hemoglobin concentration mean (CHCM) in 26 dogs. Customized thresholds were applied retroactively to complete blood (cell) count (CBC) data from 5 groups of dogs, including a reference sample group, clinical cases, and animals with experimentally induced iron deficiency anemia. Automated morphology flagging correlated more highly with MCV or CHCM than did blood smear evaluation; correlation with MCV was highest using customized thresholds. Customized morphology flagging thresholds resulted in more sensitive detection of microcytosis, macrocytosis, and hypochromasia than default thresholds.
NASA Astrophysics Data System (ADS)
Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna
2018-06-01
Moisture content is an important feature of fruits and vegetables. As about 80% of an apple's content is water, decreasing the moisture content degrades the quality of apples (Golden Delicious). The computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel was used to perform automated classification. For evaluating the quality of wax-coated apples during storage in vivo, our proposed method opens up the possibility of fully automated quantitative analysis based on the morphological features of apples. Our results demonstrate that the analysis of the computational and texture features of OCT images may be a good non-destructive method for assessing the quality of apples.
An automated approach for extracting Barrier Island morphology from digital elevation models
NASA Astrophysics Data System (ADS)
Wernette, Phillipe; Houser, Chris; Bishop, Michael P.
2016-06-01
The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting dune toe, dune crest, and dune heel based upon relative relief is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach out-performed contemporary approaches and represents a fast objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel are spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
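The following Python sketch illustrates the multi-scale relative relief computation described above on a synthetic elevation grid; the window sizes and the synthetic DEM are assumptions for demonstration only.

```python
# Multi-scale relative relief (RR): for each grid cell,
# RR = (z - local min) / (local max - local min) in a moving window,
# averaged over several window sizes.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dem = np.cumsum(rng.normal(0, 0.05, size=(200, 200)), axis=1)  # synthetic surface

def relative_relief(z, size):
    zmin = ndimage.minimum_filter(z, size=size)
    zmax = ndimage.maximum_filter(z, size=size)
    span = np.where(zmax - zmin > 0, zmax - zmin, 1.0)          # avoid division by zero
    return (z - zmin) / span

scales = [5, 11, 21]                                            # window sizes in grid cells
rr_mean = np.mean([relative_relief(dem, s) for s in scales], axis=0)
print("mean RR over grid:", float(rr_mean.mean()))
```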
Bargetzi, M J
2006-01-01
The first step in evaluating leukopenia is the analysis of the different leukocyte subpopulations. The automated total blood cell count gives a first impression of which leukocyte subtype is decreased and of whether erythrocytes and/or platelets are involved. Microscopic interpretation of the blood smear verifies the automated differential and allows a statement on the morphology of the individual cells. The differential diagnosis of a decreased leukocyte subpopulation is vast, and in many cases leukopenia is only an epiphenomenon of a systemic disease. Therefore, therapy is always directed towards the underlying disorder.
NASA Astrophysics Data System (ADS)
Hoffmann, Sebastian; Shutler, Jamie D.; Lobbes, Marc; Burgeth, Bernhard; Meyer-Bäse, Anke
2013-12-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) represents an established method for the detection and diagnosis of breast lesions. While mass-like enhancing lesions can be easily categorized according to the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon, a majority of diagnostically challenging lesions, the so-called non-mass-like enhancing lesions, remain both qualitatively and quantitatively difficult to analyze. Thus, the evaluation of kinetic and/or morphological characteristics of non-masses represents a challenging task for automated analysis and is of crucial importance for advancing current computer-aided diagnosis (CAD) systems. Compared to the well-characterized mass-enhancing lesions, non-masses have ill-defined, blurred tumor borders and a kinetic behavior that is not easily generalizable and thus not discriminative for malignant and benign non-masses. To overcome these difficulties and pave the way for novel CAD systems for non-masses, we evaluate several kinetic and morphological descriptors separately, as well as a novel technique, the Zernike velocity moments, to capture the joint spatio-temporal behavior of these lesions, and additionally consider the impact of non-rigid motion compensation on a correct diagnosis.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for the identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell density of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a high correlation was found.
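A hypothetical Python sketch of the Fourier-based idea described above: the radius of the dominant peak in the power spectrum of a periodic, endothelium-like pattern gives a mean cell spacing, which is converted to a cell density under a hexagonal-mosaic assumption. The synthetic pattern, pixel scale and packing model are all assumptions, not the thesis software.

```python
# Estimate cell spacing from the dominant spatial frequency in a 2D FFT,
# then convert spacing to density assuming a roughly hexagonal mosaic.
import numpy as np

pixel_size_um = 1.0                                   # assumed image scale
n = 256
yy, xx = np.mgrid[0:n, 0:n]
spacing = 20.0                                        # synthetic cell spacing (px)
pattern = np.cos(2 * np.pi * xx / spacing) + np.cos(2 * np.pi * yy / spacing)

power = np.abs(np.fft.fftshift(np.fft.fft2(pattern))) ** 2
power[n // 2, n // 2] = 0.0                           # suppress the DC component

ky, kx = np.unravel_index(np.argmax(power), power.shape)
radius = np.hypot(ky - n // 2, kx - n // 2)           # dominant frequency (cycles/image)
estimated_spacing = n / radius * pixel_size_um        # micrometres per cell period

# Hexagonal-packing approximation: density = 2 / (sqrt(3) * spacing^2), in cells/mm^2.
density_per_mm2 = 2.0 / (np.sqrt(3) * (estimated_spacing * 1e-3) ** 2)
print(f"estimated spacing: {estimated_spacing:.1f} um, "
      f"density: {density_per_mm2:.0f} cells/mm^2")
```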
Frère, L; Paul-Pont, I; Moreau, J; Soudant, P; Lambert, C; Huvet, A; Rinnert, E
2016-12-15
Every step of microplastic analysis (collection, extraction and characterization) is time-consuming, representing an obstacle to the implementation of large-scale monitoring. This study proposes a semi-automated Raman micro-spectroscopy method coupled to static image analysis that allows the screening of large quantities of microplastics in a time-effective way with minimal machine operator intervention. The method was validated using 103 particles collected at the sea surface spiked with 7 standard plastics: morphological and chemical characterization of the particles was performed in less than 3 h. The method was then applied to a larger environmental sample (n=962 particles). The identification rate was 75% and significantly decreased as a function of particle size. Microplastics represented 71% of the identified particles and significant size differences were observed: polystyrene was mainly found in the 2-5mm range (59%), polyethylene in the 1-2mm range (40%) and polypropylene in the 0.335-1mm range (42%). Copyright © 2016 Elsevier Ltd. All rights reserved.
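The chemical identification step in such a workflow can be sketched as simple library matching: each baseline-corrected particle spectrum is correlated against reference spectra of candidate polymers and assigned to the best match above a cutoff. The minimal Python sketch below is illustrative only and is not the software used in the study; the library dictionary and the 0.7 correlation threshold are hypothetical.

```python
import numpy as np

def identify_polymer(spectrum, library, threshold=0.7):
    """Assign a particle spectrum to the best-matching reference polymer.

    `spectrum` and each entry of `library` (name -> reference spectrum)
    are 1-D intensity arrays sampled on the same wavenumber axis.
    Returns (name, score), or (None, score) if no match clears `threshold`.
    """
    best_name, best_score = None, -1.0
    for name, ref in library.items():
        score = np.corrcoef(spectrum, ref)[0, 1]   # Pearson correlation
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```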
Little, Daniel; Luft, Christin; Mosaku, Olukunbi; Lorvellec, Maëlle; Yao, Zhi; Paillusson, Sébastien; Kriston-Vizi, Janos; Gandhi, Sonia; Abramov, Andrey Y; Ketteler, Robin; Devine, Michael J; Gissen, Paul
2018-06-13
Mitochondrial dysfunction is implicated in many neurodegenerative diseases including Parkinson's disease (PD). Induced pluripotent stem cells (iPSCs) provide a unique cell model for studying neurological diseases. We have established a high-content assay that can simultaneously measure mitochondrial function, morphology and cell viability in iPSC-derived dopaminergic neurons. iPSCs from PD patients with mutations in SNCA and unaffected controls were differentiated into dopaminergic neurons, seeded in 384-well plates and stained with the mitochondrial membrane potential dependent dye TMRM, alongside Hoechst-33342 and Calcein-AM. Images were acquired using an automated confocal screening microscope and single cells were analysed using automated image analysis software. PD neurons displayed reduced mitochondrial membrane potential and altered mitochondrial morphology compared to control neurons. This assay demonstrates that high content screening techniques can be applied to the analysis of mitochondria in iPSC-derived neurons. This technique could form part of a drug discovery platform to test potential new therapeutics for PD and other neurodegenerative diseases.
Neuronal Morphology goes Digital: A Research Hub for Cellular and System Neuroscience
Parekh, Ruchi; Ascoli, Giorgio A.
2013-01-01
The importance of neuronal morphology in brain function has been recognized for over a century. The broad applicability of “digital reconstructions” of neuron morphology across neuroscience sub-disciplines has stimulated the rapid development of numerous synergistic tools for data acquisition, anatomical analysis, three-dimensional rendering, electrophysiological simulation, growth models, and data sharing. Here we discuss the processes of histological labeling, microscopic imaging, and semi-automated tracing. Moreover, we provide an annotated compilation of currently available resources in this rich research “ecosystem” as a central reference for experimental and computational neuroscience. PMID:23522039
Shingrani, Rahul; Krenz, Gary; Molthen, Robert
2010-01-01
With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.
Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.
Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike
2010-01-01
An increasingly common component of studies in synthetic and systems biology is analysis of the dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images, namely segmentation and lineage reconstruction, to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
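The frame-to-frame linking idea can be illustrated with a simple cost-based assignment between consecutive frames. The authors use a neighborhood-based scoring method, so the Hungarian-assignment version below is only a sketch of the general approach; the cell dictionaries, weights and cost cutoff are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(cells_t, cells_t1, w_area=0.5, max_cost=50.0):
    """Link segmented cells between frame t and frame t+1.

    Each cell is a dict with 'centroid' (y, x) and 'area'.  The cost mixes
    centroid distance and relative area change; assigned pairs whose cost
    exceeds `max_cost` are treated as unlinked (division, death, new cells).
    Returns a list of (index_in_t, index_in_t1) links.
    """
    cost = np.zeros((len(cells_t), len(cells_t1)))
    for i, a in enumerate(cells_t):
        for j, b in enumerate(cells_t1):
            dy, dx = np.subtract(a['centroid'], b['centroid'])
            d_area = abs(a['area'] - b['area']) / max(a['area'], b['area'])
            cost[i, j] = np.hypot(dy, dx) + w_area * 100.0 * d_area
    rows, cols = linear_sum_assignment(cost)    # Hungarian assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]
```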
Automated detection of exudates for diabetic retinopathy screening
NASA Astrophysics Data System (ADS)
Fleming, Alan D.; Philip, Sam; Goatman, Keith A.; Williams, Graeme J.; Olson, John A.; Sharp, Peter F.
2007-12-01
Automated image analysis is being widely sought to reduce the workload required for grading images resulting from diabetic retinopathy screening programmes. The recognition of exudates in retinal images is an important goal for automated analysis since these are one of the indicators that the disease has progressed to a stage requiring referral to an ophthalmologist. Candidate exudates were detected using a multi-scale morphological process. Based on local properties, the likelihoods of a candidate being a member of classes exudate, drusen or background were determined. This leads to a likelihood of the image containing exudates which can be thresholded to create a binary decision. Compared to a clinical reference standard, images containing exudates were detected with sensitivity 95.0% and specificity 84.6% in a test set of 13 219 images of which 300 contained exudates. Depending on requirements, this method could form part of an automated system to detect images showing either any diabetic retinopathy or referable diabetic retinopathy.
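The candidate-detection stage can be sketched as a multi-scale white top-hat on the green channel (where exudates appear bright) followed by thresholding; the subsequent exudate/drusen/background classification is omitted. This is a hedged scikit-image sketch rather than the published system; the disk radii and threshold are assumptions, and an 8-bit image is assumed.

```python
import numpy as np
from skimage import io, morphology

def exudate_candidates(fundus_path, radii=(5, 10, 20), thresh=0.05):
    """Detect bright exudate candidates in a fundus photograph.

    A white top-hat with disks of several radii highlights bright structures
    smaller than each disk; the per-scale responses are combined with a
    pixel-wise maximum and thresholded into a binary candidate mask.
    """
    rgb = io.imread(fundus_path)
    green = rgb[..., 1].astype(float) / 255.0    # exudates contrast best in green
    response = np.zeros_like(green)
    for r in radii:
        tophat = morphology.white_tophat(green, morphology.disk(r))
        response = np.maximum(response, tophat)
    return response > thresh                     # binary candidate mask
```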
An Automated Energy Detection Algorithm Based on Morphological Filter...
US Army Research Laboratory, ARL-TR-8271
2018-01-01
...statistical moments of order 2, 3, and 4. The probability density function (PDF) of the vibrational time series of a good bearing has a Gaussian...
Automated Shape Analysis of Teeth from the Archaeological Site of Nerqin Naver
NASA Astrophysics Data System (ADS)
Gaboutchian, A.; Simonyan, H.; Knyaz, V.; Petrosyan, G.; Ter-Vardanyan, L.; Leybova, N. A.; Apresyan, S. V.
2018-05-01
Traditional odontometry currently relies on a limited number of measurements of the tooth crown, typically estimating the mesio-distal and vestibular-oral diameters from a single measurement of the maximal dimension. Taking into consideration the complexity, irregularity and variability of tooth shapes, we find such measurements insufficient for interpreting tooth morphology. We therefore propose an odontotomic approach that obtains data from a series of parallel, equally spaced sections, combined with automated detection of the landmarks used for measurements. These sections allow locating the maximal dimensions of teeth as well as collecting data from all parts of the tooth to describe it morphologically. Referring odontometric data to the whole tooth yields more precise and objective records, which have proved informative in a series of dental and anthropological studies. Growing interest in, and implementation of, digital technology in odontometric studies calls for work that supports the transition to new methods. The current research aims to compare traditional and automated digital odontometry, and also examines how different approaches to odontotomy (the number and direction of sections) influence the odontometric data. This tooth shape analysis is applied to samples from the archaeological site of Nerqin Naver to contribute to complex odontological studies of the Early Bronze burials.
Automated image analysis of placental villi and syncytial knots in histological sections.
Kidron, Debora; Vainer, Ifat; Fisher, Yael; Sharony, Reuven
2017-05-01
Delayed villous maturation and accelerated villous maturation diagnosed in histologic sections are morphologic manifestations of pathophysiological conditions. The inter-observer agreement among pathologists in assessing these conditions is moderate at best. We investigated whether automated image analysis of placental villi and syncytial knots could improve standardization in diagnosing these conditions. Placentas of antepartum fetal death at or near term were diagnosed as normal, delayed or accelerated villous maturation. Histologic sections of 5 cases per group were photographed at ×10 magnification. Automated image analysis of villi and syncytial knots was performed, using ImageJ public domain software. Analysis of hundreds of histologic images was carried out within minutes on a personal computer, using macro commands. Compared to normal placentas, villi from delayed maturation were larger and fewer, with fewer and smaller syncytial knots. Villi from accelerated maturation were smaller. The data were further analyzed according to horizontal placental zones and groups of villous size. Normal placentas can be discriminated from placentas of delayed or accelerated villous maturation using automated image analysis. Automated image analysis of villi and syncytial knots is not equivalent to interpretation by the human eye. Each method has advantages and disadvantages in assessing the 2-dimensional histologic sections representing the complex, 3-dimensional villous tree. Image analysis of placentas provides quantitative data that might help in standardizing and grading of placentas for diagnostic and research purposes. Copyright © 2017 Elsevier Ltd. All rights reserved.
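A minimal Python analogue of the villus-measurement step (the study itself used ImageJ macro commands) is sketched below: threshold the grey-level image to a tissue mask, label the villous profiles and collect their areas. Knot detection and the zone-wise analysis are omitted; the threshold direction and size filter are assumptions.

```python
from skimage import io, color, filters, measure, morphology

def measure_villi(image_path, min_area_px=200):
    """Segment and measure villous profiles in a histology photomicrograph.

    Tissue is darker than the background in a brightfield image, so pixels
    below an Otsu threshold on the grey-level image are taken as tissue.
    Returns the number of villous profiles and a list of their areas (pixels).
    """
    grey = color.rgb2gray(io.imread(image_path))
    tissue = grey < filters.threshold_otsu(grey)
    tissue = morphology.remove_small_objects(tissue, min_size=min_area_px)
    labels = measure.label(tissue)
    areas = [region.area for region in measure.regionprops(labels)]
    return len(areas), areas
```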
Throughout development neurons undergo a number of morphological changes including neurite outgrowth from the cell body. Exposure to neurotoxic chemicals that interfere with this process may result in permanent deficits in nervous system function. Traditionally, rodent primary ne...
During development neurons undergo a number of morphological changes including neurite outgrowth from the cell body. Exposure to neurotoxicants that interfere with this process may result in permanent deficits in nervous system function. While many studies have used rodent primary...
Automated quantitative 3D analysis of aorta size, morphology, and mural calcification distributions.
Kurugol, Sila; Come, Carolyn E; Diaz, Alejandro A; Ross, James C; Kinney, Greg L; Black-Shinn, Jennifer L; Hokanson, John E; Budoff, Matthew J; Washko, George R; San Jose Estepar, Raul
2015-09-01
The purpose of this work is to develop a fully automated pipeline to compute aorta morphology and calcification measures in large cohorts of CT scans that can be used to investigate the potential of these measures as imaging biomarkers of cardiovascular disease. The first step of the automated pipeline is aorta segmentation. The algorithm the authors propose first detects an initial aorta boundary by exploiting cross-sectional circularity of aorta in axial slices and aortic arch in reformatted oblique slices. This boundary is then refined by a 3D level-set segmentation that evolves the boundary to the location of nearby edges. The authors then detect the aortic calcifications with thresholding and filter out the false positive regions due to nearby high intensity structures based on their anatomical location. The authors extract the centerline and oblique cross sections of the segmented aortas and compute the aorta morphology and calcification measures of the first 2500 subjects from COPDGene study. These measures include volume and number of calcified plaques and measures of vessel morphology such as average cross-sectional area, tortuosity, and arch width. The authors computed the agreement between the algorithm and expert segmentations on 45 CT scans and obtained a closest point mean error of 0.62 ± 0.09 mm and a Dice coefficient of 0.92 ± 0.01. The calcification detection algorithm resulted in an improved true positive detection rate of 0.96 compared to previous work. The measurements of aorta size agreed with the measurements reported in previous work. The initial results showed associations of aorta morphology with calcification and with aging. These results may indicate aorta stiffening and unwrapping with calcification and aging. The authors have developed an objective tool to assess aorta morphology and aortic calcium plaques on CT scans that may be used to provide information about the presence of cardiovascular disease and its clinical impact in smokers.
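The calcification step of such a pipeline can be sketched as intensity thresholding inside the segmented aorta followed by connected-component analysis. The 130 HU threshold below follows common calcium-scoring practice rather than the paper's exact parameters, and the anatomical false-positive filtering is omitted; the inputs are assumed to be NumPy volumes in Hounsfield units.

```python
import numpy as np
from scipy import ndimage

def calcified_plaques(ct_hu, aorta_mask, hu_thresh=130, min_voxels=3,
                      voxel_volume_mm3=1.0):
    """Count and size calcified plaques inside a segmented aorta.

    `ct_hu` is the CT volume in Hounsfield units and `aorta_mask` a boolean
    volume of the segmented aorta.  Voxels above `hu_thresh` inside the mask
    are grouped into connected components; tiny components are discarded as
    noise.  Returns (number of plaques, total calcified volume in mm^3).
    """
    calcium = (ct_hu >= hu_thresh) & aorta_mask
    labels, n = ndimage.label(calcium)
    sizes = ndimage.sum(calcium, labels, index=np.arange(1, n + 1))
    keep = sizes >= min_voxels
    return int(keep.sum()), float(sizes[keep].sum() * voxel_volume_mm3)
```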
NASA Astrophysics Data System (ADS)
Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.
2016-01-01
Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate. A normal sperm concentration is 20-120 million/ml, whereas oligospermia patients have a sperm concentration of less than 20 million/ml. A sperm test is performed in the fertility laboratory to determine oligospermia by examining fresh sperm according to the 2010 WHO standards [9]. The sperm are viewed under a microscope using an improved Neubauer counting chamber and counted manually. To automate this counting, this research developed a system for analysing and counting sperm concentration, called Automated Analysis of Sperm Concentration Counters (A2SC2), based on Otsu threshold segmentation and morphological operations. The data used were fresh sperm samples from 10 people, analysed directly in the laboratory. The test results using the A2SC2 method showed an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
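The counting stage of an A2SC2-style analysis can be sketched with Otsu thresholding, morphological cleanup and connected-component counting, with the concentration then following from the imaged chamber volume. The scikit-image sketch below is illustrative only; the size filter and the chamber-volume calibration are hypothetical parameters, not the paper's values.

```python
from skimage import io, color, filters, morphology, measure

def count_sperm(image_path, min_area_px=30, counted_volume_ml=1e-5):
    """Count sperm heads in a counting-chamber image and estimate concentration.

    Otsu thresholding separates sperm heads (dark on a bright background),
    small artefacts are removed by opening and a size filter, and connected
    components are counted.  `counted_volume_ml` is the chamber volume imaged
    in the field of view (a hypothetical calibration value).
    Returns (count, sperm per ml).
    """
    grey = color.rgb2gray(io.imread(image_path))
    heads = grey < filters.threshold_otsu(grey)            # dark objects
    heads = morphology.binary_opening(heads, morphology.disk(1))
    heads = morphology.remove_small_objects(heads, min_size=min_area_px)
    count = int(measure.label(heads).max())
    return count, count / counted_volume_ml
```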
Brief communication: Landslide motion from cross correlation of UAV-derived morphological attributes
NASA Astrophysics Data System (ADS)
Peppa, Maria V.; Mills, Jon P.; Moore, Phil; Miller, Pauline E.; Chambers, Jonathan E.
2017-12-01
Unmanned aerial vehicles (UAVs) can provide observations of high spatio-temporal resolution to enable operational landslide monitoring. In this research, the construction of digital elevation models (DEMs) and orthomosaics from UAV imagery is achieved using structure-from-motion (SfM) photogrammetric procedures. The study examines the additional value that the morphological attribute of openness, amongst others, can provide to surface deformation analysis. Image cross-correlation functions and DEM subtraction techniques are applied to the SfM outputs. Through the proposed integrated analysis, the automated quantification of a landslide's motion over time is demonstrated, with implications for the wider interpretation of landslide kinematics via UAV surveys.
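The image cross-correlation and DEM-subtraction steps can be sketched with scikit-image's phase correlation and a simple raster difference, assuming co-registered patches of equal shape and known ground sampling distance; this is a generic sketch of the approach rather than the study's processing chain.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def landslide_motion(ortho_t0, ortho_t1, dem_t0, dem_t1, gsd_m=0.05):
    """Estimate planimetric displacement and elevation change between epochs.

    `ortho_t0`/`ortho_t1` are co-registered greyscale orthomosaic patches,
    `dem_t0`/`dem_t1` the corresponding DEM patches, and `gsd_m` the ground
    sampling distance in metres.  Returns (horizontal displacement in metres,
    mean elevation change in metres).
    """
    shift, error, _ = phase_cross_correlation(ortho_t0, ortho_t1,
                                              upsample_factor=10)
    displacement_m = float(np.hypot(*shift) * gsd_m)   # sub-pixel shift -> metres
    dz_mean = float(np.mean(dem_t1 - dem_t0))          # DEM subtraction
    return displacement_m, dz_mean
```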
Digital microfluidics for automated hanging drop cell spheroid culture.
Aijian, Andrew P; Garrell, Robin L
2015-06-01
Cell spheroids are multicellular aggregates, grown in vitro, that mimic the three-dimensional morphology of physiological tissues. Although there are numerous benefits to using spheroids in cell-based assays, the adoption of spheroids in routine biomedical research has been limited, in part, by the tedious workflow associated with spheroid formation and analysis. Here we describe a digital microfluidic platform that has been developed to automate liquid-handling protocols for the formation, maintenance, and analysis of multicellular spheroids in hanging drop culture. We show that droplets of liquid can be added to and extracted from through-holes, or "wells," fabricated in the bottom plate of a digital microfluidic device, enabling the formation and assaying of hanging drops. Using this digital microfluidic platform, spheroids of mouse mesenchymal stem cells were formed and maintained in situ for 72 h, exhibiting good viability (>90%) and size uniformity (% coefficient of variation <10% intraexperiment, <20% interexperiment). A proof-of-principle drug screen was performed on human colorectal adenocarcinoma spheroids to demonstrate the ability to recapitulate physiologically relevant phenomena such as insulin-induced drug resistance. With automatable and flexible liquid handling, and a wide range of in situ sample preparation and analysis capabilities, the digital microfluidic platform provides a viable tool for automating cell spheroid culture and analysis. © 2014 Society for Laboratory Automation and Screening.
Characterization of the Complete Fiber Network Topology of Planar Fibrous Tissues and Scaffolds
D'Amore, Antonio; Stella, John A.; Wagner, William R.; Sacks, Michael S.
2010-01-01
Understanding how engineered tissue scaffold architecture affects cell morphology, metabolism and phenotypic expression, as well as predicting material mechanical behavior, has recently received increased attention. In the present study, an image-based analysis approach that provides an automated tool to characterize engineered tissue fiber network topology is presented. Micro-architectural features that fully defined fiber network topology were detected and quantified, including fiber orientation, connectivity, intersection spatial density, and diameter. Algorithm performance was tested using scanning electron microscopy (SEM) images of electrospun poly(ester urethane)urea (ES-PEUU) scaffolds. SEM images of rabbit mesenchymal stem cell (MSC) seeded collagen gel scaffolds and decellularized rat carotid arteries were also analyzed to further evaluate the ability of the algorithm to capture fiber network morphology regardless of scaffold type and the evaluated size scale. The image analysis procedure was validated qualitatively and quantitatively, comparing fiber network topology manually detected by human operators (n=5) with that automatically detected by the algorithm. Correlation values between manually detected and algorithm-detected results for the fiber angle distribution and for the fiber connectivity distribution were 0.86 and 0.93, respectively. Algorithm-detected fiber intersections and fiber diameter values were comparable (within the mean ± standard deviation) with those detected by human operators. This automated approach identifies and quantifies fiber network morphology as demonstrated for three relevant scaffold types and provides a means to: (1) guarantee objectivity, (2) significantly reduce analysis time, and (3) potentiate broader analysis of scaffold architecture effects on cell behavior and tissue development both in vitro and in vivo. PMID:20398930
Automated classification of cell morphology by coherence-controlled holographic microscopy
NASA Astrophysics Data System (ADS)
Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim
2017-08-01
In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy enabling quantitative phase imaging for the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable aid in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all preconditions for the accurate automated analysis of live cell behavior while enabling noninvasive label-free imaging with sufficient contrast and high-spatiotemporal phase sensitivity.
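The comparison between morphometric and quantitative-phase feature sets can be sketched with scikit-learn cross-validation; the feature matrices, labels and classifier settings below are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def compare_feature_sets(X_morpho, X_phase, y, cv=5):
    """Cross-validated accuracy of two classifiers on two feature sets.

    `X_morpho` holds morphometric features, `X_phase` quantitative phase
    features, and `y` the cell-class labels.  Returns a dict mapping
    'classifier_featureset' to mean cross-validated accuracy.
    """
    classifiers = {
        'svm': make_pipeline(StandardScaler(), SVC()),
        'random_forest': RandomForestClassifier(n_estimators=200, random_state=0),
    }
    results = {}
    for clf_name, clf in classifiers.items():
        for feat_name, X in [('morphometric', X_morpho), ('phase', X_phase)]:
            scores = cross_val_score(clf, X, y, cv=cv)
            results[f'{clf_name}_{feat_name}'] = float(np.mean(scores))
    return results
```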
Gilhodes, Jean-Claude; Julé, Yvon; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz
2017-01-01
Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible histological quantitative analysis. One of the major limitations of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis we developed an automated software-based histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Fibrosis was expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that the tissue density indexes gave a very accurate and reliable quantification of the morphological changes induced by BLM, even at the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between the automated analysis and the above standard evaluation methods. This correlation establishes automated analysis as a novel end-point measure of BLM-induced lung fibrosis in mice, which will be very valuable for future preclinical drug explorations.
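The tile-based density indexes can be sketched as follows: divide the bronchus/vessel-excluded tissue mask into micro-tiles, compute the tissue fraction per tile, and summarize with the mean density and the fraction of high-density tiles. The tile size and high-density cutoff below are hypothetical, not the calibrated values of the study.

```python
import numpy as np

def tissue_density_indexes(tissue_mask, tile_px=64, high_cutoff=0.5):
    """Compute mean tissue density and high-density tile frequency.

    `tissue_mask` is a boolean image where True marks tissue pixels (large
    bronchi and vessels already excluded).  The image is divided into
    non-overlapping tiles of `tile_px` x `tile_px`; for each tile the tissue
    fraction is computed.  Returns (mean tissue density, fraction of tiles
    whose density exceeds `high_cutoff`).
    """
    h, w = tissue_mask.shape
    densities = []
    for y in range(0, h - tile_px + 1, tile_px):
        for x in range(0, w - tile_px + 1, tile_px):
            tile = tissue_mask[y:y + tile_px, x:x + tile_px]
            densities.append(tile.mean())
    densities = np.asarray(densities)
    return float(densities.mean()), float((densities > high_cutoff).mean())
```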
[Advances in automatic detection technology for images of thin blood film of malaria parasite].
Zhang, Juan-Sheng; Zhang, Di-Qiang; Wang, Wei; Wei, Xiao-Guang; Wang, Zeng-Guo
2017-05-05
This paper reviews the computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification. The principles and implementation methods of each step are then given in detail. In addition, the promotion and application of automatic detection technology to thick blood film smears are put forward as questions worthy of study, and a perspective of future work towards realizing automated microscopy diagnosis of malaria is provided.
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2016-03-01
The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason a computer-assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells into 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated into the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells and thereby shorten the examination time.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical property of the cornea and thus clear eyesight is threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis and a relatively strong correlation was found.
IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.
Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M
2016-04-01
Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier. © 2015 Society for Laboratory Automation and Screening.
Semi-Automated Digital Image Analysis of Pick’s Disease and TDP-43 Proteinopathy
Irwin, David J.; Byrne, Matthew D.; McMillan, Corey T.; Cooper, Felicia; Arnold, Steven E.; Lee, Edward B.; Van Deerlin, Vivianna M.; Xie, Sharon X.; Lee, Virginia M.-Y.; Grossman, Murray; Trojanowski, John Q.
2015-01-01
Digital image analysis of histology sections provides reliable, high-throughput methods for neuropathological studies, but data are scant in frontotemporal lobar degeneration (FTLD), which poses an added challenge for study due to its morphologically diverse pathologies. Here, we describe a novel method of semi-automated digital image analysis in FTLD subtypes including: Pick's disease (PiD, n=11) with tau-positive intracellular inclusions and neuropil threads, and TDP-43 pathology type C (FTLD-TDPC, n=10), defined by TDP-43-positive aggregates predominantly in large dystrophic neurites. To do this, we examined three FTLD-associated cortical regions: mid-frontal gyrus (MFG), superior temporal gyrus (STG) and anterior cingulate gyrus (ACG) by immunohistochemistry. We used a color deconvolution process to isolate signal from the chromogen and applied both object detection and intensity thresholding algorithms to quantify pathological burden. We found object-detection algorithms had good agreement with gold-standard manual quantification of tau- and TDP-43-positive inclusions. Our sampling method was reliable across three separate investigators and we obtained similar results in a pilot analysis using open-source software. Regional comparisons using these algorithms find differences in regional anatomic disease burden between PiD and FTLD-TDP not detected using traditional ordinal scale data, suggesting digital image analysis is a powerful tool for clinicopathological studies in morphologically diverse FTLD syndromes. PMID:26538548
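The color-deconvolution and intensity-thresholding steps map naturally onto scikit-image: rgb2hed separates the hematoxylin and DAB stains, and a threshold on the DAB channel yields a percent-area burden. The sketch below is a generic illustration; the DAB threshold is an assumption rather than the study's calibrated parameter, and the object-detection algorithm is not reproduced.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed

def dab_burden(image_path, dab_thresh=0.03):
    """Estimate immunostain burden as the DAB-positive area fraction.

    Colour deconvolution (rgb2hed) separates hematoxylin, eosin and DAB;
    pixels whose DAB optical density exceeds `dab_thresh` are counted as
    positive.  Returns the positive fraction of the image (0-1).
    """
    rgb = io.imread(image_path)
    hed = rgb2hed(rgb)                  # channels: hematoxylin, eosin, DAB
    dab = hed[..., 2]
    return float(np.mean(dab > dab_thresh))
```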
Integrating human and machine intelligence in galaxy morphology classification tasks
NASA Astrophysics Data System (ADS)
Beck, Melanie R.; Scarlata, Claudia; Fortson, Lucy F.; Lintott, Chris J.; Simmons, B. D.; Galloway, Melanie A.; Willett, Kyle W.; Dickinson, Hugh; Masters, Karen L.; Marshall, Philip J.; Wright, Darryl
2018-06-01
Quantifying galaxy morphology is a challenging yet scientifically rewarding task. As the scale of data continues to increase with upcoming surveys, traditional classification methods will struggle to handle the load. We present a solution through an integration of visual and automated classifications, preserving the best features of both human and machine. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 (GZ2) project. We reprocess the top-level question of the GZ2 decision tree with a Bayesian classification aggregation algorithm dubbed SWAP, originally developed for the Space Warps gravitational lens project. Through a simple binary classification scheme, we increase the classification rate nearly 5-fold classifying 226 124 galaxies in 92 d of GZ2 project time while reproducing labels derived from GZ2 classification data with 95.7 per cent accuracy. We next combine this with a Random Forest machine learning algorithm that learns on a suite of non-parametric morphology indicators widely used for automated morphologies. We develop a decision engine that delegates tasks between human and machine and demonstrate that the combined system provides at least a factor of 8 increase in the classification rate, classifying 210 803 galaxies in just 32 d of GZ2 project time with 93.1 per cent accuracy. As the Random Forest algorithm requires a minimal amount of computational cost, this result has important implications for galaxy morphology identification tasks in the era of Euclid and other large-scale surveys.
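The machine side of such a human-machine system can be sketched in scikit-learn: train a Random Forest on non-parametric morphology indicators and delegate to visual classification any galaxy whose maximum class probability falls below a confidence cutoff. The feature arrays and the 0.8 cutoff are assumptions, and the SWAP aggregation itself is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_or_delegate(X_train, y_train, X_new, confidence=0.8):
    """Random Forest morphology classification with human delegation.

    Galaxies whose maximum predicted class probability is below `confidence`
    are flagged for visual (human) classification instead of being labelled
    automatically.  Returns (machine labels, boolean delegate mask).
    """
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(X_train, y_train)
    proba = rf.predict_proba(X_new)
    labels = rf.classes_[np.argmax(proba, axis=1)]
    delegate = np.max(proba, axis=1) < confidence
    return labels, delegate
```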
Chitsaz, Daryan; Morales, Daniel; Law, Chris; Kania, Artur
2015-01-01
During neural circuit development, attractive or repulsive guidance cue molecules direct growth cones (GCs) to their targets by eliciting cytoskeletal remodeling, which is reflected in their morphology. The experimental power of in vitro neuronal cultures to assay this process and its molecular mechanisms is well established; however, a method to rapidly find and quantify multiple morphological aspects of GCs is lacking. To this end, we have developed a free, easy to use, and fully automated Fiji macro, Conographer, which accurately identifies and measures many morphological parameters of GCs in 2D explant culture images. These measurements are then subjected to principal component analysis and k-means clustering to mathematically classify the GCs as “collapsed” or “extended”. The morphological parameters measured for each GC are found to be significantly different between collapsed and extended GCs, and are sufficient to classify GCs as such with the same level of accuracy as human observers. Application of a known collapse-inducing ligand results in significant changes in all parameters, resulting in an increase in ‘collapsed’ GCs determined by k-means clustering, as expected. Our strategy provides a powerful tool for exploring the relationship between GC morphology and guidance cue signaling, which in particular will greatly facilitate high-throughput studies of the effects of drugs, gene silencing or overexpression, or any other experimental manipulation in the context of an in vitro axon guidance assay. PMID:26496644
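The classification step can be sketched with scikit-learn: standardize the growth-cone measurements, reduce them with principal component analysis and split them into two k-means clusters; deciding which cluster is "collapsed" is done here, hypothetically, by comparing mean areas. This is a sketch of the general approach, not the Conographer macro.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_growth_cones(features, area_column=0, n_components=3):
    """Label growth cones as collapsed (True) or extended (False).

    `features` is an (n_cones, n_features) array of morphological
    measurements.  PCA on standardized features is followed by 2-cluster
    k-means; the cluster with the smaller mean area (column `area_column`)
    is taken to be the collapsed one.
    """
    z = StandardScaler().fit_transform(features)
    pcs = PCA(n_components=n_components).fit_transform(z)
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
    mean_area = [features[clusters == c, area_column].mean() for c in (0, 1)]
    collapsed_cluster = int(np.argmin(mean_area))
    return clusters == collapsed_cluster
```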
Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.
Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A
2016-04-01
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.
Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A
2016-11-01
Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.
Soleymani, Ali; Pennekamp, Frank; Petchey, Owen L.; Weibel, Robert
2015-01-01
Recent advances in tracking technologies such as GPS or video tracking systems describe the movement paths of individuals in unprecedented details and are increasingly used in different fields, including ecology. However, extracting information from raw movement data requires advanced analysis techniques, for instance to infer behaviors expressed during a certain period of the recorded trajectory, or gender or species identity in case data is obtained from remote tracking. In this paper, we address how different movement features affect the ability to automatically classify the species identity, using a dataset of unicellular microbes (i.e., ciliates). Previously, morphological attributes and simple movement metrics, such as speed, were used for classifying ciliate species. Here, we demonstrate that adding advanced movement features, in particular such based on discrete wavelet transform, to morphological features can improve classification. These results may have practical applications in automated monitoring of waste water facilities as well as environmental monitoring of aquatic systems. PMID:26680591
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method is hopefully to be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
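The core idea can be sketched with SciPy's grey-scale opening: estimate the baseline with a flat structuring element of growing width and stop when the estimate stabilizes, then subtract it from the spectrum. The adaptive structuring-element selection of the proposed method is replaced here by a simple convergence test, and the window sizes and tolerance are hypothetical.

```python
import numpy as np
from scipy.ndimage import grey_opening

def morphological_baseline(spectrum, start=5, step=10, max_size=301, tol=1e-3):
    """Estimate a slowly varying baseline of a 1-D spectrum.

    Grey-scale opening with a growing flat structuring element removes
    ever-wider peaks; iteration stops when the baseline estimate changes by
    less than `tol` (relative to the spectrum range).
    Returns (baseline, baseline-corrected spectrum).
    """
    spectrum = np.asarray(spectrum, dtype=float)
    baseline = grey_opening(spectrum, size=start)
    for size in range(start + step, max_size, step):
        new = grey_opening(spectrum, size=size)
        if np.max(np.abs(new - baseline)) < tol * np.ptp(spectrum):
            baseline = new
            break
        baseline = new
    return baseline, spectrum - baseline
```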
An Automated Energy Detection Algorithm Based on Morphological Filter...
US Army Research Laboratory, ARL-TR-8270
2018-01-01
...collected data. These statistical techniques are under the area of descriptive statistics, which is a methodology to condense the data in quantitative...
Automated classification of articular cartilage surfaces based on surface texture.
Stachowiak, G P; Stachowiak, G W; Podsiadlo, P
2006-11-01
In this study the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (ESEM) images of cartilage surfaces, which formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has potential to become a useful tool in medical diagnostics.
Automated analysis and classification of melanocytic tumor on skin whole slide images.
Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal
2018-06-01
This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gasecka, Alicja; Tanti, Arnaud; Lutz, Pierre-Eric; Mechawar, Naguib; Cote, Daniel C.
2017-02-01
Adverse childhood experiences have lasting detrimental effects on mental health and are strongly associated with impaired cognition and increased risk of developing psychopathologies. Preclinical and neuroimaging studies have suggested that traumatic events during brain development can affect cerebral myelination, particularly in areas and tracts implicated in mood and emotion. Although current neuroimaging techniques are quite powerful, they lack the resolution to infer myelin integrity at the cellular level. Recently, coherent Raman microscopy has accomplished cellular-level imaging of myelin sheaths in the nervous system. However, a quantitative morphometric analysis of nerve fibers still remains a challenge, in particular in the brain, where fibres exhibit small diameters and varying local orientation. In this work, we developed an automated myelin identification and analysis method that is capable of providing a complete picture of axonal myelination and morphology in brain samples. This method performs three main procedures: 1) it detects molecular anisotropy of membrane phospholipids based on polarization-resolved coherent Raman microscopy, 2) identifies regions of different molecular organization, and 3) calculates morphometric features of myelinated axons (e.g. myelin thickness, g-ratio). We applied this method to monitor white matter areas from adult suicides who had suffered early-life adversity and depression, compared with depressed adult suicides and psychiatrically healthy controls. We demonstrate that our method allows for the rapid acquisition and automated analysis of neuronal network morphology and myelination. This is especially useful for clinical and comparative studies, and may greatly enhance the understanding of processes underlying the neurobiological and psychopathological consequences of child abuse.
Karaçalı, Bilge; Vamvakidou, Alexandra P; Tözeren, Aydın
2007-01-01
Background Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Methods Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin and progesterone receptor, were digitized and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Results Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. Conclusion The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclear markers, suitable for rapid large-scale investigation of anti-cancer compounds for drug development. PMID:17822559
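As a rough illustration of the pixel-classification step, the sketch below clusters image pixels into five categories with k-means. The synthetic image and cluster count mirror the description above but do not reproduce the published protocol.

# Illustrative sketch, not the published protocol: k-means clustering of image
# pixels into five categories, as a stand-in for stain-based pixel classification.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
image = rng.random((128, 128, 3))             # hypothetical RGB cross-section image
pixels = image.reshape(-1, 3)                 # one row per pixel

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
label_image = km.labels_.reshape(image.shape[:2])

# Pixels sharing a label can then be grouped into regions expressing a given marker.
for k in range(5):
    print("class", k, "pixel fraction:", np.mean(label_image == k))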
Kopanja, Lazar; Kovacevic, Zorana; Tadic, Marin; Žužek, Monika Cecilija; Vrecl, Milka; Frangež, Robert
2018-04-23
Detailed shape analysis of cells is important to better understand the physiological mechanisms of toxins and determine their effects on cell morphology. This study aimed to develop a procedure for accurate morphological analysis of cell shape and use it as a tool to estimate toxin activity. With the aim of optimizing the method of cell morphology analysis, we determined the influence of ostreolysin A and pleurotolysin B complex (OlyA/PlyB) on the morphology of murine neuronal NG108-15 cells. A computational method was introduced and successfully applied to quantify morphological attributes of the NG108-15 cell line before and after 30 and 60 min exposure to OlyA/PlyB using confocal microscopy. The modified circularity measure [Formula: see text] for shape analysis was applied, which defines the degree to which the shape of the neuron differs from a perfect circle. It enables better detection of small changes in the shape of cells, making the outcome easily detectable numerically. Additionally, we analyzed the influence of OlyA/PlyB on the cell area, allowing us to detect the cells with blebs. This is important because the formation of plasma membrane protrusions such as blebs often reflects cell injury that leads to necrotic cell death. In summary, we offer a novel analytical method of neuronal cell shape analysis and its correlation with the toxic effects of the pore-forming OlyA/PlyB toxin in situ.
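The sketch below computes the standard isoperimetric circularity, 4*pi*area/perimeter^2, for a binary cell mask. It only illustrates the idea of a circularity-based shape descriptor and an area measurement; it does not reproduce the paper's modified circularity measure.

# Sketch of a standard circularity measure (4*pi*area / perimeter**2), shown only to
# illustrate the idea; the paper's modified circularity measure is not reproduced here.
import numpy as np
from skimage.draw import disk
from skimage.measure import label, regionprops

mask = np.zeros((200, 200), dtype=bool)
rr, cc = disk((100, 100), 60)
mask[rr, cc] = True                      # hypothetical binary cell mask

props = regionprops(label(mask))[0]
circularity = 4.0 * np.pi * props.area / props.perimeter ** 2
print("circularity (about 1.0 for a circle):", round(circularity, 3))
print("cell area in pixels:", props.area)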
Automated Dermoscopy Image Analysis of Pigmented Skin Lesions
Baldi, Alfonso; Quartulli, Marco; Murace, Raffaele; Dragonetti, Emanuele; Manganaro, Mario; Guerra, Oscar; Bizzi, Stefano
2010-01-01
Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible by the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning-curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval systems (CBIR). PMID:24281070
Automated selection of BI-RADS lesion descriptors for reporting calcifications in mammograms
NASA Astrophysics Data System (ADS)
Paquerault, Sophie; Jiang, Yulei; Nishikawa, Robert M.; Schmidt, Robert A.; D'Orsi, Carl J.; Vyborny, Carl J.; Newstead, Gillian M.
2003-05-01
We are developing an automated computer technique to describe calcifications in mammograms according to the BI-RADS lexicon. We evaluated this technique by its agreement with radiologists' description of the same lesions. Three expert mammographers reviewed our database of 90 cases of digitized mammograms containing clustered microcalcifications and described the calcifications according to BI-RADS. In our study, the radiologists used only 4 of the 5 calcification distribution descriptors and 5 of the 14 calcification morphology descriptors contained in BI-RADS. Our computer technique was therefore designed specifically for these 4 calcification distribution descriptors and 5 calcification morphology descriptors. For calcification distribution, 4 linear discriminant analysis (LDA) classifiers were developed using 5 computer-extracted features to produce scores of how well each descriptor describes a cluster. Similarly, for calcification morphology, 5 LDAs were designed using 10 computer-extracted features. We trained the LDAs using only the BI-RADS data reported by the first radiologist and compared the computer output to the descriptor data reported by all 3 radiologists (for the first radiologist, the leave-one-out method was used). The computer output consisted of the best calcification distribution descriptor and the best 2 calcification morphology descriptors. The results of the comparison with the data from each radiologist, respectively, were: for calcification distribution, percent agreement, 74%, 66%, and 73%, kappa value, 0.44, 0.36, and 0.46; for calcification morphology, percent agreement, 83%, 77%, and 57%, kappa value, 0.78, 0.70, and 0.44. These results indicate that the proposed computer technique can select BI-RADS descriptors in good agreement with radiologists.
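A minimal sketch of the descriptor-selection idea follows: a single multi-class LDA stands in for the per-descriptor LDA scorers described above, and agreement with a reader is summarized by percent agreement and Cohen's kappa. The feature values and labels are synthetic stand-ins.

# Hedged sketch: a multi-class LDA selects a BI-RADS distribution descriptor from
# computer-extracted features; agreement with a reader is measured by Cohen's kappa.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
X = rng.normal(size=(90, 5))                 # 5 computer-extracted distribution features
radiologist = rng.integers(0, 4, size=90)    # one of 4 distribution descriptors per cluster

lda = LinearDiscriminantAnalysis().fit(X, radiologist)
computer = lda.predict(X)                    # best-scoring descriptor per cluster

agreement = np.mean(computer == radiologist)
kappa = cohen_kappa_score(computer, radiologist)
print("percent agreement: %.0f%%, kappa: %.2f" % (100 * agreement, kappa))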
1998-01-01
[Fragmentary record: the recoverable text discusses the morphologies and associated terminology of the ferrography wear particle atlas (13), their near-universal adoption, and the prospect of a universal atlas coupled to a coding scheme accessible via the World-Wide Web (WWW); cited sources include 'Wear Particle Atlas (Revised)', Naval Air Eng. Centre Report No. NAEC 92 163 (1982), and Ruff A.W., 'Characterisation of debris particles'.]
Bray, Mark-Anthony; Singh, Shantanu; Han, Han; Davis, Chadwick T.; Borgeson, Blake; Hartland, Cathy; Kost-Alimova, Maria; Gustafsdottir, Sigrun M.; Gibson, Christopher C.; Carpenter, Anne E.
2016-01-01
In morphological profiling, quantitative data are extracted from microscopy images of cells to identify biologically relevant similarities and differences among samples based on these profiles. This protocol describes the design and execution of experiments using Cell Painting, a morphological profiling assay that multiplexes six fluorescent dyes, imaged in five channels, to reveal eight broadly relevant cellular components or organelles. Cells are plated in multi-well plates, perturbed with the treatments to be tested, stained, fixed, and imaged on a high-throughput microscope. Then, automated image analysis software identifies individual cells and measures ~1,500 morphological features (various measures of size, shape, texture, intensity, etc.) to produce a rich profile suitable for detecting subtle phenotypes. Profiles of cell populations treated with different experimental perturbations can be compared to suit many goals, such as identifying the phenotypic impact of chemical or genetic perturbations, grouping compounds and/or genes into functional pathways, and identifying signatures of disease. Cell culture and image acquisition take two weeks; feature extraction and data analysis take an additional 1-2 weeks. PMID:27560178
NASA Astrophysics Data System (ADS)
Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas
2010-11-01
Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model resulting in Raman images, demonstrating good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets from mapping data, despite lengthy mapping times, due to the additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future; larger pixel sizes (and faster mapping), however, may be more feasible for clinical application.
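A hedged sketch of principal-component-fed LDA for spectral classification follows; the spectra, class labels, number of retained components and train/test split are illustrative assumptions rather than the authors' settings.

# A minimal sketch of principal-component-fed LDA for spectral classification; the
# spectra, group labels and component count here are invented for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
spectra = rng.normal(size=(600, 1024))       # hypothetical Raman spectra (rows)
tissue_class = rng.integers(0, 3, size=600)  # hypothetical pathology groups

X_train, X_test, y_train, y_test = train_test_split(
    spectra, tissue_class, test_size=0.3, random_state=0)

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Remaining map spectra could then be projected onto the model to build classified images.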
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, S. A.; Lee, H. J.; Oh, Y. J., E-mail: yjoh@hanbat.ac.kr
We analyzed the effect of crystallographic anisotropy on the morphological evolution of a 12-nm-thick gold film during solid-state dewetting at high temperatures using an automated indexing tool in a transmission electron microscope. Dewetting initiated at grain-boundary triple junctions adjacent to large grains resulting from abnormal grain growth driven by (111) texture development. Voids at the junctions developed shapes with faceted edges bounded by low-index crystal planes. The kinetic mobility of the edges varied with the crystal orientation normal to the edges, with a predominance of specific edges with the slowest retraction rates as the annealing time was increased.
Integrating Human and Machine Intelligence in Galaxy Morphology Classification Tasks
NASA Astrophysics Data System (ADS)
Beck, Melanie Renee
The large flood of data flowing from observatories presents significant challenges to astronomy and cosmology--challenges that will only be magnified by projects currently under development. Growth in both volume and velocity of astrophysics data is accelerating: whereas the Sloan Digital Sky Survey (SDSS) has produced 60 terabytes of data in the last decade, the upcoming Large Synoptic Survey Telescope (LSST) plans to register 30 terabytes per night starting in the year 2020. Additionally, the Euclid Mission will acquire imaging for 5 x 10^7 resolvable galaxies. The field of galaxy evolution faces a particularly challenging future as complete understanding often cannot be reached without analysis of detailed morphological galaxy features. Historically, morphological analysis has relied on visual classification by astronomers, accessing the human brain's capacity for advanced pattern recognition. However, this accurate but inefficient method falters when confronted with many thousands (or millions) of images. In the SDSS era, efforts to automate morphological classifications of galaxies (e.g., Conselice et al., 2000; Lotz et al., 2004) are reasonably successful and can distinguish between elliptical and disk-dominated galaxies with accuracies of 80%. While this is statistically very useful, a key problem with these methods is that they often cannot say which 80% of their samples are accurate. Furthermore, when confronted with the more complex task of identifying key substructure within galaxies, automated classification algorithms begin to fail. The Galaxy Zoo project uses a highly innovative approach to solving the scalability problem of visual classification. Displaying images of SDSS galaxies to volunteers via a simple and engaging web interface, www.galaxyzoo.org asks people to classify images by eye. Within the first year hundreds of thousands of members of the general public had classified each of the 1 million SDSS galaxies an average of 40 times. Galaxy Zoo thus solved both the visual classification problem of time efficiency and improved accuracy by producing a distribution of independent classifications for each galaxy. While crowd-sourced galaxy classifications have proven their worth, challenges remain before establishing this method as a critical and standard component of the data processing pipelines for the next generation of surveys. In particular, though innovative, crowd-sourcing techniques do not have the capacity to handle the data volume and rates expected in the next generation of surveys. Automated algorithms will instead be delegated to handle the majority of the classification tasks, freeing citizen scientists to contribute their efforts on subtler and more complex assignments. This thesis presents a solution through an integration of visual and automated classifications, preserving the best features of both human and machine. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 (GZ2) project. We reprocess the top-level question of the GZ2 decision tree with a Bayesian classification aggregation algorithm dubbed SWAP, originally developed for the Space Warps gravitational lens project. Through a simple binary classification scheme we increase the classification rate nearly 5-fold, classifying 226,124 galaxies in 92 days of GZ2 project time while reproducing labels derived from GZ2 classification data with 95.7% accuracy.
We next combine this with a Random Forest machine learning algorithm that learns on a suite of non-parametric morphology indicators widely used for automated morphological classification. We develop a decision engine that delegates tasks between human and machine and demonstrate that the combined system provides a factor of 11.4 increase in the classification rate, classifying 210,803 galaxies in just 32 days of GZ2 project time with 93.1% accuracy. As the Random Forest algorithm incurs minimal computational cost, this result has important implications for galaxy morphology identification tasks in the era of Euclid and other large-scale surveys.
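The following sketch illustrates the delegation idea under invented data and an assumed confidence threshold: a Random Forest classifies galaxies from morphology indicators, and low-confidence objects are routed to human (citizen science) classification. It is a conceptual sketch, not the thesis' decision engine.

# Sketch (with invented data and thresholds) of delegating classifications between a
# machine and human reviewers: confident Random Forest predictions are accepted,
# low-confidence galaxies are routed to visual classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 8))               # non-parametric morphology indicators
y = (X[:, 0] + 0.5 * rng.normal(size=5000) > 0).astype(int)  # e.g. smooth vs featured

X_train, X_pool, y_train, y_pool = train_test_split(X, y, test_size=0.5, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

proba = rf.predict_proba(X_pool).max(axis=1)
confident = proba >= 0.8                      # assumed confidence threshold
machine_labels = rf.predict(X_pool[confident])
to_humans = np.where(~confident)[0]           # indices delegated to citizen scientists
print("machine-classified:", confident.sum(), "delegated to humans:", to_humans.size)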
NASA Astrophysics Data System (ADS)
Jorge, Marco G.; Brennand, Tracy A.; Perkins, Andrew J.; Neudorf, Christina; Hillier, John K.; Cripps, Jonathan E.; Spagnolo, Matteo; Dinney, Meaghan; Storrar, Robert D.
2016-04-01
Mapper-dependent (subjective) differences in drumlin morphometry have received little attention even though over one hundred thousand drumlins have been manually mapped and used to characterize drumlin morphometry and infer drumlin genesis, and several obstacles to objectivity in drumlin mapping can be identified. Due to uncertainty in drumlin genesis, drumlins remain putative morphogenetic landforms, yet still lack a complete single morphological definition. Additionally, post-formational degradation of relict subglacial landscapes challenges our ability: 1) to identify all drumlins in the landscape (some [potential] drumlins may be too degraded to be mapped and are thus excluded from the inventory), with implications for the analysis of field properties (e.g., spatial arrangement and autocorrelation); and 2) to accurately map the original footprint (i.e., shape and size). These issues (definitional ambiguity; degradation of original drumlin topography) are a problem for both manual and automated mapping. Automation is touted as the solution to the subjectivity of manual mapping, but the quality of any automated method directly depends on the quality of the operational definition (ruleset) it draws upon; if drumlin definitions are subjective (expert-dependent), so will be the automated algorithms relying on them. Additionally, recognizing highly degraded drumlins is, arguably, more difficult to do automatically than manually (visually). Because a single morphologic definition is missing, mapping is expert-dependent. Therefore, quantifying the magnitude of inter-mapper differences is important for fully understanding the morphology of drumlins, constraining the robustness of drumlin morphometric inventories and assisting in the development of stricter operational definitions/mapping guidelines. We present the results of an experiment to quantify inter-mapper differences in mapped drumlin morphometry. All participants mapped 42 morphologically diverse drumlins in the Puget Lowland, WA at 2 spatial resolutions (1.8 m and 10.8 m cell size DEMs) in a GIS, using exactly the same base maps (analytical hillshade; semi-transparent elevation; contours) and informed by the same loose operational definition (e.g., drumlins delimited at their base by concave breaks in slope). Preliminary results (3 mappers) indicate that differences between manual mappers are substantial. For example, for the footprints mapped from the 10.8 m terrain data: average length ranges from 4603 m to 5454 m, and the mean absolute difference in length from 693 m to 1101 m; average elongation ratio (ER) ranges from 5.0 to 6.1; average footprint area ranges from 0.39 km² to 0.50 km².
A volumetric pulmonary CT segmentation method with applications in emphysema assessment
NASA Astrophysics Data System (ADS)
Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.
2006-03-01
A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea as well as primary bronchi, and then the pulmonary region is identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high resolution CT exams, due to the presence of aerial and vascular structures. Nevertheless, the average error is smaller than the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method in pulmonary emphysema that also classifies emphysema according to its severity degree. Two clinically proven thresholds are applied to identify regions with severe emphysema and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms concerning the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
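A minimal sketch follows, assuming a synthetic CT-like volume and an arbitrary threshold, of intensity discrimination followed by morphological clean-up and retention of the two largest connected components (the two lungs). It is not the published algorithm.

# Minimal sketch of intensity thresholding followed by morphological clean-up, in the
# spirit of the segmentation step described above; the volume and threshold are synthetic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
volume = rng.normal(loc=0.0, scale=1.0, size=(40, 128, 128))
volume[:, 30:60, 20:60] += 4.0                # hypothetical bright "lung-like" region
volume[:, 30:60, 70:110] += 4.0               # second region

binary = volume > 2.0                                                   # intensity discrimination
binary = ndimage.binary_opening(binary, structure=np.ones((1, 3, 3)))   # remove speckle
labels, n = ndimage.label(binary)                                       # connected components
sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
keep = 1 + np.argsort(sizes)[-2:]             # keep the two largest components (two lungs)
lungs = np.isin(labels, keep)
print("segmented voxels:", int(lungs.sum()), "components found:", n)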
Automated analysis of high-content microscopy data with deep learning.
Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J
2017-04-18
Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
An automated assay for the assessment of cardiac arrest in fish embryo.
Puybareau, Elodie; Genest, Diane; Barbeau, Emilie; Léonard, Marc; Talbot, Hugues
2017-02-01
Studies on fish embryo models are widely developed in research. They are used in several research fields including drug discovery or environmental toxicology. In this article, we propose an entirely automated assay to detect cardiac arrest in Medaka (Oryzias latipes) based on image analysis. We propose a multi-scale pipeline based on mathematical morphology. Starting from video sequences of entire wells in 24-well plates, we focus on the embryo, detect its heart, and ascertain whether or not the heart is beating based on intensity variation analysis. Our image analysis pipeline only uses commonly available operators. It has a low computational cost, allowing analysis at the same rate as acquisition. From an initial dataset of 3192 videos, 660 were discarded as unusable (20.7%), 655 of them correctly so (99.25%) and only 5 incorrectly so (0.75%). The 2532 remaining videos were used for our test. On these, 45 errors were made, leading to a success rate of 98.23%. Copyright © 2016 Elsevier Ltd. All rights reserved.
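As a rough illustration of the final decision step only, the sketch below simulates the mean intensity of a heart region over frames and flags cardiac arrest when the relative intensity fluctuation stays below an assumed threshold; it does not reproduce the published morphological pipeline.

# A sketch, with simulated frames, of deciding whether a heart region is beating from
# the temporal variation of its mean intensity (the threshold is illustrative only).
import numpy as np

rng = np.random.default_rng(6)
n_frames = 150
t = np.arange(n_frames)
beating = 100 + 5 * np.sin(2 * np.pi * t / 25) + rng.normal(0, 0.5, n_frames)
arrested = 100 + rng.normal(0, 0.5, n_frames)

def heart_is_beating(mean_intensity, rel_threshold=0.02):
    """Return True if the relative intensity fluctuation exceeds a threshold."""
    signal = mean_intensity - mean_intensity.mean()
    return signal.std() / mean_intensity.mean() > rel_threshold

print("beating embryo detected:", heart_is_beating(beating))
print("arrested embryo detected as beating:", heart_is_beating(arrested))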
Unsupervised automated high throughput phenotyping of RNAi time-lapse movies.
Failmezger, Henrik; Fröhlich, Holger; Tresch, Achim
2013-10-04
Gene perturbation experiments in combination with fluorescence time-lapse cell imaging are a powerful tool in reverse genetics. High content applications require tools for the automated processing of the large amounts of data. These tools include in general several image processing steps, the extraction of morphological descriptors, and the grouping of cells into phenotype classes according to their descriptors. This phenotyping can be applied in a supervised or an unsupervised manner. Unsupervised methods are suitable for the discovery of formerly unknown phenotypes, which are expected to occur in high-throughput RNAi time-lapse screens. We developed an unsupervised phenotyping approach based on Hidden Markov Models (HMMs) with multivariate Gaussian emissions for the detection of knockdown-specific phenotypes in RNAi time-lapse movies. The automated detection of abnormal cell morphologies allows us to assign a phenotypic fingerprint to each gene knockdown. By applying our method to the Mitocheck database, we show that a phenotypic fingerprint is indicative of a gene's function. Our fully unsupervised HMM-based phenotyping is able to automatically identify cell morphologies that are specific for a certain knockdown. Beyond the identification of genes whose knockdown affects cell morphology, phenotypic fingerprints can be used to find modules of functionally related genes.
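A hedged sketch of HMM-based phenotyping with multivariate Gaussian emissions is given below. It relies on the third-party hmmlearn package and invented per-cell descriptor time series, both of which are assumptions; the authors' implementation is not reproduced.

# Hedged sketch of HMM-based phenotyping of per-cell morphological descriptors over
# time; the hmmlearn package and the synthetic descriptors are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(7)
# Hypothetical time series of 4 morphological descriptors for two tracked cells.
cell_a = rng.normal(loc=0.0, size=(120, 4))
cell_b = rng.normal(loc=1.5, size=(80, 4))    # shifted descriptors mimic a phenotype change
X = np.vstack([cell_a, cell_b])
lengths = [len(cell_a), len(cell_b)]

hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=50, random_state=0)
hmm.fit(X, lengths)
states = hmm.predict(X, lengths)              # hidden morphology states per time point

# Per-movie state occupancy can serve as a phenotypic fingerprint of a knockdown.
fingerprint = np.bincount(states, minlength=3) / len(states)
print("state occupancy fingerprint:", np.round(fingerprint, 2))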
Comparison of the Cellient(™) automated cell block system and agar cell block method.
Kruger, A M; Stevens, M W; Kerley, K J; Carter, C D
2014-12-01
To compare the Cellient(TM) automated cell block system with the agar cell block method in terms of quantity and quality of diagnostic material and morphological, histochemical and immunocytochemical features. Cell blocks were prepared from 100 effusion samples using the agar method and Cellient system, and routinely sectioned and stained for haematoxylin and eosin and periodic acid-Schiff with diastase (PASD). A preliminary immunocytochemical study was performed on selected cases (27/100 cases). Sections were evaluated using a three-point grading system to compare a set of morphological parameters. Statistical analysis was performed using Fisher's exact test. Parameters assessing cellularity, presence of single cells and definition of nuclear membrane, nucleoli, chromatin and cytoplasm showed a statistically significant improvement on Cellient cell blocks compared with agar cell blocks (P < 0.05). No significant difference was seen for definition of cell groups, PASD staining or the intensity or clarity of immunocytochemical staining. A discrepant immunocytochemistry (ICC) result was seen in 21% (13/63) of immunostains. The Cellient technique is comparable with the agar method, with statistically significant results achieved for important morphological features. It demonstrates potential as an alternative cell block preparation method which is relevant for the rapid processing of fine needle aspiration samples, malignant effusions and low-cellularity specimens, where optimal cell morphology and architecture are essential. Further investigation is required to optimize immunocytochemical staining using the Cellient method. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Filgueiras-Rama, David; Pizarro, Gonzalo; Ibañez, Borja; Berenfeld, Omer; Boyers, Pamela; Gold, Jeffrey
2012-12-01
This paper presents an automated method to segment left ventricle (LV) tissues from functional and delayed-enhancement (DE) cardiac magnetic resonance imaging (MRI) scans using a sequential multi-step approach. First, a region of interest (ROI) is computed to create a subvolume around the LV using morphological operations and image arithmetic. From the subvolume, the myocardial contours are automatically delineated using difference of Gaussians (DoG) filters and GSV snakes. These contours are used as a mask to identify pathological tissues, such as fibrosis or scar, within the DE-MRI. The presented automated technique is able to accurately delineate the myocardium and identify the pathological tissue in patient sets. The results were validated by two expert cardiologists, and in one set the automated results are quantitatively and qualitatively compared with expert manual delineation. Furthermore, the method is patient-specific, performed on an entire patient MRI series. Thus, in addition to providing a quick analysis of individual MRI scans, the fully automated segmentation method is used for effectively tagging regions in order to reconstruct computerized patient-specific 3D cardiac models. These models can then be used in electrophysiological studies and surgical strategy planning.
Al-Fahdawi, Shumoos; Qahwaji, Rami; Al-Waisy, Alaa S; Ipson, Stanley; Ferdousi, Maryam; Malik, Rayaz A; Brahma, Arun
2018-07-01
Corneal endothelial cell abnormalities may be associated with a number of corneal and systemic diseases. Damage to the endothelial cells can significantly affect corneal transparency by altering hydration of the corneal stroma, which can lead to irreversible endothelial cell pathology requiring corneal transplantation. To date, quantitative analysis of endothelial cell abnormalities has been manually performed by ophthalmologists using time consuming and highly subjective semi-automatic tools, which require an operator interaction. We developed and applied a fully-automated and real-time system, termed the Corneal Endothelium Analysis System (CEAS) for the segmentation and computation of endothelial cells in images of the human cornea obtained by in vivo corneal confocal microscopy. First, a Fast Fourier Transform (FFT) Band-pass filter is applied to reduce noise and enhance the image quality to make the cells more visible. Secondly, endothelial cell boundaries are detected using watershed transformations and Voronoi tessellations to accurately quantify the morphological parameters of the human corneal endothelial cells. The performance of the automated segmentation system was tested against manually traced ground-truth images based on a database consisting of 40 corneal confocal endothelial cell images in terms of segmentation accuracy and obtained clinical features. In addition, the robustness and efficiency of the proposed CEAS system were compared with manually obtained cell densities using a separate database of 40 images from controls (n = 11), obese subjects (n = 16) and patients with diabetes (n = 13). The Pearson correlation coefficient between automated and manual endothelial cell densities is 0.9 (p < 0.0001) and a Bland-Altman plot shows that 95% of the data are between the 2SD agreement lines. We demonstrate the effectiveness and robustness of the CEAS system, and the possibility of utilizing it in a real world clinical setting to enable rapid diagnosis and for patient follow-up, with an execution time of only 6 seconds per image. Copyright © 2018 Elsevier B.V. All rights reserved.
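The sketch below strings together two of the ingredients named above, a simple FFT band-pass filter and marker-based watershed segmentation, on a synthetic image; the filter radii and marker spacing are assumptions, and the Voronoi refinement step is omitted.

# Sketch of the general recipe (band-pass filtering, then marker-based watershed);
# the image is synthetic and the cut-off radii / marker settings are assumed values.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

rng = np.random.default_rng(8)
image = ndimage.gaussian_filter(rng.random((256, 256)), 4)  # stand-in for a confocal image

# Simple FFT band-pass: keep an annulus of spatial frequencies.
f = np.fft.fftshift(np.fft.fft2(image))
yy, xx = np.indices(image.shape)
r = np.hypot(yy - image.shape[0] / 2, xx - image.shape[1] / 2)
f[(r < 3) | (r > 40)] = 0                     # assumed low/high cut-offs
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# Marker-based watershed on the filtered image to delineate cell-like regions.
markers_xy = peak_local_max(filtered, min_distance=10)
markers = np.zeros(image.shape, dtype=int)
markers[tuple(markers_xy.T)] = np.arange(1, len(markers_xy) + 1)
cells = watershed(-filtered, markers)
print("number of segmented regions:", cells.max())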
Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.
2001-01-01
During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology theory is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segment binary or grayscale images with irregular and complex shapes, its application in the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new completely automated methodology to find endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts about mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested by its application to real hyperspectral data obtained from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Some details about these data and reference results, obtained by well-known endmember extraction techniques, are provided in Section 2. Finally, in Section 5 we present our main conclusions.
Automated Classification of Pathology Reports.
Oleynik, Michel; Finger, Marcelo; Patrão, Diogo F C
2015-01-01
This work develops an automated classifier of pathology reports which infers the topography and the morphology classes of a tumor using codes from the International Classification of Diseases for Oncology (ICD-O). Data from 94,980 patients of the A.C. Camargo Cancer Center was used for training and validation of Naive Bayes classifiers, evaluated by the F1-score. Measures greater than 74% in the topographic group and 61% in the morphologic group are reported. Our work provides a successful baseline for future research for the classification of medical documents written in Portuguese and in other domains.
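A minimal sketch of a Naive Bayes text classifier for report coding, evaluated with the F1-score, is given below; the toy reports and topography codes are invented placeholders, and the real system's preprocessing of Portuguese text is not reproduced.

# Minimal sketch of a bag-of-words Naive Bayes classifier for report text, evaluated
# with the F1-score; the toy reports and codes below are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

reports = [
    "infiltrating ductal carcinoma of the breast",       # hypothetical report text
    "adenocarcinoma of the sigmoid colon",
    "ductal carcinoma breast upper outer quadrant",
    "sigmoid colon adenocarcinoma with ulceration",
]
topography = ["C50", "C18", "C50", "C18"]                 # ICD-O-style topography codes

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(reports[:2], topography[:2])
pred = clf.predict(reports[2:])
print("macro F1 on held-out reports:", f1_score(topography[2:], pred, average="macro"))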
Microscopic image analysis for reticulocyte based on watershed algorithm
NASA Astrophysics Data System (ADS)
Wang, J. Q.; Liu, G. F.; Liu, J. G.; Wang, G.
2007-12-01
We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RETs), intended for use in an automated recognition system for RETs in peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and recognized in terms of gray entropy and the area of connected regions. In the watershed algorithm, judgment conditions are controlled according to the character of the image, and the segmentation is performed by morphological subtraction. The algorithm was simulated with MATLAB software. Automated and manual scoring gave similar results, with good correlation (r=0.956) between the methods over 50 RET images. The results indicate that the algorithm for peripheral blood RETs is comparable to conventional manual scoring and superior in objectivity. The algorithm avoids time-consuming calculations such as ultra-erosion and region growing, which consequently speeds up the computation.
Automated SEM Modal Analysis Applied to the Diogenites
NASA Technical Reports Server (NTRS)
Bowman, L. E.; Spilde, M. N.; Papike, James J.
1996-01-01
Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.
Manoni, Fabio; Gessoni, Gianluca; Fogazzi, Giovanni Battista; Alessio, Maria Grazia; Caleffi, Alberta; Gambaro, Giovanni; Epifani, Maria Grazia; Pieretti, Barbara; Perego, Angelo; Ottomano, Cosimo; Saccani, Graziella; Valverde, Sara; Secchiero, Sandra
2016-01-01
With these guidelines, the Intersociety Urinalysis Group (GIAU) aims to stimulate the following aspects: improvement and standardization of the analytical approach to physical, chemical and morphological urine examination (ECMU); improvement of the chemical analysis of urine, with particular regard to reconsidering the diagnostic significance of the parameters traditionally evaluated in dipstick analysis, together with an increasing awareness of the limits of sensitivity and specificity of this analytical method; increased awareness of the importance of professional skills in the field of urinary morphology and of the relationship with clinicians; implementation of a policy of analytical quality evaluation that uses, in addition to traditional internal and external controls, a program for the evaluation of morphological competence; stimulation of the diagnostics industry to focus research efforts, methodological development and instrumentation on the needs of clinical diagnosis; and emphasis on the value added to ECMU by automated analyzers for the study of the morphology of the corpuscular fraction of urine. The hope is to revalue the enormous diagnostic potential of ECMU, implementing a urinalysis tailored to the personalized diagnostic needs of each patient.
Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean
2015-05-01
The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed in order to examine the viability discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally-dying Jurkat cell sample decreased below 70%, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to the fluorescence-based method. These morphological observations can potentially explain the differences in viability measurement between the two methods.
NASA Astrophysics Data System (ADS)
Pawlik, M. M.; Wild, V.; Walcher, C. J.; Johansson, P. H.; Villforth, C.; Rowlands, K.; Mendez-Abreu, J.; Hewlett, T.
2016-03-01
We present a new morphological indicator designed for automated recognition of galaxies with faint asymmetric tidal features suggestive of an ongoing or past merger. We use the new indicator, together with pre-existing diagnostics of galaxy structure to study the role of galaxy mergers in inducing (post-) starburst spectral signatures in local galaxies, and investigate whether (post-) starburst galaxies play a role in the build-up of the `red sequence'. Our morphological and structural analysis of an evolutionary sample of 335 (post-) starburst galaxies in the Sloan Digital Sky Survey DR7 with starburst ages 0 < tSB < 0.6 Gyr, shows that 45 per cent of galaxies with young starbursts (tSB < 0.1 Gyr) show signatures of an ongoing or past merger. This fraction declines with starburst age, and we find a good agreement between automated and visual classifications. The majority of the oldest (post-) starburst galaxies in our sample (tSB ˜ 0.6 Gyr) have structural properties characteristic of early-type discs and are not as highly concentrated as the fully quenched galaxies commonly found on the `red sequence' in the present day Universe. This suggests that, if (post-) starburst galaxies are a transition phase between active star-formation and quiescence, they do not attain the structure of presently quenched galaxies within the first 0.6 Gyr after the starburst.
López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón
2009-10-01
The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
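As an illustration of the correction-factor idea, the sketch below fits a linear regression between roundness measured on compressed and uncompressed images and applies the resulting correction; the simulated compression bias is an assumption, not the reported effect size.

# Sketch of deriving a per-compression-level correction for roundness via linear
# regression; the simulated bias below stands in for the reported JPEG effect.
import numpy as np

rng = np.random.default_rng(9)
roundness_tiff = rng.uniform(0.5, 1.0, size=65)                         # uncompressed measurements
roundness_jpeg = 0.9 * roundness_tiff + 0.05 + rng.normal(0, 0.01, 65)  # assumed compression bias

slope, intercept = np.polyfit(roundness_jpeg, roundness_tiff, deg=1)
corrected = slope * roundness_jpeg + intercept                          # correction applied in the macro

print("mean absolute error before correction:",
      round(np.mean(np.abs(roundness_jpeg - roundness_tiff)), 4))
print("mean absolute error after correction:",
      round(np.mean(np.abs(corrected - roundness_tiff)), 4))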
Automated inspection of bread and loaves
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.
1993-08-01
The prospects for building practical automated inspection machines, capable of detecting the following faults in ordinary, everyday loaves are reviewed: (1) foreign bodies, using X-rays, (2) texture changes, using glancing illumination, mathematical morphology and Neural Net learning techniques, and (3) shape deformations, using structured lighting and simple geometry.
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
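The sketch below computes two of the boundary-agreement metrics named above, the Dice coefficient and the Hausdorff distance, on toy binary masks (for brevity it uses all mask voxels rather than only boundary voxels).

# Sketch of reproducibility metrics (Dice coefficient, Hausdorff distance) computed
# on two toy binary masks of a small structure.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.zeros((64, 64), dtype=bool)
b = np.zeros((64, 64), dtype=bool)
a[20:40, 20:36] = True                        # segmentation from session 1
b[21:41, 22:38] = True                        # segmentation from session 2

dice = 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pts_a = np.argwhere(a)
pts_b = np.argwhere(b)
hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                directed_hausdorff(pts_b, pts_a)[0])

print("Dice coefficient: %.3f, Hausdorff distance: %.1f voxels" % (dice, hausdorff))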
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture
2013-01-01
[Fragmentary record; figure caption: Atchafalaya River, LA.] ...traverse morphologically smooth landscapes including rivers in sand or ice. Within these limitations, we hold that this technique represents a valuable...
Automated 3D Phenotype Analysis Using Data Mining
Plyusnin, Ilya; Evans, Alistair R.; Karme, Aleksis; Gionis, Aristides; Jernvall, Jukka
2008-01-01
The ability to analyze and classify three-dimensional (3D) biological morphology has lagged behind the analysis of other biological data types such as gene sequences. Here, we introduce the techniques of data mining to the study of 3D biological shapes to bring the analyses of phenomes closer to the efficiency of studying genomes. We compiled five training sets of highly variable morphologies of mammalian teeth from the MorphoBrowser database. Samples were labeled either by dietary class or by conventional dental types (e.g. carnassial, selenodont). We automatically extracted a multitude of topological attributes using Geographic Information Systems (GIS)-like procedures that were then used in several combinations of feature selection schemes and probabilistic classification models to build and optimize classifiers for predicting the labels of the training sets. In terms of classification accuracy, computational time and size of the feature sets used, non-repeated best-first search combined with 1-nearest neighbor classifier was the best approach. However, several other classification models combined with the same searching scheme proved practical. The current study represents a first step in the automatic analysis of 3D phenotypes, which will be increasingly valuable with the future increase in 3D morphology and phenomics databases. PMID:18320060
New insights in morphological analysis for managing activated sludge systems.
Oliveira, Pedro; Alliet, Marion; Coufort-Saudejaud, Carole; Frances, Christine
2018-06-01
In activated sludge (AS) process, the impact of the operational parameters on process efficiency is assumed to be correlated with the sludge properties. This study provides a better insight into these interactions by subjecting a laboratory-scale AS system to a sequence of operating condition modifications enabling typical situations of a wastewater treatment plant to be represented. Process performance was assessed and AS floc morphology (size, circularity, convexity, solidity and aspect ratio) was quantified by measuring 100,000 flocs per sample with an automated image analysis technique. Introducing 3D distributions, which combine morphological properties, allowed the identification of a filamentous bulking characterized by a floc population shift towards larger sizes and lower solidity and circularity values. Moreover, a washout phenomenon was characterized by smaller AS flocs and an increase in their solidity. Recycle ratio increase and COD:N ratio decrease both promoted a slight reduction of floc sizes and a constant evolution of circularity and convexity values. The analysis of the volume-based 3D distributions turned out to be a smart tool to combine size and shape data, allowing a deeper understanding of the dynamics of floc structure under process disturbances.
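A minimal sketch of a volume-weighted 3D distribution combining floc size, circularity and solidity follows; the per-floc measurements, bin counts and cubic volume weighting are illustrative assumptions, not the study's exact procedure.

# Minimal sketch of a volume-weighted 3D distribution combining floc size, circularity
# and solidity; the per-floc measurements here are randomly generated placeholders.
import numpy as np

rng = np.random.default_rng(10)
n_flocs = 100_000
size = rng.lognormal(mean=3.0, sigma=0.5, size=n_flocs)      # equivalent diameter, um
circularity = rng.beta(5, 2, size=n_flocs)
solidity = rng.beta(8, 2, size=n_flocs)

# Weight each floc by an approximate volume so that large flocs dominate the distribution.
volume = size ** 3
hist, edges = np.histogramdd(
    np.column_stack([size, circularity, solidity]),
    bins=(30, 20, 20),
    weights=volume)
hist /= hist.sum()                                            # volume-based 3D distribution

# A shift of mass towards large sizes and low solidity would flag filamentous bulking.
print("distribution shape:", hist.shape, "total probability:", round(hist.sum(), 3))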
Towards automated processing of clinical Finnish: sublanguage analysis and a rule-based parser.
Laippala, Veronika; Ginter, Filip; Pyysalo, Sampo; Salakoski, Tapio
2009-12-01
In this paper, we present steps taken towards more efficient automated processing of clinical Finnish, focusing on daily nursing notes in a Finnish Intensive Care Unit (ICU). First, we analyze ICU Finnish as a sublanguage, identifying its specific features facilitating, for example, the development of a specialized syntactic analyser. The identified features include frequent omission of finite verbs, limitations in allowed syntactic structures, and domain-specific vocabulary. Second, we develop a formal grammar and a parser for ICU Finnish, thus providing better tools for the development of further applications in the clinical domain. The grammar is implemented in the LKB system in a typed feature structure formalism. The lexicon is automatically generated based on the output of the FinTWOL morphological analyzer adapted to the clinical domain. As an additional experiment, we study the effect of using Finnish constraint grammar to reduce the size of the lexicon. The parser construction thus makes efficient use of existing resources for Finnish. The grammar currently covers 76.6% of ICU Finnish sentences, producing highly accurate best-parse analyses with an F-score of 91.1%. We find that building a parser for the highly specialized domain sublanguage is not only feasible, but also surprisingly efficient, given an existing morphological analyzer with broad vocabulary coverage. The resulting parser enables a deeper analysis of the text than was previously possible.
Automated diagnosis of interstitial lung diseases and emphysema in MDCT imaging
NASA Astrophysics Data System (ADS)
Fetita, Catalin; Chang Chien, Kuang-Che; Brillet, Pierre-Yves; Prêteux, Françoise
2007-09-01
Diffuse lung diseases (DLD) include a heterogeneous group of non-neoplastic diseases resulting from damage to the lung parenchyma by varying patterns of inflammation. Characterization and quantification of DLD severity using MDCT, mainly in interstitial lung diseases and emphysema, is an important issue in clinical research for the evaluation of new therapies. This paper develops a 3D automated approach for detection and diagnosis of diffuse lung diseases such as fibrosis/honeycombing, ground glass and emphysema. The proposed methodology combines multi-resolution 3D morphological filtering (exploiting the sup-constrained connection cost operator) and graph-based classification for a full characterization of the parenchymal tissue. The morphological filtering performs a multi-level segmentation of the low- and medium-attenuated lung regions as well as their classification with respect to a granularity criterion (multi-resolution analysis). The original intensity range of the CT data volume is thus reduced in the segmented data to a number of levels equal to the resolution depth used (generally ten levels). The specificity of such morphological filtering is to extract tissue patterns locally contrasting with their neighborhood and of size inferior to the resolution depth, while preserving their original shape. A multi-valued hierarchical graph describing the segmentation result is built up according to the resolution level and the adjacency of the different segmented components. The graph nodes are then enriched with the textural information carried by their associated components. A graph analysis-reorganization based on the node attributes delivers the final classification of the lung parenchyma into normal and ILD/emphysematous regions. It also makes it possible to discriminate between different types, or development stages, among the same class of diseases.
NASA Astrophysics Data System (ADS)
Picard de Muller, Gaël; Ait-Belkacem, Rima; Bonnel, David; Longuespée, Rémi; Stauber, Jonathan
2017-12-01
Mass spectrometry imaging datasets are mostly analyzed in terms of average intensity in regions of interest. However, biological tissues have different morphologies with several sizes, shapes, and structures. The important biological information, contained in this highly heterogeneous cellular organization, could be hidden by analyzing the average intensities. An analytical treatment of morphology would help to reveal such information, describe tissue models, and support the identification of biomarkers. This study describes an informatics approach for the extraction and identification of mass spectrometry image features and its application to sample analysis and modeling. For the proof of concept, two different tissue types (healthy kidney and CT-26 xenograft tumor tissues) were imaged and analyzed. A mouse kidney model and tumor model were generated using morphometric information (number of objects and total surface). The morphometric information was used to identify m/z values that have a heterogeneous distribution. This seems a worthwhile pursuit, as clonal heterogeneity in a tumor is of clinical relevance. This study provides a new approach to finding biomarkers or supporting tissue classification with more information.
Automated synthesis, insertion and detection of polyps for CT colonography
NASA Astrophysics Data System (ADS)
Sezille, Nicolas; Sadleir, Robert J. T.; Whelan, Paul F.
2003-03-01
CT Colonography (CTC) is a new non-invasive colon imaging technique which has the potential to replace conventional colonoscopy for colorectal cancer screening. A novel system which facilitates automated detection of colorectal polyps at CTC is introduced. As exhaustive testing of such a system using real patient data is not feasible, more complete testing is achieved through synthesis of artificial polyps and insertion into real datasets. The polyp insertion is semi-automatic: candidate points are manually selected using a custom GUI, suitable points are determined automatically from an analysis of the local neighborhood surrounding each of the candidate points. Local density and orientation information are used to generate polyps based on an elliptical model. Anomalies are identified from the modified dataset by analyzing the axial images. Detected anomalies are classified as potential polyps or natural features using 3D morphological techniques. The final results are flagged for review. The system was evaluated using 15 scenarios. The sensitivity of the system was found to be 65% with 34% false positive detections. Automated diagnosis at CTC is possible and thorough testing is facilitated by augmenting real patient data with computer generated polyps. Ultimately, automated diagnosis will enhance standard CTC and increase performance.
[Morphometry of pulmonary tissue: From manual to high throughput automation].
Sallon, C; Soulet, D; Tremblay, Y
2017-12-01
Weibel's research has shown that any alteration of the pulmonary structure has effects on function. This demonstration required a quantitative analysis of lung structures called morphometry. This is possible thanks to stereology, a set of methods based on principles of geometry and statistics. His work has helped to better understand the morphological harmony of the lung, which is essential for its proper functioning. An imbalance leads to pathophysiology such as chronic obstructive pulmonary disease in adults and bronchopulmonary dysplasia in neonates. It is by studying this imbalance that new therapeutic approaches can be developed. These advances are achievable only through morphometric analytical methods, which are increasingly precise and focused, in particular thanks to the high-throughput automation of these methods. This review makes a comparison between an automated method that we developed in the laboratory and semi-manual methods of morphometric analyzes. The automation of morphometric measurements is a fundamental asset in the study of pulmonary pathophysiology because it is an assurance of robustness, reproducibility and speed. This tool will thus contribute significantly to the acceleration of the race for the development of new drugs. Copyright © 2017 SPLF. Published by Elsevier Masson SAS. All rights reserved.
Knowles, David W; Biggin, Mark D
2013-01-01
Animals comprise dynamic three-dimensional arrays of cells that express gene products in intricate spatial and temporal patterns that determine cellular differentiation and morphogenesis. A rigorous understanding of these developmental processes requires automated methods that quantitatively record and analyze complex morphologies and their associated patterns of gene expression at cellular resolution. Here we summarize light microscopy-based approaches to establish permanent, quantitative datasets-atlases-that record this information. We focus on experiments that capture data for whole embryos or large areas of tissue in three dimensions, often at multiple time points. We compare and contrast the advantages and limitations of different methods and highlight some of the discoveries made. We emphasize the need for interdisciplinary collaborations and integrated experimental pipelines that link sample preparation, image acquisition, image analysis, database design, visualization, and quantitative analysis. Copyright © 2013 Wiley Periodicals, Inc.
Computer analysis of digital sky surveys using citizen science and manual classification
NASA Astrophysics Data System (ADS)
Kuminski, Evan; Shamir, Lior
2015-01-01
As current and future digital sky surveys such as SDSS, LSST, DES, Pan-STARRS and Gaia create increasingly massive databases containing millions of galaxies, there is a growing need to be able to efficiently analyze these data. One way to do this is through manual analysis; however, this may be insufficient given the extremely vast pipelines of astronomical images generated by present and future surveys. Some efforts have been made to use citizen science to classify galaxies by their morphology on a larger scale than individual or small groups of scientists can. While citizen science efforts such as Zooniverse have helped obtain reasonably accurate morphological information about large numbers of galaxies, they cannot scale to provide complete analysis of the billions of galaxy images that will be collected by future ventures such as LSST. Since current forms of manual classification cannot scale to the masses of data collected by digital sky surveys, it is clear that in order to keep up with the growing databases some form of automation of the data analysis will be required, working either independently or in combination with human analysis such as citizen science. Here we describe a computer vision method that can automatically analyze galaxy images and deduce galaxy morphology. Experiments using Galaxy Zoo 2 data show that the performance of the method increases as the degree of agreement between the citizen scientists gets higher, providing a cleaner dataset. For several morphological features, such as the spirality of the galaxy, the algorithm agreed with the citizen scientists on around 95% of the samples. However, the method failed to analyze some of the morphological features, such as the number of spiral arms, and provided an accuracy of just ~36%.
Liston, Adam D; De Munck, Jan C; Hamandi, Khalid; Laufs, Helmut; Ossenblok, Pauly; Duncan, John S; Lemieux, Louis
2006-07-01
Simultaneous acquisition of EEG and fMRI data enables the investigation of the hemodynamic correlates of interictal epileptiform discharges (IEDs) during the resting state in patients with epilepsy. This paper addresses two issues: (1) the semi-automation of IED classification in statistical modelling for fMRI analysis and (2) the improvement of IED detection to increase experimental fMRI efficiency. For patients with multiple IED generators, sensitivity to IED-correlated BOLD signal changes can be improved when the fMRI analysis model distinguishes between IEDs of differing morphology and field. In an attempt to reduce the subjectivity of visual IED classification, we implemented a semi-automated system, based on the spatio-temporal clustering of EEG events. We illustrate the technique's usefulness using EEG-fMRI data from a subject with focal epilepsy in whom 202 IEDs were visually identified and then clustered semi-automatically into four clusters. Each cluster of IEDs was modelled separately for the purpose of fMRI analysis. This revealed IED-correlated BOLD activations in distinct regions corresponding to three different IED categories. In a second step, Signal Space Projection (SSP) was used to project the scalp EEG onto the dipoles corresponding to each IED cluster. This resulted in 123 previously unrecognised IEDs, the inclusion of which, in the General Linear Model (GLM), increased the experimental efficiency as reflected by significant BOLD activations. We have also shown that the detection of extra IEDs is robust in the face of fluctuations in the set of visually detected IEDs. We conclude that automated IED classification can result in more objective fMRI models of IEDs and significantly increased sensitivity.
Groupwise shape analysis of the hippocampus using spectral matching
NASA Astrophysics Data System (ADS)
Shakeri, Mahsa; Lombaert, Hervé; Lippé, Sarah; Kadoury, Samuel
2014-03-01
The hippocampus is a prominent subcortical feature of interest in many neuroscience studies. Its subtle morphological changes often presage illnesses, including Alzheimer's disease, schizophrenia or epilepsy. Precisely locating structural differences requires a reliable correspondence between shapes across a population. In this paper, we propose an automated method for groupwise hippocampal shape analysis based on a spectral decomposition of a group of shapes to solve the correspondence problem between sets of meshes. The framework generates diffeomorphic correspondence maps across a population, which enables us to create a mean shape. Morphological changes are then located between two groups of subjects. The performance of the proposed method was evaluated on a dataset of 42 hippocampus shapes and compared with a state-of-the-art structural shape analysis approach using spherical harmonics. Difference maps between the mean shapes of the two test groups demonstrate that the two approaches yield results with insignificant differences, while Gaussian curvature measures calculated between matched vertices showed a better fit and reduced variability with spectral matching.
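A minimal sketch of the spectral-decomposition step that such matching builds on, using a generic combinatorial graph Laplacian of the mesh connectivity rather than the paper's exact operator; the low-order eigenvectors give the embedding in which shapes across the group can be matched.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def spectral_embedding(vertices, faces, k=5):
    """First k non-trivial Laplacian eigenvectors of a triangle mesh
    (a generic graph-Laplacian variant, not the authors' exact operator)."""
    n = len(vertices)
    # Undirected edges from the triangle faces.
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    rows = np.concatenate([e[:, 0], e[:, 1]])
    cols = np.concatenate([e[:, 1], e[:, 0]])
    A = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    A.data[:] = 1.0                                   # collapse duplicate edges to weight 1
    L = diags(np.asarray(A.sum(axis=1)).ravel()) - A  # combinatorial Laplacian
    vals, vecs = eigsh(L, k=k + 1, which='SM')        # smallest eigenvalues
    return vecs[:, 1:k + 1]                           # drop the constant eigenvector
```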
Segmentation and feature extraction of retinal vascular morphology
NASA Astrophysics Data System (ADS)
Leopold, Henry A.; Orchard, Jeff; Zelek, John; Lakshminarayanan, Vasudevan
2017-02-01
Analysis of retinal fundus images is essential for physicians, optometrists and ophthalmologists in the diagnosis, care and treatment of patients. The first step of almost all forms of automated fundus analysis begins with the segmentation and subtraction of the retinal vasculature, while analysis of that same structure can aid in the diagnosis of certain retinal and cardiovascular conditions, such as diabetes or stroke. This paper investigates the use of a convolutional neural network as a multi-channel classifier of retinal vessels using DRIVE, a database of fundus images. The result of the network with the application of a confidence threshold was slightly below the second observer and gold standard, with an accuracy of 0.9419 and an area under the ROC curve of 0.9707. The output of the network with no post-processing boasted the highest sensitivity found in the literature, with a score of 0.9568 and a good area under the ROC curve of 0.9689. The high sensitivity of the system makes it suitable for longitudinal morphology assessments, disease detection and other similar tasks.
Missert, Nancy; Kotula, Paul G.; Rye, Michael; ...
2017-02-15
We used a focused ion beam to obtain cross-sectional specimens from both magnetic multilayer and Nb/Al-AlOx/Nb Josephson junction devices for characterization by scanning transmission electron microscopy (STEM) and energy dispersive X-ray spectroscopy (EDX). An automated multivariate statistical analysis of the EDX spectral images produced chemically unique component images of individual layers within the multilayer structures. STEM imaging elucidated distinct variations in film morphology, interface quality, and/or etch artifacts that could be correlated to magnetic and/or electrical properties measured on the same devices.
Analytical Ultrasonics in Materials Research and Testing
NASA Technical Reports Server (NTRS)
Vary, A.
1986-01-01
Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.
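As a rough illustration of a grayscale-morphology-driven airway extraction (not the authors' algorithm), the sketch below thresholds air voxels after a grayscale closing and keeps the connected component containing a trachea seed; the threshold value and default seed location are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import ball, closing

def rough_airway_mask(volume, air_threshold=-800, seed=None):
    """Very simplified airway-lumen extraction from a micro-CT volume with
    HU-like values: grayscale closing, air thresholding, and retention of
    the connected component containing a seed inside the trachea."""
    smoothed = closing(volume, ball(1))          # grayscale morphological closing
    air = smoothed < air_threshold               # candidate lumen voxels
    labels, _ = ndi.label(air)
    if seed is None:                             # default: top-centre of the volume
        seed = (0, volume.shape[1] // 2, volume.shape[2] // 2)
    return labels == labels[tuple(seed)]
```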
Marbà-Ardébol, Anna-Maria; Emmerich, Jörn; Muthig, Michael; Neubauer, Peter; Junne, Stefan
2018-05-15
The morphology of yeast cells changes during budding, depending on the growth rate and cultivation conditions. A photo-optical microscope was adapted and used to observe such morphological changes of individual cells directly in the cell suspension. In order to obtain statistically representative samples of the population without the influence of sampling, in situ microscopy (ISM) was applied in the different phases of a Saccharomyces cerevisiae batch cultivation. The real-time measurement was performed by coupling a photo-optical probe to an automated image analysis based on a neural network approach. Automatic cell recognition and classification of budding and non-budding cells was conducted successfully. Deviations between automated and manual counting were small. A differentiation of growth activity across all process stages of a batch cultivation in complex media became feasible. An increased homogeneity among the population during the growth phase was clearly observable. At growth retardation, the portion of smaller cells increased due to reduced bud formation. The maturation state of the cells was monitored by determining the budding index as the ratio between the number of cells detected with buds and the total number of cells. A linear correlation between the budding index as monitored with ISM and the growth rate was found. It is shown that ISM is a meaningful analytical tool, as the budding index can provide valuable information about the growth activity of a yeast cell, e.g. in seed breeding or during any other cultivation process. The determination of single-cell size and shape distributions provided information on the morphological heterogeneity among the populations. The ability to track changes in cell morphology directly on-line enables new perspectives for monitoring and control, both in process development and on a production scale.
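The budding index described above is a simple ratio of classified counts; a minimal sketch:

```python
def budding_index(n_budding, n_total):
    """Budding index = budding cells / all detected cells, as described above."""
    return n_budding / n_total if n_total else float('nan')

# e.g. 180 budding cells among 520 detected cells -> ~0.35
print(budding_index(180, 520))
```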
DOE Office of Scientific and Technical Information (OSTI.GOV)
Missert, Nancy; Kotula, Paul G.; Rye, Michael
We used a focused ion beam to obtain cross-sectional specimens from both magnetic multilayer and Nb/Al-AlOx/Nb Josephson junction devices for characterization by scanning transmission electron microscopy (STEM) and energy dispersive X-ray spectroscopy (EDX). An automated multivariate statistical analysis of the EDX spectral images produced chemically unique component images of individual layers within the multilayer structures. STEM imaging elucidated distinct variations in film morphology, interface quality, and/or etch artifacts that could be correlated to magnetic and/or electrical properties measured on the same devices.
2012-03-01
the three main sub-systems. The Mitsubishi RV12SVL 6-axis robot arm has a 54'' reach, which allows it to readily move a 2'' diameter stainless steel sample holder (Figure 2A) between sample exchange points on the Robo-Met.3D, the Tescan SEM, and an additional sample transfer stand that enables... Rowenhorst DJ, et al. (2006) Crystallographic and morphological analysis of coarse martensite: Combining EBSD and serial sectioning. Scripta
Harder, Nathalie; Mora-Bermúdez, Felipe; Godinez, William J; Wünsche, Annelie; Eils, Roland; Ellenberg, Jan; Rohr, Karl
2009-11-01
Live-cell imaging allows detailed dynamic cellular phenotyping for cell biology and, in combination with small molecule or drug libraries, for high-content screening. Fully automated analysis of live cell movies has been hampered by the lack of computational approaches that allow tracking and recognition of individual cell fates over time in a precise manner. Here, we present a fully automated approach to analyze time-lapse movies of dividing cells. Our method dynamically categorizes cells into seven phases of the cell cycle and five aberrant morphological phenotypes over time. It reliably tracks cells and their progeny and can thus measure the length of mitotic phases and detect cause and effect if mitosis goes awry. We applied our computational scheme to annotate mitotic phenotypes induced by RNAi gene knockdown of CKAP5 (also known as ch-TOG) or by treatment with the drug nocodazole. Our approach can be readily applied to comparable assays aiming at uncovering the dynamic cause of cell division phenotypes.
Araki, Tadashi; Jain, Pankaj K; Suri, Harman S; Londhe, Narendra D; Ikeda, Nobutaka; El-Baz, Ayman; Shrivastava, Vimal K; Saba, Luca; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Suri, Jasjit S
2017-01-01
Stroke risk stratification based on grayscale morphology of the ultrasound carotid wall has recently shown promise in the classification of high-risk versus low-risk plaques, or symptomatic versus asymptomatic plaques. In previous studies, this stratification has been based mainly on analysis of the far wall of the carotid artery. Due to the multifocal nature of atherosclerotic disease, plaque growth is not restricted to the far wall alone. This paper presents a new approach for stroke risk assessment by integrating assessment of both the near and far walls of the carotid artery using grayscale morphology of the plaque. Further, this paper presents a scientific validation system for stroke risk assessment. Neither of these innovations has been presented before. The methodology consists of an automated segmentation system for the near-wall and far-wall regions in grayscale carotid B-mode ultrasound scans. Sixteen grayscale texture features are computed and fed into the machine learning system. The training system utilizes the lumen diameter to create ground truth labels for the stratification of stroke risk. A cross-validation procedure is adopted to obtain the machine learning testing classification accuracy using three partition protocols (K = 5, K = 10, and jackknife). The mean classification accuracy over all partition protocols for the automated system in the far and near walls is 95.08% and 93.47%, respectively. The corresponding accuracies for the manual system are 94.06% and 92.02%, respectively. The precision of merit of the automated machine learning system, when compared against the manual risk assessment system, is 98.05% and 97.53% for the far and near walls, respectively. The ROC of the risk assessment system for the far and near walls is close to 1.0, demonstrating high accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Traditional methods for measuring river valley and channel morphology require intensive ground-based surveys which are often expensive, time consuming, and logistically difficult to implement. The number of surveys required to assess the hydrogeomorphic structure of large river n...
NASA Astrophysics Data System (ADS)
Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.
2012-12-01
Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
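The validation metrics quoted above (Dice overlap and mean absolute surface distance) can be computed as in the following sketch, assuming boolean voxel masks and a voxel spacing given in millimetres:

```python
import numpy as np
from scipy import ndimage as ndi

def dice(a, b):
    """Dice overlap between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean absolute surface distance between two masks (in mm if
    `spacing` is in mm). Surfaces are taken as the masks' one-voxel borders."""
    def surface(m):
        return m & ~ndi.binary_erosion(m)
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    dt_a = ndi.distance_transform_edt(~sa, sampling=spacing)
    dt_b = ndi.distance_transform_edt(~sb, sampling=spacing)
    return 0.5 * (dt_b[sa].mean() + dt_a[sb].mean())
```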
Automated segmentation of murine lung tumors in x-ray micro-CT images
NASA Astrophysics Data System (ADS)
Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis
2014-03-01
Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully-automated aids found to be almost non-existent. We present a new approach to automate the segmentation of murine lung tumors designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies that addresses the specific requirements of such studies, as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge-enhancing and vessel-enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing the obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, and consists of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on the initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity = 86%, specificity = 89%) and structural recovery (Dice similarity = 0.88) when compared against manual specialist annotation.
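A minimal sketch of the second-stage candidate extraction described above (Otsu thresholding, mathematical morphology, marker-driven watershed) using scikit-image; the structuring-element size and marker rule are illustrative choices, not the study's parameters:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, ball
from skimage.segmentation import watershed

def candidate_objects(roi):
    """Split a region of interest into disconnected candidate objects with
    Otsu thresholding, a morphological opening and a marker-driven watershed."""
    mask = roi > threshold_otsu(roi)               # global Otsu threshold
    mask = binary_opening(mask, ball(1))           # remove small spurious voxels
    distance = ndi.distance_transform_edt(mask)    # peaks mark object centres
    markers, _ = ndi.label(distance > 0.7 * distance.max())
    labels = watershed(-distance, markers, mask=mask)
    return labels                                  # integer label per candidate
```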
NASA Astrophysics Data System (ADS)
Vatle, S. S.
2015-12-01
Frequent and up-to-date glacier outlines are needed for many applications of glaciology, not only glacier area change analysis, but also for masks in volume or velocity analysis, for the estimation of water resources and as model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR Coherence data is used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example shows using a high-resolution LiDAR derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and NIR/SWIR band ratio are used to map clean ice over the entire country but the thresholds are calculated automatically based on a histogram of each image subset. This means that in theory any Landsat scene can be inputted and the clean ice can be automatically extracted. Debris-covered ice can be included semi-automatically using contextual and morphological information.
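A minimal sketch of the automated clean-ice step, assuming Otsu's method as the histogram-based threshold rule (the abstract does not name the exact rule) and pre-extracted green and SWIR bands:

```python
import numpy as np
from skimage.filters import threshold_otsu

def clean_ice_mask(green, swir):
    """Clean-ice mask from the NDSI, with the threshold derived automatically
    from the image histogram; Otsu is one possible automatic rule."""
    green = green.astype(float)
    swir = swir.astype(float)
    ndsi = (green - swir) / (green + swir + 1e-9)   # avoid division by zero
    t = threshold_otsu(ndsi[np.isfinite(ndsi)])     # scene-specific threshold
    return ndsi > t
```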
NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images
Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.
2007-01-01
Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152
NeuronMetrics: software for semi-automated processing of cultured neuron images.
Narro, Martha L; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L
2007-03-23
Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of approximately 60 2D images is 1.0-2.5 h, from a folder of images to a table of numeric data. NeuronMetrics' output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery.
Ferrara, Giuseppe; Mercedes Panizol, Maria; Mazzone, Marja; Delia Pequeneze, Maria; Reviakina, Vera
2014-12-01
The aim of this study was to compare the identification of clinically relevant yeasts by the Vitek YBC and Microscan Walk Away RYID automated methods with conventional phenotypic methods. One hundred and ninety-three yeast strains isolated from clinical samples and five control strains were used. All the yeasts were identified by the automated methods previously mentioned and by conventional phenotypic methods such as carbohydrate assimilation, visualization of microscopic morphology on corn meal agar and the use of chromogenic agar. Variables were assessed by 2 x 2 contingency tables, McNemar's chi-square test and the kappa index, and concordance values were calculated, as well as major and minor errors for the automated methods. Yeasts were divided into two groups: (1) frequent isolation and (2) rare isolation. The Vitek YBC and Microscan Walk Away RYID systems were concordant in 88.4% and 85.9% of cases, respectively, when compared to conventional phenotypic methods. Although both automated systems can be used for yeast identification, the presence of major and minor errors indicates the possibility of misidentifications; therefore, the operator of this equipment must use, in parallel, phenotypic tests such as visualization of microscopic morphology on corn meal agar and chromogenic agar, especially for infrequently isolated yeasts. Automated systems are a valuable tool; however, the expertise and judgment of the microbiologist remain an important strength to ensure the quality of the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burger, D.E.
1979-11-01
The extraction of morphological parameters from biological cells by analysis of light-scatter patterns is described. A light-scattering measurement system has been designed and constructed that allows one to visually examine and photographically record biological cells or cell models and measure the light-scatter pattern of an individual cell or cell model. Using a laser or conventional illumination, the imaging system consists of a modified microscope with a 35 mm camera attached to record the cell image or light-scatter pattern. Models of biological cells were fabricated. The dynamic range and angular distributions of light scattered from these models were compared to calculated distributions. Spectrum analysis techniques applied to the light-scatter data give the sought-after morphological cell parameters. These results compared favorably to shape parameters of the fabricated cell models, confirming the mathematical model procedure. For nucleated biological material, correct nuclear and cell eccentricity as well as the nuclear and cytoplasmic diameters were determined. A method for comparing the flow equivalent of nuclear and cytoplasmic size to the actual dimensions is shown. This light-scattering experiment provides baseline information for automated cytology. In its present application, it involves correlating average size as measured in flow cytology to the actual dimensions determined from this technique. (ERB)
Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching
2016-01-01
Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, there has been no fully-automated species identification model with an accuracy higher than 80%. The purpose of the current study was to develop a fully-automated model, based on the otolith contours, to identify fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. Short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant analysis (DA), as a classification technique, was used to train and test the model based on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% for all cases. In addition, the effects of STFT variables on the performance of the identification model were explored in this study. Conclusions. The short-time Fourier transform could determine important features of the otolith outlines. The fully-automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model codes can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be used for more species and families in future studies.
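A minimal sketch of the feature-extraction and classification idea (an STFT of a contour signature followed by discriminant analysis); the contour resampling length and window size are illustrative, not the study's settings:

```python
import numpy as np
from scipy.signal import stft
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def contour_stft_features(contour_xy, n_points=256, nperseg=32):
    """STFT magnitude features of an otolith outline, reduced first to a
    radial-distance signature around the centroid."""
    c = np.asarray(contour_xy, float)
    r = np.hypot(*(c - c.mean(axis=0)).T)                     # radius from centroid
    r = np.interp(np.linspace(0, len(r), n_points, endpoint=False),
                  np.arange(len(r)), r)                       # uniform resampling
    _, _, Z = stft(r, nperseg=nperseg)
    return np.abs(Z).ravel()

# Discriminant-analysis classification on such features (X: feature rows, y: species):
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train); clf.score(X_test, y_test)
```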
Jacob, Joseph; Bartholmai, Brian J; Rajagopalan, Srinivasan; Brun, Anne Laure; Egashira, Ryoko; Karwoski, Ronald; Kokosi, Maria; Wells, Athol U; Hansell, David M
2016-11-23
To evaluate computer-based computed tomography (CT) analysis (CALIPER) against visual CT scoring and pulmonary function tests (PFTs) when predicting mortality in patients with connective tissue disease-related interstitial lung disease (CTD-ILD), and to identify outcome differences between distinct CTD-ILD groups derived following automated stratification of CALIPER variables. A total of 203 consecutive patients with assorted CTD-ILDs had CT parenchymal patterns evaluated by CALIPER and visual CT scoring: honeycombing, reticular pattern, ground glass opacities, pulmonary vessel volume, emphysema, and traction bronchiectasis. CT scores were evaluated against pulmonary function tests: forced vital capacity, diffusing capacity for carbon monoxide, carbon monoxide transfer coefficient, and composite physiologic index for mortality analysis. Automated stratification of CALIPER-CT variables was evaluated in place of, and alongside, forced vital capacity and diffusing capacity for carbon monoxide in the ILD gender, age, physiology (ILD-GAP) model using receiver operating characteristic curve analysis. Cox regression analyses identified four independent predictors of mortality: patient age (P < 0.0001), smoking history (P = 0.0003), carbon monoxide transfer coefficient (P = 0.003), and pulmonary vessel volume (P < 0.0001). Automated stratification of CALIPER variables identified three morphologically distinct groups which were stronger predictors of mortality than all CT and functional indices. The stratified-CT model substituted automated stratified groups for functional indices in the ILD-GAP model and maintained model strength (area under the curve (AUC) = 0.74, P < 0.0001) versus ILD-GAP (AUC = 0.72, P < 0.0001). Combining automated stratified groups with the ILD-GAP model (stratified CT-GAP model) strengthened predictions of 1- and 2-year mortality: ILD-GAP (AUC = 0.87 and 0.86, respectively); stratified CT-GAP (AUC = 0.89 and 0.88, respectively). CALIPER-derived pulmonary vessel volume is an independent predictor of mortality across all CTD-ILD patients. Furthermore, automated stratification of CALIPER CT variables represents a novel method of prognostication at least as robust as PFTs in CTD-ILD patients.
NASA Astrophysics Data System (ADS)
Yu, Peter; Eyles, Nick; Sookhan, Shane
2015-10-01
Resolving the origin(s) of drumlins and related megaridges in areas of megascale glacial lineations (MSGL) left by paleo-ice sheets is critical to understanding how ancient ice sheets interacted with their sediment beds. MSGL is now linked with fast-flowing ice streams, but there is a broad range of erosional and depositional models. Further progress relies on constraining fluxes of subglacial sediment at the ice sheet base, which in turn depends on morphological data such as landform shape and elongation and, most importantly, landform volume. Past practice in determining shape has employed a broad range of geomorphological methods, from strictly visualisation techniques to more complex semi-automated and automated drumlin extraction methods. This paper reviews and builds on currently available visualisation, semi-automated and automated extraction methods and presents a new Curvature Based Relief Separation (CBRS) technique for drumlin mapping. This uses curvature analysis to generate a base level from which topography can be normalized and drumlin volume can be derived. This methodology is tested using a high resolution (3 m) LiDAR elevation dataset from the Wadena Drumlin Field, Minnesota, USA, which was constructed by the Wadena Lobe of the Laurentide Ice Sheet ca. 20,000 years ago and which as a whole contains 2000 drumlins across an area of 7500 km². This analysis demonstrates that CBRS provides an objective and robust procedure for automated drumlin extraction. There is strong agreement with manually selected landforms, but the method is also capable of resolving features that were not detectable manually, thereby considerably expanding the known population of streamlined landforms. CBRS provides an effective automatic method for visualisation of large areas of the streamlined beds of former ice sheets and for modelling sediment fluxes below ice sheets.
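A crude sketch of the general curvature-to-base-level idea, not the published CBRS algorithm: a curvature proxy from the smoothed DEM defines an approximate base level, and the positive residual gives the relief from which landform volumes could be summed.

```python
import numpy as np
from scipy import ndimage as ndi

def positive_relief(dem, cell_size=3.0, smooth_sigma=2.0):
    """Separate positive residual relief (e.g. drumlins) from a DEM using the
    Laplacian as a curvature proxy to estimate a base level. Illustrative only."""
    z = ndi.gaussian_filter(dem.astype(float), smooth_sigma)
    curvature = ndi.laplace(z) / (cell_size ** 2)      # second-derivative proxy
    base = np.where(curvature >= 0, z, np.nan)         # concave areas approximate the base
    base = ndi.gaussian_filter(np.nan_to_num(base, nan=np.nanmean(base)),
                               10 * smooth_sigma)      # smooth interpolated base level
    return np.clip(dem - base, 0, None)                # positive relief above the base

# Landform volume per labelled feature: residual.sum() * cell_size**2 over each region.
```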
Degree of anisotropy as an automated indicator of rip channels in high resolution bathymetric models
NASA Astrophysics Data System (ADS)
Trimble, S. M.; Houser, C.; Bishop, M. P.
2017-12-01
A rip current is a concentrated seaward flow of water that forms in the surf zone of a beach as a result of alongshore variations in wave breaking. Rips can carry swimmers swiftly into deep water, and they are responsible for hundreds of fatal drownings and thousands of rescues worldwide each year. These currents form regularly alongside hard structures like piers and jetties, and can also form along sandy coasts when there is a three-dimensional bar morphology. This latter rip type tends to be variable in strength and location, making it arguably the most dangerous to swimmers and the most difficult to identify. These currents form in characteristic rip channels in surf zone bathymetry, in which the primary axis of self-similarity is oriented shore-normal. This paper demonstrates a new method for automating the identification of such rip channels in bathymetric digital surface models (DSMs) using bathymetric data collected by various remote sensing methods. Degree of anisotropy is used to detect rip channels and to distinguish between sandbars, rip channels, and other beach features. This has implications for coastal geomorphology theory and safety practices. As technological advances increase access and accuracy of topobathy mapping methods in the surf zone, frequent nearshore bathymetric DSMs could be more easily captured and processed, then analyzed with this method to provide localized, automated, and frequent detection of rip channels. This could ultimately reduce rip-related fatalities worldwide (i) in present mitigation, by identifying the present location of rip channels, (ii) in forecasting, by tracking the channel's evolution through multiple DSMs, and (iii) in rip education, by improving local lifeguard knowledge of the rip hazard. Although this paper only applies analysis of the degree of anisotropy to the identification of rip channels, this parameter can be applied to multiple facets of barrier island morphological analysis.
NASA Astrophysics Data System (ADS)
Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting
2015-08-01
Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method.
Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting
2015-08-11
Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method.
Rapid, automated mosaicking of the human corneal subbasal nerve plexus.
Vaishnav, Yash J; Rucker, Stuart A; Saharia, Keshav; McNamara, Nancy A
2017-11-27
Corneal confocal microscopy (CCM) is an in vivo technique used to study corneal nerve morphology. The largest proportion of nerves innervating the cornea lie within the subbasal nerve plexus, where their morphology is altered by refractive surgery, diabetes and dry eye. The main limitations to clinical use of CCM as a diagnostic tool are the small field of view of CCM images and the lengthy time needed to quantify nerves in collected images. Here, we present a novel, rapid, fully automated technique to mosaic individual CCM images into wide-field maps of corneal nerves. We implemented an OpenCV image stitcher that accounts for corneal deformation and uses feature detection to stitch CCM images into a montage. The method takes 3-5 min to process and stitch 40-100 frames on an Amazon EC2 Micro instance. The speed, automation and ease of use conferred by this technique is the first step toward point of care evaluation of wide-field subbasal plexus (SBP) maps in a clinical setting.
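A minimal sketch of the mosaicking step using OpenCV's high-level stitcher; the corneal-deformation correction described in the abstract is not reproduced here, and the file paths are placeholders:

```python
import cv2
import glob

# Stitch individual CCM frames into a wide-field montage. The SCANS mode
# (affine model) is a reasonable choice for flat-ish confocal frames.
frames = [cv2.imread(p) for p in sorted(glob.glob("ccm_frames/*.png"))]

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("subbasal_plexus_mosaic.png", mosaic)
else:
    print("Stitching failed with status", status)
```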
Performance of Copan WASP for Routine Urine Microbiology
Quiblier, Chantal; Jetter, Marion; Rominski, Mark; Mouttet, Forouhar; Böttger, Erik C.; Keller, Peter M.
2015-01-01
This study compared a manual workup of urine clinical samples with fully automated WASPLab processing. As a first step, two different inocula (1 and 10 μl) and different streaking patterns were compared using WASP and InoqulA BT instrumentation. Significantly more single colonies were produced with the 10-μl inoculum than with the 1-μl inoculum, and automated streaking yielded significantly more single colonies than manual streaking on whole plates (P < 0.001). In a second step, 379 clinical urine samples were evaluated using WASP and the manual workup. Average numbers of detected morphologies, recovered species, and CFUs per milliliter of all 379 urine samples showed excellent agreement between WASPLab and the manual workup. The percentage of urine samples clinically categorized as positive or negative did not differ between the automated and manual workflow, but within the positive samples, automated processing by WASPLab resulted in the detection of more potential pathogens. In summary, the present study demonstrates that (i) the streaking pattern, i.e., primarily the number of zigzags/length of streaking lines, is critical for optimizing the number of single colonies yielded from primary cultures of urine samples; (ii) automated streaking by the WASP instrument is superior to manual streaking regarding the number of single colonies yielded (for 32.2% of the samples); and (iii) automated streaking leads to higher numbers of detected morphologies (for 47.5% of the samples), species (for 17.4% of the samples), and pathogens (for 3.4% of the samples). The results of this study point to an improved quality of microbiological analyses and laboratory reports when using automated sample processing by WASP and WASPLab. PMID:26677255
Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli
2002-01-01
Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescently labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid (DNA). Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
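A minimal sketch of top-hat-based spot detection on a Z-stack followed by 3D grouping of slice signals; the structuring-element radius and threshold are illustrative, and the noise filters described above are omitted:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import white_tophat, disk

def count_fish_spots(stack, spot_radius=3, rel_threshold=0.3):
    """Count FISH spots in a Z-stack: a white top-hat on each slice suppresses
    the irregular background, slice detections are thresholded, and 3D
    connected components group slice signals into spots."""
    tophat = np.stack([white_tophat(s, disk(spot_radius)) for s in stack])
    binary = tophat > rel_threshold * tophat.max()
    # 3D labelling merges detections of the same spot across neighbouring slices.
    labels, n_spots = ndi.label(binary)
    return n_spots, labels
```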
Holt, Katherine A.; Bebbington, Mark S.
2014-01-01
• Premise of the study: One of the many advantages offered by automated palynology systems is the ability to vastly increase the number of observations made on a particular sample or samples. This is of particular benefit when attempting to fully quantify the degree of variation within or between closely related pollen types. • Methods: An automated palynology system (Classifynder) has been used to further investigate the variation in pollen morphology between two New Zealand species of Myrtaceae (Leptospermum scoparium and Kunzea ericoides) that are of significance in the New Zealand honey industry. Seven geometric features extracted from automatically gathered digital images were used to characterize the range of shape and size of the two taxa, and to examine the extent of previously reported overlap in these variables. • Results: Our results indicate a degree of overlap in all cases. The narrowest overlap was in measurements of maximum Feret diameter (MFD) in grains oriented in polar view. Multivariate statistical analysis using all seven factors provided the most robust discrimination between the two types. • Discussion: Further work is required before this approach could be routinely applied to separating the two pollen types used in this study, most notably the development of comprehensive reference distributions for the types in question. PMID:25202650
Efficient processing of fluorescence images using directional multiscale representations.
Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M
2014-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data.
Efficient processing of fluorescence images using directional multiscale representations
Labate, D.; Laezza, F.; Negi, P.; Ozcan, B.; Papadakis, M.
2017-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data. PMID:28804225
Image edge detection based tool condition monitoring with morphological component analysis.
Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng
2017-07-01
The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored images. Image edge detection has been a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to celebrated algorithms developed in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Automated artery-venous classification of retinal blood vessels based on structural mapping method
NASA Astrophysics Data System (ADS)
Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.
2012-03-01
Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide more precise diagnostic information; e.g., diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. In order to analyze vessel-type-specific morphologic modifications, the classification of a vessel network into arteries and veins is required. We previously described a method for the identification and separation of retinal vessel trees, i.e. structural mapping. Therefore, we propose artery-venous classification based on structural mapping and identification of color properties prominent to the vessel types. The mean and standard deviation of the green channel intensity and of the hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is classified into one of two clusters (artery and vein), obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned a label of artery or vein. The classification results are compared with the manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential for artery-venous classification and the respective morphology analysis.
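A minimal sketch of the clustering step on per-pixel colour features; ordinary k-means is used here as a simpler stand-in for the fuzzy C-means clustering named in the abstract:

```python
import numpy as np
from sklearn.cluster import KMeans

def label_centerline_pixels(features):
    """Cluster per-centerline-pixel colour features (e.g. mean/std of green
    and hue intensities) into two groups. The study uses fuzzy C-means;
    k-means is a simpler stand-in to show the clustering step."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.asarray(features))
    return km.labels_   # 0/1 cluster per pixel; the artery/vein label per vessel
                        # then follows from per-vessel majority voting
```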
Saba, Luca; Jain, Pankaj K; Suri, Harman S; Ikeda, Nobutaka; Araki, Tadashi; Singh, Bikesh K; Nicolaides, Andrew; Shafique, Shoaib; Gupta, Ajay; Laird, John R; Suri, Jasjit S
2017-06-01
Severe atherosclerosis disease in carotid arteries causes stenosis which in turn leads to stroke. Machine learning systems have been previously developed for plaque wall risk assessment using morphology-based characterization. The fundamental assumption in such systems is the extraction of the grayscale features of the plaque region. Even though these systems have the ability to perform risk stratification, they lack the ability to achieve higher performance due to their inability to select and retain dominant features. This paper introduces a polling-based principal component analysis (PCA) strategy embedded in the machine learning framework to select and retain dominant features, resulting in superior performance. This leads to more stability and reliability. The automated system uses offline image data along with the ground truth labels to generate the parameters, which are then used to transform the online grayscale features to predict the risk of stroke. A set of sixteen grayscale plaque features is computed. Utilizing the cross-validation protocol (K = 10) and the PCA cutoff of 0.995, the machine learning system is able to achieve an accuracy of 98.55% and 98.83% corresponding to the carotid far wall and near wall plaques, respectively. The corresponding reliability of the system was 94.56% and 95.63%, respectively. The automated system was validated against the manual risk assessment system, and the precision of merit for the same cross-validation settings and PCA cutoffs is 98.28% and 93.92% for the far and the near wall, respectively. PCA-embedded morphology-based plaque characterization shows a powerful strategy for risk assessment and can be adapted in clinical settings.
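A minimal sketch of a PCA step with a cumulative explained-variance cutoff of 0.995, as quoted above; the scaler and SVC classifier are illustrative choices, not necessarily the paper's pipeline:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Retain principal components up to a cumulative explained-variance cutoff of
# 0.995, then classify the transformed grayscale features.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.995, svd_solver='full'),
                      SVC(kernel='rbf'))
# model.fit(X_train, y_train); model.score(X_test, y_test)
```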
NASA Astrophysics Data System (ADS)
Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.
2011-03-01
Structural analysis of the retinal vessel network has so far served in the diagnosis of retinopathies and systemic diseases. Retinopathies are known to affect the morphologic properties of retinal vessels such as course, shape, caliber, and tortuosity. Whether the arteries and the veins respond to these changes together or in tandem has always been a topic of discussion. However, diseases such as diabetic retinopathy and retinopathy of prematurity have been diagnosed with morphologic changes specific either to arteries or to veins. Thus a method describing the separation of retinal vessel trees imaged in a two-dimensional color fundus image may assist in artery-vein classification and quantitative assessment of morphologic changes particular to arteries or veins. We propose a method based on mathematical morphology and graph search to identify and label the retinal vessel trees, which provides a structural mapping of the vessel network in terms of each individual primary vessel, its branches, and the spatial positions of branching and cross-over points. The method was evaluated on a dataset of 15 fundus images, resulting in an accuracy of 92.87% correctly assigned vessel pixels when compared with manual labeling of the separated vessel trees. Accordingly, the structural mapping method performs well, and we are currently investigating its potential for evaluating the characteristic properties specific to arteries or veins.
Dias, Roberto A; Gonçalves, Bruno P; da Rocha, Joana F; da Cruz E Silva, Odete A B; da Silva, Augusto M F; Vieira, Sandra I
2017-12-01
Neurons are specialized cells of the Central Nervous System whose function is intricately related to the neuritic network they develop to transmit information. Morphological evaluation of this network and other neuronal structures is required to establish relationships between neuronal morphology and function, and may allow monitoring physiological and pathophysiologic alterations. Fluorescence-based microphotographs are the most widely used in cellular bioimaging, but phase contrast (PhC) microphotographs are easier to obtain, more affordable, and do not require invasive, complicated and disruptive techniques. Despite the various freeware tools available for fluorescence-based images analysis, few exist that can tackle the more elusive and harder-to-analyze PhC images. To surpass this, an interactive semi-automated image processing workflow was developed to easily extract relevant information (e.g. total neuritic length, average cell body area) from both PhC and fluorescence neuronal images. This workflow, named 'NeuronRead', was developed in the form of an ImageJ macro. Its robustness and adaptability were tested and validated on rat cortical primary neurons under control and differentiation inhibitory conditions. Validation included a comparison to manual determinations and to a golden standard freeware tool for fluorescence image analysis. NeuronRead was subsequently applied to PhC images of neurons at distinct differentiation days and exposed or not to DAPT, a pharmacological inhibitor of the γ-secretase enzyme, which cleaves the well-known Alzheimer's amyloid precursor protein (APP) and the Notch receptor. Data obtained confirms a neuritogenic regulatory role for γ-secretase products and validates NeuronRead as a time- and cost-effective useful monitoring tool. Copyright © 2017. Published by Elsevier Inc.
Zahedi, Atena; On, Vincent; Lin, Sabrina C; Bays, Brett C; Omaiye, Esther; Bhanu, Bir; Talbot, Prue
2016-01-01
There is a foundational need for quality control tools in stem cell laboratories engaged in basic research, regenerative therapies, and toxicological studies. These tools require automated methods for evaluating cell processes and quality during in vitro passaging, expansion, maintenance, and differentiation. In this paper, an unbiased, automated high-content profiling toolkit, StemCellQC, is presented that non-invasively extracts information on cell quality and cellular processes from time-lapse phase-contrast videos. Twenty four (24) morphological and dynamic features were analyzed in healthy, unhealthy, and dying human embryonic stem cell (hESC) colonies to identify those features that were affected in each group. Multiple features differed in the healthy versus unhealthy/dying groups, and these features were linked to growth, motility, and death. Biomarkers were discovered that predicted cell processes before they were detectable by manual observation. StemCellQC distinguished healthy and unhealthy/dying hESC colonies with 96% accuracy by non-invasively measuring and tracking dynamic and morphological features over 48 hours. Changes in cellular processes can be monitored by StemCellQC and predictions can be made about the quality of pluripotent stem cell colonies. This toolkit reduced the time and resources required to track multiple pluripotent stem cell colonies and eliminated handling errors and false classifications due to human bias. StemCellQC provided both user-specified and classifier-determined analysis in cases where the affected features are not intuitive or anticipated. Video analysis algorithms allowed assessment of biological phenomena using automatic detection analysis, which can aid facilities where maintaining stem cell quality and/or monitoring changes in cellular processes are essential. In the future StemCellQC can be expanded to include other features, cell types, treatments, and differentiating cells.
Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays
Galati, Domenico F.; Abuin, David S.; Tauber, Gabriel A.; Pham, Andrew T.; Pearson, Chad G.
2016-01-01
Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional (3D) geometry. Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs. PMID:26700722
Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin
2016-08-26
Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with a high incidence in China and south-east Asia. Ki-67 protein is strictly associated with cell proliferation and the degree of malignancy. Cells with higher Ki-67 expression are more sensitive to chemotherapy and radiotherapy, so its assessment is beneficial to NPC treatment. It is still challenging to automatically analyze immunohistochemical Ki-67-stained nasopharyngeal carcinoma images due to the uneven color distributions across different cell types. To solve this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations from particularly selected color spaces, and then characterizes cells with a set of grading criteria for reference in pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite image data variance. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which should be helpful in related histopathological research.
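A minimal stand-in for the pixel-clustering idea described above, assuming a Lab colour representation and k-means rather than the paper's local-correlation features; the demo image and cluster count are illustrative only.

```python
# Sketch: pixel-level clustering of an IHC image in an alternative colour
# space, as one simple stand-in for the local-correlation clustering the
# paper describes. Colour space (Lab) and cluster count are assumptions.
import numpy as np
from skimage import data, color
from sklearn.cluster import KMeans

rgb = data.immunohistochemistry()            # bundled IHC demo image
lab = color.rgb2lab(rgb)                     # each pixel described in Lab space
features = lab.reshape(-1, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_.reshape(rgb.shape[:2])

# Report cluster sizes; in practice clusters would be mapped to
# DAB-positive nuclei, haematoxylin-stained nuclei and background.
for k in range(3):
    print(f"cluster {k}: {np.mean(labels == k):.1%} of pixels")
```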
Very Deep Convolutional Neural Networks for Morphologic Classification of Erythrocytes.
Durant, Thomas J S; Olson, Eben M; Schulz, Wade L; Torres, Richard
2017-12-01
Morphologic profiling of the erythrocyte population is a widely used and clinically valuable diagnostic modality, but one that relies on a slow manual process associated with significant labor cost and limited reproducibility. Automated profiling of erythrocytes from digital images by capable machine learning approaches would augment the throughput and value of morphologic analysis. To this end, we sought to evaluate the performance of leading implementation strategies for convolutional neural networks (CNNs) when applied to classification of erythrocytes based on morphology. Erythrocytes were manually classified into 1 of 10 classes using a custom-developed Web application. Using recent literature to guide architectural considerations for neural network design, we implemented a "very deep" CNN, consisting of >150 layers, with dense shortcut connections. The final database comprised 3737 labeled cells. Ensemble model predictions on unseen data demonstrated a harmonic mean of recall and precision metrics of 92.70% and 89.39%, respectively. Of the 748 cells in the test set, 23 misclassification errors were made, with a correct classification frequency of 90.60%, represented as a harmonic mean across the 10 morphologic classes. These findings indicate that erythrocyte morphology profiles could be measured with a high degree of accuracy with "very deep" CNNs. Further, these data support future efforts to expand classes and optimize practical performance in a clinical environment as a prelude to full implementation as a clinical tool. © 2017 American Association for Clinical Chemistry.
Milferstedt, Kim; Santa-Catalina, Gaëlle; Godon, Jean-Jacques; Escudié, Renaud; Bernet, Nicolas
2013-01-01
Many natural and engineered biofilm systems periodically face disturbances. Here we present how the recovery time of a biofilm between disturbances (expressed as disturbance frequency) shapes the development of morphology and community structure in a multi-species biofilm at the landscape scale. It was hypothesized that a high disturbance frequency favors the development of a stable adapted biofilm system while a low disturbance frequency promotes a dynamic biofilm response. Biofilms were grown in laboratory-scale reactors over a period of 55-70 days and exposed to the biocide monochloramine at two frequencies: daily or weekly pulse injections. One untreated reactor served as a control. Biofilm morphology and community structure were followed on comparably large biofilm areas at the landscape scale using automated image analysis (spatial gray level dependence matrices) and community fingerprinting (single-strand conformation polymorphisms). We demonstrated that a weekly disturbed biofilm developed a resilient morphology and community structure. Immediately after the disturbance, the biofilm simplified but recovered its initial complex morphology and community structure between two biocide pulses. In the daily treated reactor, one organism largely dominated a morphologically simple and stable biofilm. Disturbances primarily affected the abundance distribution of already present bacterial taxa but did not promote growth of previously undetected organisms. Our work indicates that disturbances can be used as a lever to engineer biofilms by maintaining a biofilm between two developmental states. PMID:24303024
Automated noninvasive classification of renal cancer on multiphase CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linguraru, Marius George; Wang, Shijun; Shah, Furhawn
2011-10-15
Purpose: To explore the added value of the shape of renal lesions for classifying renal neoplasms. To investigate the potential of computer-aided analysis of contrast-enhanced computed-tomography (CT) to quantify and classify renal lesions. Methods: A computer-aided clinical tool based on adaptive level sets was employed to analyze 125 renal lesions from contrast-enhanced abdominal CT studies of 43 patients. There were 47 cysts and 78 neoplasms: 22 Von Hippel-Lindau (VHL), 16 Birt-Hogg-Dube (BHD), 19 hereditary papillary renal carcinomas (HPRC), and 21 hereditary leiomyomatosis and renal cell cancers (HLRCC). The technique quantified the three-dimensional size and enhancement of lesions. Intrapatient and interphase registration facilitated the study of lesion serial enhancement. The histograms of curvature-related features were used to classify the lesion types. The areas under the curve (AUC) were calculated for receiver operating characteristic curves. Results: Tumors were robustly segmented with 0.80 overlap (0.98 correlation) between manual and semi-automated quantifications. The method further identified morphological discrepancies between the types of lesions. The classification based on lesion appearance, enhancement and morphology between cysts and cancers showed AUC = 0.98; for BHD + VHL (solid cancers) vs. HPRC + HLRCC AUC = 0.99; for VHL vs. BHD AUC = 0.82; and for HPRC vs. HLRCC AUC = 0.84. All semi-automated classifications were statistically significant (p < 0.05) and superior to the analyses based solely on serial enhancement. Conclusions: The computer-aided clinical tool allowed the accurate quantification of cystic, solid, and mixed renal tumors. Cancer types were classified into four categories using their shape and enhancement. Comprehensive imaging biomarkers of renal neoplasms on abdominal CT may facilitate their noninvasive classification, guide clinical management, and monitor responses to drugs or interventions.
NASA Astrophysics Data System (ADS)
Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.
2015-12-01
Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is applied to the image based on gradient orientations which agree with the dominant orientation. The contours of the binary image can then be used to determine the dune crest lines, based on pixel intensity values. Once the crest lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared dune detection algorithms with manually digitized dune crest lines, achieving true positive values of 0.57-0.99 and false positive values of 0.30-0.67, indicating that our approach is generally robust.
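The gradient-orientation step described above can be sketched as follows; the synthetic ripple image, the Sobel operator, and the ±20° tolerance are assumptions for illustration, not the project's actual processing chain.

```python
# Sketch: dominant gradient orientation and an orientation-based mask,
# in the spirit of the described crest-line preprocessing. The synthetic
# "dune" image and the +/- 20 degree tolerance are assumptions.
import numpy as np
from scipy import ndimage

# Synthetic ripple pattern standing in for a satellite image of dunes.
yy, xx = np.mgrid[0:256, 0:256]
img = np.sin(0.15 * (xx + 0.5 * yy)).astype(float)

gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
magnitude = np.hypot(gx, gy)
orientation = np.degrees(np.arctan2(gy, gx))     # -180..180 degrees

# Histogram of orientations, weighted by gradient magnitude.
hist, edges = np.histogram(orientation, bins=36, range=(-180, 180),
                           weights=magnitude)
dominant = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

# Keep pixels whose gradient agrees with the dominant orientation.
tolerance = 20.0
mask = (np.abs(((orientation - dominant + 180) % 360) - 180) < tolerance) \
       & (magnitude > np.percentile(magnitude, 75))
print(f"dominant orientation: {dominant:.1f} deg, "
      f"{mask.mean():.1%} of pixels retained")
```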
Development of an Automated Imaging Pipeline for the Analysis of the Zebrafish Larval Kidney
Westhoff, Jens H.; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L.; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen
2013-01-01
The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems. PMID:24324758
NASA Astrophysics Data System (ADS)
Hong, Hyundae; Benac, Jasenka; Riggsbee, Daniel; Koutsky, Keith
2014-03-01
High throughput (HT) phenotyping of crops is essential to increase yield in environments deteriorated by climate change. The controlled environment of a greenhouse offers an ideal platform to study the genotype-to-phenotype linkages for crop screening. Advanced imaging technologies are used to study plants' responses to resource limitations such as water and nutrient deficiency. Advanced imaging technologies coupled with automation make HT phenotyping in the greenhouse not only feasible, but practical. Monsanto has a state-of-the-art automated greenhouse (AGH) facility. Handling of the soil, pots, water, and nutrients is completely automated. Images of the plants are acquired by multiple hyperspectral and broadband cameras. The hyperspectral cameras cover wavelengths from visible light through short wave infra-red (SWIR). In-house developed software analyzes the images to measure plant morphological and biochemical properties. We measure phenotypic metrics like plant area, height, and width as well as biomass. Hyperspectral imaging allows us to measure biochemical metrics such as chlorophyll, anthocyanin, and foliar water content. The last 4 years of AGH operations on crops like corn, soybean, and cotton have demonstrated successful application of imaging and analysis technologies for high throughput plant phenotyping. Using HT phenotyping, scientists have shown strong correlations with environmental conditions, such as water and nutrient deficits, as well as the ability to tease apart distinct differences in the genetic backgrounds of crops.
Automated detection of extended sources in radio maps: progress from the SCORPIO survey
NASA Astrophysics Data System (ADS)
Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.
2016-08-01
Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
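The superpixel clustering stage can be illustrated with a generic SLIC segmentation (scikit-image ≥ 0.19 assumed for the channel_axis argument); the synthetic "radio map", parameters, and brightness criterion below are placeholders rather than CAESAR's implementation.

```python
# Sketch: superpixel clustering as a segmentation stage, illustrating the
# kind of step CAESAR's pipeline describes. SLIC and its parameters are
# illustrative choices, not necessarily the algorithm used in CAESAR.
import numpy as np
from skimage.segmentation import slic
from skimage.filters import gaussian

rng = np.random.default_rng(1)
# Synthetic "radio map": smooth diffuse background plus two compact sources.
img = gaussian(rng.normal(0, 1, (300, 300)), sigma=8)
img[100:110, 100:110] += 5.0
img[200:208, 240:248] += 6.0

segments = slic(img, n_segments=200, compactness=0.1,
                channel_axis=None, start_label=1)

# Flag superpixels whose mean brightness is well above the background.
means = np.array([img[segments == s].mean()
                  for s in range(1, segments.max() + 1)])
bright = np.where(means > img.mean() + 3 * img.std())[0] + 1
print(f"{segments.max()} superpixels, {bright.size} candidate source regions")
```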
AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source
NASA Astrophysics Data System (ADS)
Nightingale, J. W.; Dye, S.; Massey, Richard J.
2018-05-01
This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
Soares, Filipa A.C.; Chandra, Amit; Thomas, Robert J.; Pedersen, Roger A.; Vallier, Ludovic; Williams, David J.
2014-01-01
The transfer of a laboratory process into a manufacturing facility is one of the most critical steps required for the large-scale production of cell-based therapy products. This study describes the first published protocol for scalable automated expansion of human induced pluripotent stem cell lines growing in aggregates in feeder-free and chemically defined medium. Cells were successfully transferred between different sites representative of research and manufacturing settings, and passaged manually and using the CompacT SelecT automation platform. Modified protocols were developed for the automated system, and the management of cell aggregates (clumps) was identified as the critical step. Cellular morphology, pluripotency gene expression and differentiation into the three germ layers have been used to compare the outcomes of manual and automated processes. PMID:24440272
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and plant organ tracking over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusions By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
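For readers who want the validation arithmetic spelled out, the snippet below computes the two reported comparison metrics (mean absolute percentage error and Pearson correlation) on synthetic placeholder measurements.

```python
# Sketch: the validation arithmetic used to compare automated estimates
# with manual measurements (mean absolute percentage error and Pearson r).
# The measurement values below are synthetic placeholders.
import numpy as np

manual = np.array([152.0, 160.5, 148.2, 171.3, 158.8, 165.0])     # e.g. stem height, mm
automated = np.array([149.1, 166.0, 150.3, 168.0, 162.5, 160.2])  # mesh-based estimates

mape = np.mean(np.abs(automated - manual) / manual) * 100.0
r = np.corrcoef(manual, automated)[0, 1]
print(f"mean absolute error: {mape:.2f}%  correlation: {r:.2f}")
```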
Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting
2015-01-01
Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method. PMID:26260921
Haug, M; Reischl, B; Prölß, G; Pollmann, C; Buckert, T; Keidel, C; Schürmann, S; Hock, M; Rupitsch, S; Heckel, M; Pöschel, T; Scheibel, T; Haynl, C; Kiriaev, L; Head, S I; Friedrich, O
2018-04-15
We engineered an automated biomechatronics system, MyoRobot, for robust, objective and versatile assessment of muscle or polymer material (bio-)mechanics. It covers multiple levels of muscle biosensor assessment, e.g. membrane voltage or contractile apparatus Ca2+ ion responses (force resolution 1 µN, 0-10 mN for the given sensor; [Ca2+] range ~100 nM-25 µM). It replaces previously tedious manual protocols to obtain exhaustive information on active/passive biomechanical properties across various morphological tissue levels. Deciphering mechanisms of muscle weakness requires sophisticated force protocols, dissecting contributions from altered Ca2+ homeostasis, electro-chemical, chemico-mechanical biosensors or visco-elastic components. From whole organ to single fibre levels, experimental demands and hardware requirements increase, limiting biomechanics research potential, as reflected by the few commercial biomechatronics systems that can address resolution, experimental versatility and, most of all, automation of force recordings. Our MyoRobot combines optical force transducer technology with high-precision 3D actuation (e.g. voice coil, 1 µm encoder resolution; stepper motors, 4 µm feed motion) and customized control software, enabling modular experimentation packages and automated data pre-analysis. In small bundles and single muscle fibres, we demonstrate automated recordings of (i) caffeine-induced force, (ii) electrical field stimulation (EFS)-induced force, (iii) pCa-force curves, (iv) slack tests and (v) passive length-tension curves. The system readily reproduces results from manual systems (twofold larger stiffness in slow over fast muscle) and provides novel insights into unloaded shortening velocities (declining with increasing slack lengths). The MyoRobot enables automated, complex biomechanics assessment in muscle research. Applications also extend to materials science, as exemplified here for spider silk and collagen biopolymers. Copyright © 2017 Elsevier B.V. All rights reserved.
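pCa-force relationships such as those recorded in (iii) are conventionally summarized by fitting a Hill curve to obtain pCa50 and the Hill coefficient; the sketch below shows that generic analysis on synthetic data and is not part of the MyoRobot software itself.

```python
# Sketch: fitting a Hill curve to a pCa-force relationship, the standard
# way such automated recordings are usually summarized (pCa50 and Hill
# coefficient nH). This is generic analysis, not MyoRobot's own code.
import numpy as np
from scipy.optimize import curve_fit

def hill(pca, pca50, n_h):
    """Relative force as a function of pCa for a Hill-type activation curve."""
    return 1.0 / (1.0 + 10.0 ** (n_h * (pca - pca50)))

# Synthetic force-pCa data (relative force between 0 and 1).
pca = np.array([7.0, 6.6, 6.2, 6.0, 5.8, 5.6, 5.4, 5.0, 4.5])
force = np.array([0.02, 0.08, 0.25, 0.45, 0.65, 0.80, 0.90, 0.97, 1.00])

(pca50, n_h), _ = curve_fit(hill, pca, force, p0=(5.8, 2.0))
print(f"pCa50 = {pca50:.2f}, Hill coefficient = {n_h:.2f}")
```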
Tomlinson, Mathew J; Naeem, Asad
2018-03-21
CASA has been used in reproductive medicine and pathology laboratories for over 25 years, yet the 'fertility industry' generally remains sceptical and has avoided automation, despite clear weaknesses in manual semen analysis. Early implementers had difficulty in validating CASA-Mot instruments against recommended manual methods (haemocytometer) due to the interference of seminal debris and non-sperm cells, which also affects the accuracy of grading motility. Both the inability to provide accurate sperm counts and a lack of consensus as to the value of sperm kinematic parameters appear to have continued to have a negative effect on CASA-Mot's reputation. One positive interpretation from earlier work is that at least one or more measures of sperm velocity adds clinical value to the semen analysis, and these are clearly more objective than any manual motility analysis. Moreover, recent CASA-Mot systems offer simple solutions to earlier problems in eliminating artefacts and have been successfully validated for sperm concentration; as a result, they should be viewed with more confidence in relation to motility grading. Sperm morphology and DNA testing both require an evidence-based consensus and a well-validated (reliable, reproducible) assay to be developed before automation of either can be of real clinical benefit.
CEST Analysis: Automated Change Detection from Very-High-Resolution Remote Sensing Images
NASA Astrophysics Data System (ADS)
Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.
2012-08-01
Fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellites and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye), new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms has been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential for being a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment) with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan). CEST results are compared with a number of standard algorithms for automated change detection such as image differencing, image ratioing, principal component analysis, the delta cue technique and post-classification change detection. The new combined method shows superior results, averaging between 15% and 45% improvement in accuracy.
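Two of the named ingredients, frequency-domain band-pass filtering and the Haralick "energy" texture measure, can be sketched as follows (scikit-image ≥ 0.19 function names assumed); the band limits, quantization, and random test image are illustrative, not CEST's actual parameters.

```python
# Sketch: a frequency-domain band-pass filter and the Haralick 'energy'
# texture measure from a grey-level co-occurrence matrix. Band limits and
# the test image are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (256, 256)).astype(float)

# Band-pass filter in the Fourier domain.
f = np.fft.fftshift(np.fft.fft2(img))
yy, xx = np.indices(img.shape)
r = np.hypot(yy - img.shape[0] / 2, xx - img.shape[1] / 2)
band = (r > 10) & (r < 60)                      # keep mid frequencies only
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f * band)))

# Haralick 'energy' from a grey-level co-occurrence matrix.
quantized = filtered - filtered.min()
quantized = (quantized / quantized.max() * 63).astype(np.uint8)
glcm = graycomatrix(quantized, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
print("energy:", graycoprops(glcm, "energy")[0, 0])
```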
Oscillometric Blood Pressure Estimation: Past, Present, and Future.
Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail
2015-01-01
The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.
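The conventional maximum-amplitude approach mentioned above can be sketched on a synthetic oscillometric envelope; the characteristic ratios used here (0.55 systolic, 0.85 diastolic) are typical textbook values and differ between devices.

```python
# Sketch: the conventional maximum-amplitude algorithm on a synthetic
# oscillometric envelope. The characteristic ratios (0.55 systolic,
# 0.85 diastolic) are typical textbook values and vary between devices.
import numpy as np

# Synthetic deflation: cuff pressure falls from 180 to 40 mmHg while the
# oscillation envelope rises and falls around the mean arterial pressure.
cuff = np.linspace(180, 40, 500)
true_map = 95.0
envelope = np.exp(-((cuff - true_map) / 25.0) ** 2)   # bell-shaped amplitudes

i_max = np.argmax(envelope)
map_est = cuff[i_max]                                 # MAP at maximum amplitude

def crossing(pressures, amps, ratio):
    """Cuff pressure where the envelope is closest to ratio * max amplitude."""
    target = ratio * amps.max()
    return pressures[np.argmin(np.abs(amps - target))]

sbp = crossing(cuff[:i_max], envelope[:i_max], 0.55)  # higher-pressure side
dbp = crossing(cuff[i_max:], envelope[i_max:], 0.85)  # lower-pressure side
print(f"MAP ~ {map_est:.0f}  SBP ~ {sbp:.0f}  DBP ~ {dbp:.0f} mmHg")
```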
Automated body weight prediction of dairy cows using 3-dimensional vision.
Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S
2018-05-01
The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
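A minimal sketch of the modelling step, fitting a multiple linear regression for body weight with leave-one-out cross-validation and reporting RMSE and MAPE; the synthetic data and coefficients are placeholders, not the study's 30-cow dataset.

```python
# Sketch: multiple linear regression for body weight with leave-one-out
# cross-validation, reporting RMSE and MAPE as in the study. The data
# below are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
n = 30
hip_width = rng.normal(0.55, 0.03, n)            # metres
days_in_milk = rng.integers(5, 300, n).astype(float)
parity = rng.integers(1, 5, n).astype(float)
weight = (900 * hip_width + 0.05 * days_in_milk + 15 * parity
          + rng.normal(0, 25, n) + 60)           # kg, synthetic relationship

X = np.column_stack([hip_width, days_in_milk, parity])
pred = cross_val_predict(LinearRegression(), X, weight, cv=LeaveOneOut())

rmse = np.sqrt(np.mean((pred - weight) ** 2))
mape = np.mean(np.abs(pred - weight) / weight) * 100
print(f"RMSE = {rmse:.1f} kg, MAPE = {mape:.1f}%")
```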
Automated scoring of regional lung perfusion in children from contrast enhanced 3D MRI
NASA Astrophysics Data System (ADS)
Heimann, Tobias; Eichinger, Monika; Bauman, Grzegorz; Bischoff, Arved; Puderbach, Michael; Meinzer, Hans-Peter
2012-03-01
MRI perfusion images give information about regional lung function and can be used to detect pulmonary pathologies in children with cystic fibrosis (CF). However, manual assessment of the percentage of pathologic tissue in defined lung subvolumes shows large inter- and intra-observer variation, making it difficult to determine disease progression consistently. We present an automated method to calculate a regional score for this purpose. First, the lungs are located based on thresholding and morphological operations. Second, statistical shape models of children's left and right lungs are initialized at the determined locations and used to precisely segment morphological images. Segmentation results are transferred to perfusion maps and employed as masks to calculate perfusion statistics. An automated threshold to determine pathologic tissue is calculated and used to derive accurate regional scores. We evaluated the method on 10 MRI images and achieved an average surface distance of less than 1.5 mm compared to manual reference segmentations. Pathologic tissue was detected correctly in 9 cases. The approach seems suitable for detecting early signs of CF and monitoring response to therapy.
Guo, Ling; Wang, Zhen; Anderson, Courtney M; Doolittle, Emerald; Kernag, Siobhan; Cotta, Claudiu V; Ondrejka, Sarah L; Ma, Xiao-Jun; Cook, James R
2018-03-01
The assessment of B-cell clonality is a critical component of the evaluation of suspected lymphoproliferative disorders, but analysis from formalin-fixed, paraffin-embedded tissues can be challenging if fresh tissue is not available for flow cytometry. Immunohistochemical and conventional bright field in situ hybridization stains for kappa and lambda are effective for evaluation of plasma cells but are often insufficiently sensitive to detect the much lower abundance of light chains present in B-cells. We describe an ultrasensitive RNA in situ hybridization assay that has been adapted for use on an automated immunohistochemistry platform and compare results with flow cytometry in 203 consecutive tissues and 104 consecutive bone marrows. Overall, in 203 tissue biopsies, RNA in situ hybridization identified light chain-restricted B-cells in 85 (42%) vs 58 (29%) by flow cytometry. Within 83 B-cell non-Hodgkin lymphomas, RNA in situ hybridization identified restricted B-cells in 74 (89%) vs 56 (67%) by flow cytometry. B-cell clonality could be evaluated in only 23/104 (22%) bone marrow cases owing to poor RNA preservation, but evaluable cases showed 91% concordance with flow cytometry. RNA in situ hybridization allowed for recognition of biclonal/composite lymphomas not identified by flow cytometry and highlighted unexpected findings, such as coexpression of kappa and lambda RNA in 2 cases and the presence of lambda light chain RNA in a T lymphoblastic lymphoma. Automated RNA in situ hybridization showed excellent interobserver reproducibility for manual evaluation (average K=0.92), and an automated image analysis system showed high concordance (97%) with manual evaluation. Automated RNA in situ hybridization staining, which can be adopted on commonly utilized immunohistochemistry instruments, allows for the interpretation of clonality in the context of the morphological features in formalin-fixed, paraffin-embedded tissues with a clinical sensitivity similar or superior to flow cytometry.
Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P
2016-11-29
Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from the physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used to develop an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed, and a majority voting method was used for conclusive predictions with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24), followed by mid-left ventricular segmental strain (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with the early-to-late diastolic transmitral velocity ratio (p < 0.01), average early diastolic tissue velocity (e') (p < 0.01), and strain (p = 0.04). Because ATH were younger, an adjusted analysis was undertaken in younger HCM patients and compared with ATH with left ventricular wall thickness >13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity, relative to the early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggest that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling. This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with limited experience. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
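The majority-voting ensemble over the three named algorithm families can be sketched with scikit-learn as follows; the synthetic features and hyperparameters are illustrative assumptions rather than the authors' trained system (only the class sizes, 77 ATH and 62 HCM, are taken from the abstract).

```python
# Sketch: a majority-voting ensemble over the three algorithm families
# named above (SVM, random forest, neural network), with K-fold
# cross-validation. Features and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(4)
X = rng.normal(size=(139, 20))                    # speckle-tracking features (synthetic)
y = np.r_[np.zeros(77), np.ones(62)].astype(int)  # 0 = ATH, 1 = HCM

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=2000, random_state=0))),
    ],
    voting="hard",                                # majority vote
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(ensemble, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```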
NASA Astrophysics Data System (ADS)
Glazoff, Michael V.; Hiromoto, Robert; Tokuhiro, Akira
2014-08-01
In the post-Fukushima world, the stability of materials under extreme conditions is an important issue for the safety of nuclear reactors. Among the methods currently explored to improve zircaloys' thermal stability under off-normal conditions, a protective coat of SiC filaments is being considered because silicon carbide is well known for its remarkable chemical inertness at high temperatures. A typical SiC fiber contains ∼50,000 individual filaments of 5-10 μm in diameter. In this paper, an effort was made to develop and apply mathematical morphology to the process of automatic defect identification in Zircaloy-4 rods braided with a protective layer of silicon carbide filament. However, issues of braiding quality have to be addressed to ensure its full protective potential. We present original mathematical morphology algorithms that solve this quality assurance problem successfully. In the nuclear industry, such algorithms are used for the first time and could easily be generalized to automated continuous monitoring for defect identification in the future.
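One standard mathematical-morphology recipe for this kind of defect search is a top-hat transform against a structuring element larger than the expected defect; the sketch below applies it to a synthetic braided texture and is not the authors' specific algorithm.

```python
# Sketch: a white top-hat transform, one standard mathematical-morphology
# recipe for highlighting small bright anomalies against a regular braided
# texture. Structuring-element size and the synthetic image are assumptions.
import numpy as np
from scipy import ndimage

# Synthetic braided-texture image with one small bright defect.
yy, xx = np.mgrid[0:200, 0:200]
braid = 0.5 + 0.4 * np.sin(0.4 * xx) * np.sin(0.4 * yy)
braid[90:94, 120:124] += 0.8                     # the "defect"

opened = ndimage.grey_opening(braid, size=(9, 9))
tophat = braid - opened                          # white top-hat
defect_mask = tophat > 0.5
labels, n = ndimage.label(defect_mask)
print(f"defect candidates found: {n}")
```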
Pastel, R; Struthers, A
2001-05-20
Morphology-dependent resonances (MDRs) are used to measure accurately the evaporation rates of laser-trapped 1- to 2-μm droplets of ethylene glycol. Droplets containing 3 × 10⁻⁵ M Rhodamine-590 laser dye are optically trapped in a 20-μm hollow fiber by two counterpropagating 150-mW, 800-nm laser beams. A weaker 532-nm laser excites the dye, and fluorescence emission is observed near 560 nm as the droplet evaporates. A complete series of first-order TE and TM MDRs dominates the fluorescent output. MDR mode identification sizes the droplets and provides accurate evaporation rates. We verify the automated MDR mode identification by counting fringes in a videotape of the experiment. The longitudinal spring constant of the trap, measured by analysis of the videotaped motion of droplets perturbed from the trap center, provides independent verification of the laser's intensity within the trap.
Xing, Fuyong; Yang, Lin
2016-01-01
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in recent literature. Among the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role to describe the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.
Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images
NASA Technical Reports Server (NTRS)
Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.
2009-01-01
VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.
VESGEN Software for Mapping and Quantification of Vascular Regulators
NASA Technical Reports Server (NTRS)
Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.
2012-01-01
VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies the effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses the image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with those of control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, networks, or tree-network composites, which determines the general collection of algorithms, intermediate images, and output images and measurements that will be produced.
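The skeleton and distance-map concepts mentioned above can be illustrated on a synthetic binary vessel, with the local diameter taken as twice the distance from a skeleton pixel to the background; this is a generic sketch, not VESGEN's ImageJ implementation.

```python
# Sketch: the skeleton + distance-map idea described above, on a synthetic
# binary "vessel". Local diameter is approximated as twice the distance from
# a skeleton pixel to the background; a generic illustration only.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

# Synthetic binary image: one horizontal vessel 11 pixels wide.
vessel = np.zeros((100, 200), dtype=bool)
vessel[45:56, 20:180] = True

skeleton = skeletonize(vessel)
dist = distance_transform_edt(vessel)            # distance to background

length_px = skeleton.sum()                       # crude length estimate (pixels)
diameters = 2.0 * dist[skeleton]                 # local diameter along the skeleton
print(f"skeleton length ~ {length_px} px, "
      f"mean diameter ~ {diameters.mean():.1f} px")
```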
Keenan, S J; Diamond, J; McCluggage, W G; Bharucha, H; Thompson, D; Bartels, P H; Hamilton, P W
2000-11-01
The histological grading of cervical intraepithelial neoplasia (CIN) remains subjective, resulting in inter- and intra-observer variation and poor reproducibility in the grading of cervical lesions. This study has attempted to develop an objective grading system using automated machine vision. The architectural features of cervical squamous epithelium are quantitatively analysed using a combination of computerized digital image processing and Delaunay triangulation analysis; 230 images digitally captured from cases previously classified by a gynaecological pathologist included normal cervical squamous epithelium (n=30), koilocytosis (n=46), CIN 1 (n=52), CIN 2 (n=56), and CIN 3 (n=46). Intra- and inter-observer variation had kappa values of 0.502 and 0.415, respectively. A machine vision system was developed in KS400 macro programming language to segment and mark the centres of all nuclei within the epithelium. By object-oriented analysis of image components, the positional information of nuclei was used to construct a Delaunay triangulation mesh. Each mesh was analysed to compute triangle dimensions including the mean triangle area, the mean triangle edge length, and the number of triangles per unit area, giving an individual quantitative profile of measurements for each case. Discriminant analysis of the geometric data revealed the significant discriminatory variables from which a classification score was derived. The scoring system distinguished between normal and CIN 3 in 98.7% of cases and between koilocytosis and CIN 1 in 76.5% of cases, but only 62.3% of the CIN cases were classified into the correct group, with the CIN 2 group showing the highest rate of misclassification. Graphical plots of triangulation data demonstrated the continuum of morphological change from normal squamous epithelium to the highest grade of CIN, with overlapping of the groups originally defined by the pathologists. This study shows that automated location of nuclei in cervical biopsies using computerized image analysis is possible. Analysis of positional information enables quantitative evaluation of architectural features in CIN using Delaunay triangulation meshes, which is effective in the objective classification of CIN. This demonstrates the future potential of automated machine vision systems in diagnostic histopathology. Copyright 2000 John Wiley & Sons, Ltd.
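The Delaunay-mesh statistics described above (mean triangle area, mean edge length, triangles per unit area) can be computed directly with SciPy; the nucleus positions below are synthetic placeholders rather than segmented epithelium.

```python
# Sketch: Delaunay triangulation over nucleus centres and the three mesh
# statistics named above (mean triangle area, mean edge length, triangles
# per unit area). Nucleus positions are synthetic placeholders.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)
centres = rng.uniform(0, 500, size=(120, 2))      # nucleus centres in pixels

tri = Delaunay(centres)
pts = centres[tri.simplices]                      # shape (n_triangles, 3, 2)

# Triangle areas via the shoelace formula.
a, b, c = pts[:, 0], pts[:, 1], pts[:, 2]
areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                     - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))

edges = np.concatenate([np.linalg.norm(b - a, axis=1),
                        np.linalg.norm(c - b, axis=1),
                        np.linalg.norm(a - c, axis=1)])

print(f"triangles: {len(areas)}")
print(f"mean triangle area: {areas.mean():.1f} px^2")
print(f"mean edge length:   {edges.mean():.1f} px")
print(f"triangles per unit area: {len(areas) / (500 * 500):.5f}")
```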
Jaiswara, Ranjana; Nandi, Diptarup; Balakrishnan, Rohini
2013-01-01
Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls in several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding the appropriate usage of these methods in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach we evaluated the optimal number of species and calling song characteristics for both the methods that lead to most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. Accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximum for 6-7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals. Our results also show that cluster analysis of acoustic signals in crickets works effectively for species classification and identification.
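A hedged sketch of the two statistical approaches compared above: supervised discriminant function analysis (here scikit-learn's linear discriminant analysis standing in for DFA) and unsupervised hierarchical clustering of call features. The feature names and synthetic data are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_species, calls_per_species = 6, 30
# Synthetic call features (e.g., carrier frequency, syllable duration, chirp rate).
X = np.vstack([rng.normal(loc=rng.uniform(0, 10, 3), scale=0.5,
                          size=(calls_per_species, 3)) for _ in range(n_species)])
y = np.repeat(np.arange(n_species), calls_per_species)

# Supervised: DFA-style classification using a priori species labels.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

# Unsupervised: cluster calls without labels, then cut the tree into n_species groups.
labels = fcluster(linkage(X, method="ward"), t=n_species, criterion="maxclust")
print(f"LDA cross-validated accuracy: {acc:.2f}, clusters found: {labels.max()}")
```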
Depeursinge, Adrien; Chin, Anne S.; Leung, Ann N.; Terrone, Donato; Bristow, Michael; Rosen, Glenn; Rubin, Daniel L.
2014-01-01
Objectives We propose a novel computational approach for the automated classification of classic versus atypical usual interstitial pneumonia (UIP). Materials and Methods 33 patients with UIP were enrolled in this study. They were classified as classic versus atypical UIP by a consensus of two thoracic radiologists with more than 15 years of experience using the American Thoracic Society evidence–based guidelines for CT diagnosis of UIP. Two cardiothoracic fellows with one year of subspecialty training provided independent readings. The system is based on regional characterization of the morphological tissue properties of lung using volumetric texture analysis of multiple detector CT images. A simple digital atlas with 36 lung subregions is used to locate texture properties, from which the responses of multi-directional Riesz wavelets are obtained. Machine learning is used to aggregate and to map the regional texture attributes to a simple score that can be used to stratify patients with UIP into classic and atypical subtypes. Results We compared the predictions based on regional volumetric texture analysis with the ground truth established by expert consensus. The area under the receiver operating characteristic curve of the proposed score was estimated to be 0.81 using a leave-one-patient-out cross-validation, with high specificity for classic UIP. The performance of our automated method was found to be similar to that of the two fellows and to the agreement between experienced chest radiologists reported in the literature. However, the errors of our method and the fellows occurred on different cases, which suggests that combining human and computerized evaluations may be synergistic. Conclusions Our results are encouraging and suggest that an automated system may be useful in routine clinical practice as a diagnostic aid for identifying patients with complex lung disease such as classic UIP, obviating the need for invasive surgical lung biopsy and its associated risks. PMID:25551822
Maeda, Yoshiaki; Dobashi, Hironori; Sugiyama, Yui; Saeki, Tatsuya; Lim, Tae-kyu; Harada, Manabu; Matsunaga, Tadashi; Yoshino, Tomoko
2017-01-01
Detection and identification of microbial species are crucial in a wide range of industries, including production of beverages, foods, cosmetics, and pharmaceuticals. Traditionally, colony formation and its morphological analysis (e.g., size, shape, and color) with the naked eye have been employed for this purpose. However, such a conventional method is time consuming, labor intensive, and not very reproducible. To overcome these problems, we propose a novel method that detects microcolonies (diameter 10–500 μm) using a lensless imaging system. When comparing colony images of five microorganisms from different genera (Escherichia coli, Salmonella enterica, Pseudomonas aeruginosa, Staphylococcus aureus, and Candida albicans), the images showed clearly different features. Being closely related species, St. aureus and St. epidermidis resembled each other, but the imaging analysis could extract substantial information (colony fingerprints) including morphological and physiological features, and linear discriminant analysis of the colony fingerprints distinguished these two species with 100% accuracy. Because this system may offer many advantages such as high-throughput testing, lower costs, more compact equipment, and ease of automation, it holds promise for microbial detection and identification in various academic and industrial areas. PMID:28369067
TIPS: A system for automated image-based phenotyping of maize tassels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gage, Joseph L.; Miller, Nathan D.; Spalding, Edgar P.
Here, the maize male inflorescence (tassel) produces pollen necessary for reproduction and commercial grain production of maize. The size of the tassel has been linked to factors affecting grain yield, so understanding the genetic control of tassel architecture is an important goal. Tassels are fragile and deform easily after removal from the plant, necessitating rapid measurement of any shape characteristics that cannot be retained during storage. Some morphological characteristics of tassels such as curvature and compactness are difficult to quantify using traditional methods, but can be quantified by image-based phenotyping tools. Lastly, these constraints necessitate the development of an efficient method for capturing natural-state tassel morphology and complementary automated analytical methods that can quickly and reproducibly quantify traits of interest such as height, spread, and branch number.
NASA Astrophysics Data System (ADS)
Singla, Neeru; Dubey, Kavita; Srivastava, Vishal; Ahmad, Azeem; Mehta, D. S.
2018-02-01
We developed an automated high-resolution full-field spatial coherence tomography (FF-SCT) microscope for quantitative phase imaging that is based on spatial, rather than temporal, coherence gating. Red and green laser light was used to obtain quantitative phase images of unstained human red blood cells (RBCs). This study uses morphological parameters extracted from the phase images of unstained RBCs to distinguish between normal and infected cells. We recorded a single interferogram with the FF-SCT microscope at each of the red and green wavelengths and averaged the two phase images to further reduce noise artifacts. To distinguish anemia-affected from normal cells, different morphological features were extracted, and these features were used to train a machine learning ensemble model that classifies RBCs with high accuracy.
[Macrocephalic spermatozoa. What would be the impact on reproduction?].
Guichaoua, M-R; Mercier, G; Geoffroy-Siraudin, C; Paulmyer-Lacroix, O; Lanteaume, A; Metzler-Guillemin, C; Perrin, J; Achard, V
2009-09-01
We want to highlight the risk of infertility and of failure of assisted reproductive technologies due to the presence of macrocephalic spermatozoa (MS) in semen at a rate equal to or greater than 20% in at least one semen analysis. We performed a retrospective analysis of 19 infertile patients presenting MS at average rates between 14.3% and 49.7%. For each patient, at least one semen analysis showed an MS rate equal to or greater than 20%. We performed an automated analysis of the spermatozoa surface for 13 patients and a detailed analysis of MS morphology in 18 patients. Thirteen couples underwent one or more IVF cycles with or without ICSI. The semen analysis showed an impairment of one or more sperm parameters in all patients. Three morphological forms of MS were highlighted: MS with irregular heads, MS with regular heads, and MS with multiple heads, with a predominance of irregular heads. The spermatozoa surface analysis showed a significant increase of the average surface and of the standard deviation (p<0.0001). The average pregnancy rate per transfer was lower than the usual rates in our laboratories (13% versus 28%). We want to alert biologists and clinicians to the existence of partial forms of this syndrome, which could be related to infertility with impaired sperm parameters and low pregnancy rates after IVF or ICSI.
NASA Astrophysics Data System (ADS)
Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin
2012-02-01
This work is part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge, such as papilloma, breast cancer and atypia. In this work, we investigate complex dependence among ductal components to perform segmentation, the first step for analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in the galactographic image. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy-connectedness, vesselness filter and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measurements. For the proposed approach, the true positive rate was higher and the false negative rate was significantly lower compared to other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support the analysis of the complex topology of the ducts and aid in the development of realistic breast anatomy phantoms.
Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo
2008-01-01
Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634
A novel method for automated assessment of megakaryocyte differentiation and proplatelet formation.
Salzmann, M; Hoesel, B; Haase, M; Mussbacher, M; Schrottmaier, W C; Kral-Pointner, J B; Finsterbusch, M; Mazharian, A; Assinger, A; Schmid, J A
2018-06-01
Transfusion of platelet concentrates represents an important treatment for various bleeding complications. However, the short half-life and frequent contaminations with bacteria restrict the availability of platelet concentrates and raise a clear demand for platelets generated ex vivo. Therefore, in vitro platelet generation from megakaryocytes represents an important research topic. A vital step in this process is the accurate analysis of thrombopoiesis and proplatelet formation, which is usually conducted manually. We aimed to develop a novel method for automated classification and analysis of proplatelet-forming megakaryocytes in vitro. After fluorescent labelling of the surface and nucleus, megakaryocytes (MKs) were automatically categorized and analysed with a novel pipeline of the open source software CellProfiler. Our new workflow is able to detect and quantify four subtypes of megakaryocytes undergoing thrombopoiesis: proplatelet-forming, spreading, pseudopodia-forming and terminally differentiated, anucleated megakaryocytes. Furthermore, we were able to characterize the inhibitory effect of dasatinib on thrombopoiesis in more detail. Our new workflow enabled rapid, unbiased, quantitative and qualitative in-depth analysis of proplatelet formation based on morphological characteristics. Clinicians and basic researchers alike will benefit from this novel technique that allows reliable and unbiased quantification of proplatelet formation. It thereby provides a valuable tool for the development of methods to generate platelets ex vivo and to detect effects of drugs on megakaryocyte differentiation.
Practical considerations of image analysis and quantification of signal transduction IHC staining.
Grunkin, Michael; Raundahl, Jakob; Foged, Niels T
2011-01-01
The dramatic increase in computer processing power in combination with the availability of high-quality digital cameras during the last 10 years has fertilized the grounds for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole slide imaging in both research and routine, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure including a lot of technical pitfalls leading to intra- and interlaboratory variability of its outcome. The resulting variations in staining intensity and disruption of original morphology are an extra challenge for the image analysis software, which therefore preferably should be dedicated to the detection and quantification of histomorphometrical end points.
Developments in the Implementation of Acoustic Droplet Ejection for Protein Crystallography.
Wu, Ping; Noland, Cameron; Ultsch, Mark; Edwards, Bonnie; Harris, David; Mayer, Robert; Harris, Seth F
2016-02-01
Acoustic droplet ejection (ADE) enables crystallization experiments at the low-nanoliter scale, resulting in rapid vapor diffusion equilibration dynamics and efficient reagent usage in the empirical discovery of structure-enabling protein crystallization conditions. We extend our validation of this technology applied to the diverse physicochemical property space of aqueous crystallization reagents where dynamic fluid analysis coupled to ADE aids in accurate and precise dispensations. Addition of crystallization seed stocks, chemical additives, or small-molecule ligands effectively modulates crystallization, and we here provide examples in optimization of crystal morphology and diffraction quality by the acoustic delivery of ultra-small volumes of these cofactors. Additional applications are discussed, including set up of in situ proteolysis and alternate geometries of crystallization that leverage the small scale afforded by acoustic delivery. Finally, we describe parameters of a system of automation in which the acoustic liquid handler is integrated with a robotic arm, plate centrifuge, peeler, sealer, and stacks, which allows unattended high-throughput crystallization experimentation. © 2015 Society for Laboratory Automation and Screening.
An Automated Classification Technique for Detecting Defects in Battery Cells
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2006-01-01
Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
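A hedged illustration, not NASA's classifier, of how spectral summary features could be derived from a two-dimensional FFT of an image patch before feeding a defect-versus-acceptable classifier; the two features shown are assumptions for demonstration only.

```python
import numpy as np

def fft_features(patch):
    """patch: 2D grayscale array. Returns a few spectral summary features."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    spectrum /= spectrum.sum()                       # normalize to a distribution
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)             # radial spatial frequency
    p = spectrum[spectrum > 0]
    return {
        "mean_radial_freq": float((r * spectrum).sum()),
        "spectral_entropy": float(-(p * np.log(p)).sum()),
    }

print(fft_features(np.random.default_rng(2).random((64, 64))))
```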
Preprocessing film-copied MRI for studying morphological brain changes.
Pham, Tuan D; Eisenblätter, Uwe; Baune, Bernhard T; Berger, Klaus
2009-06-15
Magnetic resonance imaging (MRI) of the brain is one of the important data items for studying memory and morbidity in the elderly, as these images can provide useful information through quantitative measures of various regions of interest of the brain. In an effort to fully automate the biomedical analysis of the brain, which can be combined with the genetic data of the same human population and where the records of the original MRI data are missing, this paper presents two effective methods for addressing this imaging problem. The first method handles the restoration of the film-copied MRI. The second method involves the segmentation of the image data. Experimental results and comparisons with other methods suggest the usefulness of the proposed image analysis methodology.
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at x 40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 x 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
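A minimal sketch, assuming scikit-image's GLCM utilities (graycomatrix in recent versions), of computing Haralick feature 4 (sum of squares: variance) on a 100 x 100 pixel subregion like those described above; the paper's classification rules are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix

def haralick_variance(subregion):
    """subregion: 2D uint8 grayscale array (e.g., a 100 x 100 pixel tile)."""
    glcm = graycomatrix(subregion, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)[:, :, 0, 0]
    i = np.arange(256)
    mu = (i[:, None] * glcm).sum()                  # mean gray level under the GLCM
    return float((((i[:, None] - mu) ** 2) * glcm).sum())

patch = (np.random.default_rng(3).random((100, 100)) * 255).astype(np.uint8)
print(haralick_variance(patch))
```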
Silva, Lara Rosana Vieira; Mizokami, Leila Lopes; Vieira, Paola Rabello; Kuckelhaus, Selma Aparecida Souza
2016-02-01
Dermatoglyphics are found in the thick skin of both hands and feet and make the identification process possible; however, morphological changes throughout life can affect identification in elderly individuals. Considering that dermatoglyphics is an important biometric method, being practical and inexpensive, this longitudinal, retrospective study aimed to evaluate the morphological variations in fingerprints obtained from men and women (n=20) during their adult and elderly stages of life; the time between obtaining the two fingerprints was 33.5±9.4 years. For the morphometric analysis, an area of 1 cm(2) was selected to quantify the visible friction ridges, minutiae, interpapillary and white lines, and later side-by-side confrontation was used to determine the identity of the individuals. Our results showed a reduction of friction ridges and an increase in the number of white lines for the group (men and women), and a decrease in the number of interpapillary lines in the group of women. They also showed that the selection of compatible fingerprints by the automated AFIS/VRP system allowed the identification of 23 individuals (57.5%), but when the identification was made by the automated AFIS/VRP system, followed by the analysis of archived patterns to eliminate incompatible fingerprints, determination of the identity of 28 individuals (70.0%) was possible. The dermatoglyphics of the elderly suffered morphometric changes that prevented the identification of 30% of them, probably due to the aging process, and this points to the importance of improving the methods of obtaining fingerprints to clarify issues related to the identification of the elderly. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kemper, Thomas; Gueguen, Lionel; Soille, Pierre
2012-06-01
The enumeration of the population remains a critical task in the management of refugee/IDP camps. Analysis of very high spatial resolution satellite data has proved to be an efficient and secure approach for the estimation of dwellings and the monitoring of the camp over time. In this paper we propose a new methodology for the automated extraction of features based on differential morphological decomposition segmentation for feature extraction and interactive training sample selection from the max-tree and min-tree structures. This feature extraction methodology is tested on a WorldView-2 scene of an IDP camp in Darfur, Sudan. Special emphasis is given to the additional available bands of the WorldView-2 sensor. The results obtained show that the interactive image information tool performs very well by tuning the feature extraction to the local conditions. The analysis of different spectral subsets shows that it is possible to obtain good results already with an RGB combination, but by increasing the number of spectral bands the detection of dwellings becomes more accurate. Best results were obtained using all eight bands of the WorldView-2 satellite.
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alternations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method in a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% of AUC in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
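A hedged sketch of a partial least squares (PLS) vessel-type classifier in the spirit of the method above, using scikit-learn; the feature set and labels are synthetic stand-ins for the paper's per-segment color and morphological features, not its actual data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 400
# Illustrative features standing in for normalized color, color variation, and
# multi-scale morphological measurements on each vessel segment.
X = rng.normal(size=(n, 6))
# Synthetic artery (1) / vein (0) labels loosely driven by two of the features.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
scores = pls.predict(X_te).ravel()    # continuous PLS output used as a class score
print("AUC:", roc_auc_score(y_te, scores))
```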
Guber, Alexander; Greif, Joel; Rona, Roni; Fireman, Elizabeth; Madi, Lea; Kaplan, Tal; Yemini, Zipi; Gottfried, Maya; Katz, Ruth L; Daniely, Michal
2010-10-25
Lung cancer results from a multistep process, whereby genetic and epigenetic alterations lead to a malignant phenotype. Somatic mutations, deletions, and amplifications can be detected in the tumor itself, but they can also be found in histologically normal bronchial epithelium as a result of field cancerization. The present feasibility study describes a computer-assisted analysis of induced sputum employing morphology and fluorescence in situ hybridization (target-FISH), using 2 biomarkers located at chromosomes 3p22.1 and 10q22.3. Induced sputum samples were collected using a standardized protocol from 12 patients with lung cancer and from 15 healthy, nonsmoking controls. We used an automated scanning system that allows consecutive scans of morphology and FISH of the same slide. Cells derived from the lower airways were analyzed for the presence of genetic alterations in the 3p22.1 and 10q22.3 loci. The cutoff for a positive diagnosis was defined as >4% of cells showing genetic alterations. Eleven of 12 lung cancer patients and 12 of 15 controls were identified correctly, giving an overall sensitivity and specificity of 91.66% and 80%, respectively. This study describes a new technology for detecting lung cancer noninvasively in induced sputum via a combination of morphology and FISH analysis (target-FISH) using computer-assisted technology. This approach may potentially be utilized for mass screening of high-risk populations. © 2010 American Cancer Society.
A Visual Galaxy Classification Interface and its Classroom Application
NASA Astrophysics Data System (ADS)
Kautsch, Stefan J.; Phung, Chau; VanHilst, Michael; Castro, Victor H
2014-06-01
Galaxy morphology is an important topic in modern astronomy to understand questions concerning the evolution and formation of galaxies and their dark matter content. In order to engage students in exploring galaxy morphology, we developed a web-based, graphical interface that allows students to visually classify galaxy images according to various morphological types. The website is designed with HTML5, JavaScript, PHP, and a MySQL database. The classification interface provides hands-on research experience and training for students and interested clients, and allows them to contribute to studies of galaxy morphology. We present the first results of a pilot study and compare the visually classified types using our interface with that from automated classification routines.
Automated surface photometry for the Coma Cluster galaxies: The catalog
NASA Technical Reports Server (NTRS)
Doi, M.; Fukugita, M.; Okamura, S.; Tarusawa, K.
1995-01-01
A homogeneous photometry catalog is presented for 450 galaxies with B(sub 25.5) less than or equal to 16 mag located in the 9.8 deg x 9.8 deg region centered on the Coma Cluster. The catalog is based on photographic photometry using automated surface photometry software for data reduction applied to B-band Schmidt plates. The catalog provides accurate positions, isophotal and total magnitudes, major and minor axes, and a few other photometric parameters including rudimentary morphology (early or late type).
An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*
Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.
2014-01-01
Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144
A Novel Method for Automation of 3D Hydro Break Line Generation from LiDAR Data Using MATLAB
NASA Astrophysics Data System (ADS)
Toscano, G. J.; Gopalam, U.; Devarajan, V.
2013-08-01
Water body detection is necessary to generate hydro break lines, which are in turn useful in creating deliverables such as TINs, contours, DEMs from LiDAR data. Hydro flattening follows the detection and delineation of water bodies (lakes, rivers, ponds, reservoirs, streams etc.) with hydro break lines. Manual hydro break line generation is time consuming and expensive. Accuracy and processing time depend on the number of vertices marked for delineation of break lines. Automation with minimal human intervention is desired for this operation. This paper proposes using a novel histogram analysis of LiDAR elevation data and LiDAR intensity data to automatically detect water bodies. Detection of water bodies using elevation information was verified by checking against LiDAR intensity data since the spectral reflectance of water bodies is very small compared with that of land and vegetation in near infra-red wavelength range. Detection of water bodies using LiDAR intensity data was also verified by checking against LiDAR elevation data. False detections were removed using morphological operations and 3D break lines were generated. Finally, a comparison of automatically generated break lines with their semi-automated/manual counterparts was performed to assess the accuracy of the proposed method and the results were discussed.
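A simplified sketch, assuming NumPy/SciPy, of the histogram-based water detection and morphological cleanup described above; the bin width and tolerance values are illustrative, and the intensity-based cross-check is omitted.

```python
import numpy as np
from scipy import ndimage

def detect_water_mask(dem, bin_width=0.25, tolerance=0.15):
    """dem: 2D array of gridded LiDAR elevations (water surfaces are locally flat)."""
    bins = np.arange(np.nanmin(dem), np.nanmax(dem) + bin_width, bin_width)
    hist, edges = np.histogram(dem[~np.isnan(dem)], bins=bins)
    peak = edges[np.argmax(hist)] + bin_width / 2      # dominant flat elevation
    mask = np.abs(dem - peak) < tolerance              # near-constant water surface
    mask = ndimage.binary_opening(mask, iterations=2)  # drop small false detections
    mask = ndimage.binary_closing(mask, iterations=2)  # fill small gaps
    return mask

rng = np.random.default_rng(5)
dem = 105.0 + rng.normal(0, 2.0, (200, 200))           # rough terrain
dem[:, :80] = 100.0 + rng.normal(0, 0.02, (200, 80))   # flat synthetic "lake"
print(detect_water_mask(dem).sum(), "water pixels detected")
```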
Fully automated chest wall line segmentation in breast MRI by using context information
NASA Astrophysics Data System (ADS)
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Localio, A. Russell; Schnall, Mitchell D.; Kontos, Despina
2012-03-01
Breast MRI has emerged as an effective modality for the clinical management of breast cancer. Evidence suggests that computer-aided applications can further improve the diagnostic accuracy of breast MRI. A critical and challenging first step for automated breast MRI analysis, is to separate the breast as an organ from the chest wall. Manual segmentation or user-assisted interactive tools are inefficient, tedious, and error-prone, which is prohibitively impractical for processing large amounts of data from clinical trials. To address this challenge, we developed a fully automated and robust computerized segmentation method that intensively utilizes context information of breast MR imaging and the breast tissue's morphological characteristics to accurately delineate the breast and chest wall boundary. A critical component is the joint application of anisotropic diffusion and bilateral image filtering to enhance the edge that corresponds to the chest wall line (CWL) and to reduce the effect of adjacent non-CWL tissues. A CWL voting algorithm is proposed based on CWL candidates yielded from multiple sequential MRI slices, in which a CWL representative is generated and used through a dynamic time warping (DTW) algorithm to filter out inferior candidates, leaving the optimal one. Our method is validated by a representative dataset of 20 3D unilateral breast MRI scans that span the full range of the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) fibroglandular density categorization. A promising performance (average overlay percentage of 89.33%) is observed when the automated segmentation is compared to manually segmented ground truth obtained by an experienced breast imaging radiologist. The automated method runs time-efficiently at ~3 minutes for each breast MR image set (28 slices).
Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi
2018-06-05
Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
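A minimal sketch, assuming TensorFlow/Keras, of a small convolutional classifier of the kind described above, predicting from phase-contrast morphology alone whether an image patch contains endothelial cells; it is not the authors' architecture, and the patch size and layer sizes are assumptions.

```python
import tensorflow as tf

def build_model(input_shape=(128, 128, 1)):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),               # grayscale phase-contrast patch
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(endothelial), label from CD31
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_model()
model.summary()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```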
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud
2012-11-01
We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.
NASA Astrophysics Data System (ADS)
Kemper, Björn; Lenz, Philipp; Bettenworth, Dominik; Krausewitz, Philipp; Domagk, Dirk; Ketelhut, Steffi
2015-05-01
Digital holographic microscopy (DHM) has been demonstrated to be a versatile tool for high resolution non-destructive quantitative phase imaging of surfaces and multi-modal minimally-invasive monitoring of living cell cultures in-vitro. DHM provides quantitative monitoring of physiological processes through functional imaging and structural analysis which, for example, gives new insight into signalling of cellular water permeability and cell morphology changes due to toxins and infections. Quantitative DHM phase contrast also opens up application prospects in the analysis of dissected tissues through stain-free imaging and the quantification of tissue density changes. We show that DHM allows imaging of different tissue layers with high contrast in unstained tissue sections. As the investigation of fixed samples represents a very important application field in pathology, we also analyzed the influence of the sample preparation. The retrieved data demonstrate that the quality of quantitative DHM phase images of dissected tissues depends strongly on the fixing method and common staining agents. As in DHM the reconstruction is performed numerically, multi-focus imaging is achieved from a single digital hologram. Thus, we evaluated the automated refocussing feature of DHM for application on different types of dissected tissues and found that on moderately stained samples highly reproducible holographic autofocussing can be achieved. Finally, it is demonstrated that alterations of the spatial refractive index distribution in murine and human tissue samples represent a reliable absolute parameter that is related to different degrees of inflammation in experimental colitis and Crohn's disease. This paves the way towards the use of DHM in digital pathology for automated histological examinations and further studies to elucidate the translational potential of quantitative phase microscopy for the clinical management of patients, e.g., with inflammatory bowel disease.
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structure. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance with segmentation accuracy superior to most of state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Automated delineation and characterization of drumlins using a localized contour tree approach
NASA Astrophysics Data System (ADS)
Wang, Shujie; Wu, Qiusheng; Ward, Dylan
2017-10-01
Drumlins are ubiquitous landforms in previously glaciated regions, formed through a series of complex subglacial processes operating underneath the paleo-ice sheets. Accurate delineation and characterization of drumlins are essential for understanding the formation mechanism of drumlins as well as the flow behaviors and basal conditions of paleo-ice sheets. Automated mapping of drumlins is particularly important for examining the distribution patterns of drumlins across large spatial scales. This paper presents an automated vector-based approach to mapping drumlins from high-resolution light detection and ranging (LiDAR) data. The rationale is to extract a set of concentric contours by building localized contour trees and establishing topological relationships. This automated method can overcome the shortcomings of previously manual and automated methods for mapping drumlins, for instance, the azimuthal biases during the generation of shaded relief images. A case study was carried out over a portion of the New York Drumlin Field. Overall 1181 drumlins were identified from the LiDAR-derived DEM across the study region, which had been underestimated in previous literature. The delineation results were visually and statistically compared to the manual digitization results. The morphology of drumlins was characterized by quantifying the length, width, elongation ratio, height, area, and volume. Statistical and spatial analyses were conducted to examine the distribution pattern and spatial variability of drumlin size and form. The drumlins and the morphologic characteristics exhibit significant spatial clustering rather than randomly distributed patterns. The form of drumlins varies from ovoid to spindle shapes towards the downstream direction of paleo ice flows, along with the decrease in width, area, and volume. This observation is in line with previous studies, which may be explained by the variations in sediment thickness and/or the velocity increases of ice flows towards ice front.
Estimating ankle rotational constraints from anatomic structure
NASA Astrophysics Data System (ADS)
Baker, H. H.; Bruckner, Janice S.; Langdon, John H.
1992-09-01
Three-dimensional biomedical data obtained through tomography provide exceptional views of biological anatomy. While visualization is one of the primary purposes for obtaining these data, other more quantitative and analytic uses are possible. These include modeling of tissue properties and interrelationships, simulation of physical processes, interactive surgical investigation, and analysis of kinematics and dynamics. As an application of our research in modeling tissue structure and function, we have been working to develop interactive and automated tools for studying joint geometry and kinematics. We focus here on discrimination of morphological variations in the foot and determining the implications of these on both hominid bipedal evolution and physical therapy treatment for foot disorders.
Franco, A; Willems, G; Couto Souza, P H; Coucke, W; Thevissen, P
2016-07-01
The number of teeth involved in cases of bite-mark analysis is generally fewer in comparison to the number of teeth available for cases of dental identification. This decreases the amount of information available and can hamper the distinction between bite suspects. The opposite is true in cases of dental identification, and the assumption is that more teeth contribute to a higher degree of specificity and the possibility of identification in these cases. Despite being broadly accepted in forensic dentistry, this hypothesis has never been scientifically tested. The present study aims to assess the impact of the quantity of teeth or tooth parts on morphological differences in twin dentitions. A sample of 344 dental casts collected from 86 pairs of twins was used. The dental casts were digitized using an automated motion device (XCAD 3D®, XCADCAM Technology®, São Paulo, SP, Brazil) and were imported as three-dimensional dental model images (3D-DMI) into the Geomagic Studio® (3D Systems®, Rock Hill, SC, USA) software package. Subsamples were established based on the quantity of teeth and tooth parts studied. Pairwise morphological comparisons between the corresponding twin siblings were established and quantified. Increasing the quantity of teeth and tooth parts resulted in an increase of morphological difference between twin dentitions. More evident differences were observed comparing anterior vs. entire dentitions (p < 0.05) and complete vs. partial anterior dentitions (p < 0.05). Dental identifications and bite-mark analysis must include all the possibly related dental information to reach optimal comparison outcomes.
Development of Automated Image Analysis Software for Suspended Marine Particle Classification
2003-09-30
Samson, Scott; Center for Ocean Technology. The objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images.
NASA Astrophysics Data System (ADS)
Garcia-Allende, P. Beatriz; Amygdalos, Iakovos; Dhanapala, Hiruni; Goldin, Robert D.; Hanna, George B.; Elson, Daniel S.
2012-01-01
Computer-aided diagnosis of ophthalmic diseases using optical coherence tomography (OCT) relies on the extraction of thickness and size measures from the OCT images, but such defined layers are usually not observed in emerging OCT applications aimed at "optical biopsy" such as pulmonology or gastroenterology. Mathematical methods such as Principal Component Analysis (PCA) or textural analyses including both spatial textural analysis derived from the two-dimensional discrete Fourier transform (DFT) and statistical texture analysis obtained independently from center-symmetric auto-correlation (CSAC) and spatial grey-level dependency matrices (SGLDM), as well as, quantitative measurements of the attenuation coefficient have been previously proposed to overcome this problem. We recently proposed an alternative approach consisting of a region segmentation according to the intensity variation along the vertical axis and a pure statistical technology for feature quantification. OCT images were first segmented in the axial direction in an automated manner according to intensity. Afterwards, a morphological analysis of the segmented OCT images was employed for quantifying the features that served for tissue classification. In this study, a PCA processing of the extracted features is accomplished to combine their discriminative power in a lower number of dimensions. Ready discrimination of gastrointestinal surgical specimens is attained demonstrating that the approach further surpasses the algorithms previously reported and is feasible for tissue classification in the clinical setting.
Bass, Ellen J; Baumgart, Leigh A; Shepley, Kathryn Klein
2013-03-01
Displaying both the strategy that information analysis automation employs to make its judgments and variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing either information analysis automation strategy information, task environment information, or both, on human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance.
Evaluation of anemia diagnosis based on elastic light scattering (Conference Presentation)
NASA Astrophysics Data System (ADS)
Tong, Lieshu; Wang, Xinrui; Xie, Dengling; Chen, Xiaoya; Chu, Kaiqin; Dou, Hu; Smith, Zachary J.
2017-03-01
Currently, one-third of humanity is still suffering from anemia. In China the most common forms of anemia are iron deficiency and Thalassemia minor. Differentiating these two is the key to effective treatment. Iron deficiency is caused by malnutrition and can be cured by iron supplementation. Thalassemia is a hereditary disease in which the hemoglobin β chain is lowered or absent. Iron therapy is not effective, and there is evidence that iron therapy may be harmful to patients with Thalassemia. Both anemias can be diagnosed using red blood cell morphology: Iron deficiency presents a smaller mean cell volume compared to normal cells, but with a wide distribution; Thalassemia, meanwhile, presents a very small cell size and tight particle size distribution. Several researchers have proposed diagnostic indices based on red cell morphology to differentiate these two diseases. However, these indices lack sensitivity and specificity and are constructed without statistical rigor. Using multivariate methods we demonstrate a new classification method based on red cell morphology that diagnoses anemia in a Chinese population with enough accuracy for its use as a screening method. We further demonstrate a low cost instrument that precisely measures red cell morphology using elastic light scattering. This instrument is combined with an automated analysis program that processes scattering data to report red cell morphology without the need for user intervention. Despite using consumer-grade components, when comparing our experimental results with gold-standard measurements, the device can still achieve the high precision required for sensing clinically significant changes in red cell morphology.
Bray, Mark-Anthony; Gustafsdottir, Sigrun M; Rohban, Mohammad H; Singh, Shantanu; Ljosa, Vebjorn; Sokolnicki, Katherine L; Bittker, Joshua A; Bodycombe, Nicole E; Dančík, Vlado; Hasaka, Thomas P; Hon, Cindy S; Kemp, Melissa M; Li, Kejie; Walpita, Deepika; Wawer, Mathias J; Golub, Todd R; Schreiber, Stuart L; Clemons, Paul A; Shamji, Alykhan F
2017-01-01
Abstract Background Large-scale image sets acquired by automated microscopy of perturbed samples enable a detailed comparison of cell states induced by each perturbation, such as a small molecule from a diverse library. Highly multiplexed measurements of cellular morphology can be extracted from each image and subsequently mined for a number of applications. Findings This microscopy dataset includes 919 265 five-channel fields of view, representing 30 616 tested compounds, available at “The Cell Image Library” (CIL) repository. It also includes data files containing morphological features derived from each cell in each image, both at the single-cell level and population-averaged (i.e., per-well) level; the image analysis workflows that generated the morphological features are also provided. Quality-control metrics are provided as metadata, indicating fields of view that are out-of-focus or containing highly fluorescent material or debris. Lastly, chemical annotations are supplied for the compound treatments applied. Conclusions Because computational algorithms and methods for handling single-cell morphological measurements are not yet routine, the dataset serves as a useful resource for the wider scientific community applying morphological (image-based) profiling. The dataset can be mined for many purposes, including small-molecule library enrichment and chemical mechanism-of-action studies, such as target identification. Integration with genetically perturbed datasets could enable identification of small-molecule mimetics of particular disease- or gene-related phenotypes that could be useful as probes or potential starting points for development of future therapeutics. PMID:28327978
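A hedged sketch, assuming pandas, of aggregating single-cell morphological measurements into population-averaged (per-well) profiles joined with chemical annotations, as described for this dataset; the column names, wells, and compounds are illustrative, not the dataset's actual schema.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
cells = pd.DataFrame({
    "plate": "P1",
    "well": rng.choice(["A01", "A02", "B01"], size=300),
    "cell_area": rng.normal(450, 60, 300),
    "nucleus_intensity": rng.normal(0.8, 0.1, 300),
})
annotations = pd.DataFrame({"well": ["A01", "A02", "B01"],
                            "compound": ["DMSO", "cmpd_1", "cmpd_2"]})
cells = cells.merge(annotations, on="well")

# Population-averaged (per-well) profile: median of each feature per well.
profiles = (cells.groupby(["plate", "well", "compound"])
                 .median(numeric_only=True)
                 .reset_index())
print(profiles)
```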
BahadarKhan, Khan; A Khaliq, Amir; Shahid, Muhammad
2016-01-01
Diabetic retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally inexpensive, unsupervised, automated technique with promising results for detection of the retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and to remove low frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps have been used to eliminate the unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with ground truth data that has been precisely marked by experts. PMID:27441646
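A simplified sketch, assuming scikit-image, of the general pipeline described above: CLAHE enhancement, Hessian/eigenvalue-based vesselness filtering at two scales (here the standard Frangi filter stands in for the paper's modified approach), per-scale Otsu thresholding, and morphological cleanup. It is not the authors' exact method.

```python
import numpy as np
from skimage import exposure, filters, morphology

def segment_vessels(green_channel):
    """green_channel: 2D float array in [0, 1] (green channel of a fundus image)."""
    enhanced = exposure.equalize_adapthist(green_channel)        # CLAHE
    thin = filters.frangi(enhanced, sigmas=[1, 2])               # thin-vessel response
    wide = filters.frangi(enhanced, sigmas=[3, 5])               # wide-vessel response
    mask = np.zeros_like(green_channel, dtype=bool)
    for response in (thin, wide):
        mask |= response > filters.threshold_otsu(response)      # per-scale Otsu
    return morphology.remove_small_objects(mask, min_size=50)    # postprocessing

img = np.random.default_rng(7).random((128, 128))
print(segment_vessels(img).sum(), "vessel pixels")
```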
Callegaro, Giulia; Corvi, Raffaella; Salovaara, Susan; Urani, Chiara; Stefanini, Federico M
2017-06-01
Cell Transformation Assays (CTAs) have long been proposed for the identification of chemical carcinogenicity potential. The endpoint of these in vitro assays is represented by the phenotypic alterations in cultured cells, which are characterized by the change from the non-transformed to the transformed phenotype. Despite the wide fields of application and the numerous advantages of CTAs, their use in regulatory toxicology has been limited in part due to concerns about the subjective nature of visual scoring, i.e. the step in which transformed colonies or foci are evaluated through morphological features. An objective evaluation of morphological features has been previously obtained through automated digital processing of foci images to extract the value of three statistical image descriptors. In this study a further potential of the CTA using BALB/c 3T3 cells is addressed by analysing the effect of increasing concentrations of two known carcinogens, benzo[a]pyrene and NiCl2, with different modes of action on foci morphology. The main result of our quantitative evaluation shows that the concentration of the considered carcinogens has an effect on foci morphology that is statistically significant for the mean of two among the three selected descriptors. Statistical significance also corresponds to visual relevance. The statistical analysis of variations in foci morphology due to concentration made it possible to quantify morphological changes that can be visually appreciated but not precisely determined. Therefore, it has the potential of providing new quantitative parameters in CTAs, and of exploiting all the information encoded in foci. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Sopharak, Akara; Uyyanonvara, Bunyarit; Barman, Sarah; Williamson, Thomas
To prevent blindness from diabetic retinopathy, periodic screening and early diagnosis are necessary. Due to the lack of expert ophthalmologists in rural areas, automated early detection of exudates (one of the visible signs of diabetic retinopathy) could help to reduce the incidence of blindness in diabetic patients. Traditional automatic exudate detection methods are based on specific parameter configurations, while machine learning approaches, which seem more flexible, may be computationally costly. A comparative analysis of traditional and machine learning approaches to exudate detection, namely mathematical morphology, fuzzy c-means clustering, a naive Bayesian classifier, a Support Vector Machine and a Nearest Neighbor classifier, is presented. Detected exudates are validated against expert ophthalmologists' hand-drawn ground truths. The sensitivity, specificity, precision, accuracy and time complexity of each method are also compared.
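As an illustration of the kind of classifier comparison reported, the sketch below cross-validates a naive Bayes classifier, an SVM and a nearest-neighbour classifier on synthetic per-pixel features; the feature values are invented and do not correspond to the authors' fundus data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic per-pixel features (e.g. intensity, local contrast, hue) for
# exudate vs. background classes; purely illustrative, not real fundus data.
n = 500
exudate = rng.normal(loc=[0.9, 0.6, 0.2], scale=0.1, size=(n, 3))
background = rng.normal(loc=[0.4, 0.3, 0.5], scale=0.15, size=(n, 3))
X = np.vstack([exudate, background])
y = np.array([1] * n + [0] * n)

for name, clf in [
    ("naive Bayes", GaussianNB()),
    ("SVM (RBF)", SVC(kernel="rbf", gamma="scale")),
    ("nearest neighbour", KNeighborsClassifier(n_neighbors=5)),
]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```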
Cryo-imaging in a toxicological study on mouse fetuses
NASA Astrophysics Data System (ADS)
Roy, Debashish; Gargesha, Madhusudhana; Sloter, Eddie; Watanabe, Michiko; Wilson, David
2010-03-01
We applied the Case cryo-imaging system to detect signals of developmental toxicity in transgenic mouse fetuses resulting from maternal exposure to a developmental environmental toxicant (2,3,7,8-tetrachlorodibenzo-p-dioxin, TCDD). We utilized a fluorescent transgenic mouse model that expresses Green Fluorescent Protein (GFP) exclusively in smooth muscles under the control of the smooth muscle gamma actin (SMGA) promoter (SMGA/EGFP mice kindly provided by J. Lessard, U. Cincinnati). Analysis of cryo-image data volumes, comprising very high-resolution anatomical brightfield and molecular fluorescence block-face images, revealed qualitative and quantitative morphological differences in control versus exposed fetuses. Fetuses randomly chosen from pregnant females euthanized on gestation day (GD) 18 were either manually examined or cryo-imaged. For cryo-imaging, fetuses were embedded, frozen and cryo-sectioned at 20 μm thickness, and brightfield color and fluorescent block-face images were acquired with an in-plane resolution of ~15 μm. Automated 3D volume visualization schemes segmented out the black embedding medium and blended fluorescence and brightfield data to produce 3D reconstructions of all fetuses. Comparison of the treatment groups (TCDD GD13 and TCDD GD14) and controls through automated analysis tools highlighted differences not observable by prosectors performing traditional fresh dissection. For example, severe hydronephrosis, suggestive of irreversible kidney damage, was detected by cryo-imaging in fetuses exposed to TCDD. Automated quantification of total fluorescence in smooth muscles revealed suppressed fluorescence in TCDD-exposed fetuses. This application demonstrated that cryo-imaging can be utilized as a routine high-throughput screening tool to assess the effects of potential toxins on the developmental biology of small animals.
Automated measurement of cell motility and proliferation
Bahnson, Alfred; Athanassiou, Charalambos; Koebler, Douglas; Qian, Lei; Shun, Tongying; Shields, Donna; Yu, Hui; Wang, Hong; Goff, Julie; Cheng, Tao; Houck, Raymond; Cowsert, Lex
2005-01-01
Background Time-lapse microscopic imaging provides a powerful approach for following changes in cell phenotype over time. Visible responses of whole cells can yield insight into functional changes that underlie physiological processes in health and disease. For example, features of cell motility accompany molecular changes that are central to the immune response, to carcinogenesis and metastasis, to wound healing and tissue regeneration, and to the myriad developmental processes that generate an organism. Previously reported image processing methods for motility analysis required custom viewing devices and manual interactions that may introduce bias, that slow throughput, and that constrain the scope of experiments in terms of the number of treatment variables, time period of observation, replication and statistical options. Here we describe a fully automated system in which images are acquired 24/7 from 384 well plates and are automatically processed to yield high-content motility and morphological data. Results We have applied this technology to study the effects of different extracellular matrix compounds on human osteoblast-like cell lines to explore functional changes that may underlie processes involved in bone formation and maintenance. We show dose-response and kinetic data for induction of increased motility by laminin and collagen type I without significant effects on growth rate. Differential motility response was evident within 4 hours of plating cells; long-term responses differed depending upon cell type and surface coating. Average velocities were increased approximately 0.1 um/min by ten-fold increases in laminin coating concentration in some cases. Comparison with manual tracking demonstrated the accuracy of the automated method and highlighted the comparative imprecision of human tracking for analysis of cell motility data. Quality statistics are reported that associate with stage noise, interference by non-cell objects, and uncertainty in the outlining and positioning of cells by automated image analysis. Exponential growth, as monitored by total cell area, did not linearly correlate with absolute cell number, but proved valuable for selection of reliable tracking data and for disclosing between-experiment variations in cell growth. Conclusion These results demonstrate the applicability of a system that uses fully automated image acquisition and analysis to study cell motility and growth. Cellular motility response is determined in an unbiased and comparatively high throughput manner. Abundant ancillary data provide opportunities for uniform filtering according to criteria that select for biological relevance and for providing insight into features of system performance. Data quality measures have been developed that can serve as a basis for the design and quality control of experiments that are facilitated by automation and the 384 well plate format. This system is applicable to large-scale studies such as drug screening and research into effects of complex combinations of factors and matrices on cell phenotype. PMID:15831094
Xing, Fuyong; Yang, Lin
2016-01-01
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to inter-observer variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in the recent literature. Within the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role in describing the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast (DIC), fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation. PMID:26742143
Paveley, Ross A.; Mansour, Nuha R.; Hallyburton, Irene; Bleicher, Leo S.; Benn, Alex E.; Mikic, Ivana; Guidi, Alessandra; Gilbert, Ian H.; Hopkins, Andrew L.; Bickle, Quentin D.
2012-01-01
Sole reliance on one drug, Praziquantel, for treatment and control of schistosomiasis raises concerns about development of widespread resistance, prompting renewed interest in the discovery of new anthelmintics. To discover new leads we designed an automated label-free, high content-based, high throughput screen (HTS) to assess drug-induced effects on in vitro cultured larvae (schistosomula) using bright-field imaging. Automatic image analysis and Bayesian prediction models define morphological damage, hit/non-hit prediction and larval phenotype characterization. Motility was also assessed from time-lapse images. In screening a 10,041 compound library the HTS correctly detected 99.8% of the hits scored visually. A proportion of these larval hits were also active in an adult worm ex-vivo screen and are the subject of ongoing studies. The method allows, for the first time, screening of large compound collections against schistosomes and the methods are adaptable to other whole organism and cell-based screening by morphology and motility phenotyping. PMID:22860151
Song, Jie; Xiao, Liang; Lian, Zhichao
2017-03-01
This paper presents a novel method for automated morphology delineation and analysis of cell nuclei in histopathology images. Combining the initial segmentation information and concavity measurement, the proposed method first segments clusters of nuclei into individual pieces, avoiding segmentation errors introduced by the scale-constrained Laplacian-of-Gaussian filtering. After that, a nuclear boundary-to-marker evidence computation is introduced to delineate individual objects following the refined segmentation process. The obtained evidence set is then modeled by periodic B-splines with the minimum description length principle, which achieves a practical compromise between the complexity of the nuclear structure and its coverage of the fluorescence signal to avoid underfitting and overfitting. The algorithm is computationally efficient and has been tested on a synthetic database as well as on 45 real histopathology images. Experimental results comparing the proposed method with several state-of-the-art methods show its superior recognition performance and indicate the potential applications of analyzing the intrinsic features of nuclei morphology.
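For context, the scale-constrained Laplacian-of-Gaussian filtering mentioned above is closely related to standard multi-scale LoG blob detection, which the sketch below demonstrates with scikit-image on a bundled microscopy image; the sigma range and threshold are illustrative, and this baseline does not include the paper's concavity-based splitting or B-spline/MDL modelling.

```python
from skimage import data, feature

# Microscopy image of nuclei (grayscale sample bundled with scikit-image).
image = data.human_mitosis()

# Multi-scale Laplacian-of-Gaussian blob detection; sigma range and threshold
# are illustrative and would need tuning for real histopathology images.
blobs = feature.blob_log(image, min_sigma=2, max_sigma=8, num_sigma=7, threshold=0.05)
print(f"detected {len(blobs)} candidate nuclei; first rows (y, x, sigma):")
print(blobs[:5])
```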
Optoelectronic image processing for cervical cancer screening
NASA Astrophysics Data System (ADS)
Narayanswamy, Ramkumar; Sharpe, John P.; Johnson, Kristina M.
1994-05-01
Automation of the Pap-smear cervical screening method is highly desirable as it relieves tedium for the human operators, reduces cost, and should increase accuracy and provide repeatability. We present here the design for a high-throughput optoelectronic system which forms the first stage of a two-stage system to automate Pap-smear screening. We use a mathematical morphological technique called the hit-or-miss transform to identify the suspicious areas on a Pap-smear slide. This algorithm is implemented using a VanderLugt architecture and a time-sequential ANDing smart pixel array.
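The hit-or-miss transform used here has a direct digital analogue. The sketch below demonstrates it with SciPy on a toy binary image, locating isolated 3x3 squares; the optical VanderLugt implementation is, of course, outside the scope of a code sketch.

```python
import numpy as np
from scipy import ndimage

# Small binary image containing two isolated 3x3 squares and a larger block.
image = np.zeros((12, 12), dtype=int)
image[1:4, 1:4] = 1      # isolated 3x3 square
image[7:10, 2:5] = 1     # another isolated 3x3 square
image[5:10, 7:12] = 1    # larger block (should not match)

# structure1 must fit inside the object and structure2 must fit the background
# around it, so the pair together matches exactly a 3x3 square surrounded by a
# one-pixel background border.
hit = np.ones((3, 3), dtype=int)
miss = np.pad(np.zeros((3, 3), dtype=int), 1, constant_values=1)

matches = ndimage.binary_hit_or_miss(image, structure1=hit, structure2=miss)
print(np.argwhere(matches))  # centres of the isolated 3x3 squares
```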
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
Summary With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today’s single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
Bass, Ellen J.; Baumgart, Leigh A.; Shepley, Kathryn Klein
2014-01-01
Displaying both the strategy that information analysis automation employs to make its judgments and variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing either information analysis automation strategy information, task environment information, or both, on human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance. PMID:24847184
Peter Vogt; Kurt H. Riitters; Marcin Iwanowski; Christine Estreguil; Jacek Kozak; Pierre Soille
2007-01-01
Corridors are important geographic features for biological conservation and biodiversity assessment. The identification and mapping of corridors is usually based on visual interpretations of movement patterns (functional corridors) or habitat maps (structural corridors). We present a method for automated corridor mapping with morphological image processing, and...
2014-01-01
Background The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher order understanding. Multidimensional and shape-based observational data of regenerative biology presents a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism. Results We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB. Conclusion The work presented here, including our algorithm for converting cell-based models into graphs for comparison with data stored in an external data repository, has made feasible the automated development, training, and validation of computational models using morphology-based data. This work is part of an ongoing project to automate the search process, which will greatly expand our ability to identify, consider, and test biological mechanisms in the field of regenerative biology. PMID:24917489
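The graph-edit-distance comparison at the heart of the fitness function can be illustrated with networkx on toy morphology graphs; the node labels and cost function below are invented for the example, and PlanformDB's actual graph encoding is considerably richer.

```python
import networkx as nx

# Toy morphology graphs: nodes are body regions, edges encode adjacency.
target = nx.Graph([("head", "trunk"), ("trunk", "tail")])
simulated = nx.Graph([("head", "trunk"), ("trunk", "blastema")])

# Attach the region name as a node attribute so the cost function can see it.
for g in (target, simulated):
    nx.set_node_attributes(g, {n: n for n in g.nodes}, "label")

# Relabelling a region costs as much as one edit, so the mismatched
# "tail"/"blastema" node contributes to the distance.
def node_subst_cost(a, b):
    return 0.0 if a["label"] == b["label"] else 1.0

distance = nx.graph_edit_distance(target, simulated, node_subst_cost=node_subst_cost)
print("graph edit distance:", distance)  # 1.0: one region relabelled

# A fitness function for the evolutionary search could simply penalise the
# distance, e.g. fitness = 1.0 / (1.0 + distance).
```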
Comparability of automated human induced pluripotent stem cell culture: a pilot study.
Archibald, Peter R T; Chandra, Amit; Thomas, Dave; Chose, Olivier; Massouridès, Emmanuelle; Laâbi, Yacine; Williams, David J
2016-12-01
Consistent and robust manufacturing is essential for the translation of cell therapies, and the utilisation of automation throughout the manufacturing process may allow for improvements in quality control, scalability, reproducibility and economics of the process. The aim of this study was to measure and establish the comparability between alternative process steps for the culture of hiPSCs. Consequently, the effects of manual centrifugation and automated non-centrifugation process steps, performed using TAP Biosystems' CompacT SelecT automated cell culture platform, upon the culture of a human induced pluripotent stem cell (hiPSC) line (VAX001024c07) were compared. This study has demonstrated that comparable morphologies and cell diameters were observed in hiPSCs cultured using either manual or automated process steps. However, non-centrifugation hiPSC populations exhibited greater cell yields, greater aggregate rates, increased pluripotency marker expression, and decreased differentiation marker expression compared to centrifugation hiPSCs. A trend for decreased variability in cell yield was also observed after the utilisation of the automated process step. This study also highlights the detrimental effect of the cryopreservation and thawing processes upon the growth and characteristics of hiPSC cultures, and demonstrates that automated hiPSC manufacturing protocols can be successfully transferred between independent laboratories.
Anderson, Courtney M.; Zhang, Bingqing; Miller, Melanie; Butko, Emerald; Wu, Xingyong; Laver, Thomas; Kernag, Casey; Kim, Jeffrey; Luo, Yuling; Lamparski, Henry; Park, Emily; Su, Nan
2016-01-01
ABSTRACT Biomarkers such as DNA, RNA, and protein are powerful tools in clinical diagnostics and therapeutic development for many diseases. Identifying RNA expression at the single cell level within the morphological context by RNA in situ hybridization provides a great deal of information on gene expression changes over conventional techniques that analyze bulk tissue, yet widespread use of this technique in the clinical setting has been hampered by the dearth of automated RNA ISH assays. Here we present an automated version of the RNA ISH technology RNAscope that is adaptable to multiple automation platforms. The automated RNAscope assay yields a high signal‐to‐noise ratio with little to no background staining and results comparable to the manual assay. In addition, the automated duplex RNAscope assay was able to detect two biomarkers simultaneously. Lastly, assay consistency and reproducibility were confirmed by quantification of TATA‐box binding protein (TBP) mRNA signals across multiple lots and multiple experiments. Taken together, the data presented in this study demonstrate that the automated RNAscope technology is a high performance RNA ISH assay with broad applicability in biomarker research and diagnostic assay development. J. Cell. Biochem. 117: 2201–2208, 2016. © 2016 The Authors. Journal of Cellular Biochemistry Published by Wiley Periodicals, Inc. PMID:27191821
Chen, C; Li, H; Zhou, X; Wong, S T C
2008-05-01
Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the image in order to extract long and thin protrusions on the spiky cells. Then, a constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed against the ground truth, that is, manual labelling by experts on RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. The positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
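The nucleus-seeded cytoplasm segmentation step follows a pattern that can be sketched with a standard seeded watershed in scikit-image, shown below on a synthetic two-cell image; the paper's scale-adaptive steerable filter and GCBAC refinement are not reproduced, and all thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, gaussian
from skimage.segmentation import watershed

rng = np.random.default_rng(1)

# Synthetic two-cell image: bright nuclei on dimmer cytoplasm, plus noise.
image = np.zeros((100, 100))
yy, xx = np.mgrid[:100, :100]
for cy, cx in [(35, 35), (65, 62)]:
    image += 0.5 * ((yy - cy) ** 2 + (xx - cx) ** 2 < 20 ** 2)   # cytoplasm
    image += 0.5 * ((yy - cy) ** 2 + (xx - cx) ** 2 < 7 ** 2)    # nucleus
image = gaussian(image + 0.05 * rng.standard_normal(image.shape), sigma=1)

# 1) Nuclei: threshold the brightest regions and label them as seeds.
nuclei = image > 0.75
markers, _ = ndi.label(nuclei)

# 2) Cytoplasm: watershed on the inverted intensity, restricted to the
#    foreground mask, grows each nucleus label out to the cell boundary.
cell_mask = image > threshold_otsu(image)
cells = watershed(-image, markers=markers, mask=cell_mask)
print("segmented cells:", cells.max())
```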
Yamamoto, Yuta; Iriyama, Yasutoshi; Muto, Shunsuke
2016-04-01
In this article, we propose a smart image-analysis method suitable for extracting target features with hierarchical dimension from original data. The method was applied to three-dimensional volume data of an all-solid lithium-ion battery obtained by the automated sequential sample milling and imaging process using a focused ion beam/scanning electron microscope to investigate the spatial configuration of voids inside the battery. To automatically fully extract the shape and location of the voids, three types of filters were consecutively applied: a median blur filter to extract relatively larger voids, a morphological opening operation filter for small dot-shaped voids and a morphological closing operation filter for small voids with concave contrasts. Three data cubes separately processed by the above-mentioned filters were integrated by a union operation to the final unified volume data, which confirmed the correct extraction of the voids over the entire dimension contained in the original data. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
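A 2D sketch of the filter-and-union scheme is given below using SciPy, with a synthetic slice standing in for one FIB/SEM section; the filter sizes, grey values and thresholds are invented, and the real workflow operates on the full 3D volume.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(2)

# Synthetic 2D slice: dark voids in a bright matrix. Grey values, void sizes
# and thresholds are illustrative only.
slice_ = np.full((80, 80), 0.8) + 0.02 * rng.standard_normal((80, 80))
slice_[10:30, 10:30] = 0.2      # relatively large void
slice_[50, 50] = 0.2            # small dot-shaped void
slice_[60:63, 20:23] = 0.2      # another small void

# Three filtered versions of the slice, each thresholded, then merged by a
# union (the paper applies the analogous operations to the full 3D volume).
large = ndi.median_filter(slice_, size=5) < 0.5        # median blur keeps only larger voids
opened = ndi.grey_opening(slice_, size=(3, 3)) < 0.5   # opening preserves small dark dots
closed = ndi.grey_closing(slice_, size=(3, 3)) < 0.5   # closing removes dark features thinner than the structuring element, keeps compact voids

voids = large | opened | closed
print("void pixel count:", int(voids.sum()))
```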
2012-01-01
Background Filamentous fungi are confronted with changes and limitations of their carbon source during growth in their natural habitats and during industrial applications. To survive life-threatening starvation conditions, carbon from endogenous resources becomes mobilized to fuel maintenance and self-propagation. Key to understand the underlying cellular processes is the system-wide analysis of fungal starvation responses in a temporal and spatial resolution. The knowledge deduced is important for the development of optimized industrial production processes. Results This study describes the physiological, morphological and genome-wide transcriptional changes caused by prolonged carbon starvation during submerged batch cultivation of the filamentous fungus Aspergillus niger. Bioreactor cultivation supported highly reproducible growth conditions and monitoring of physiological parameters. Changes in hyphal growth and morphology were analyzed at distinct cultivation phases using automated image analysis. The Affymetrix GeneChip platform was used to establish genome-wide transcriptional profiles for three selected time points during prolonged carbon starvation. Compared to the exponential growth transcriptome, about 50% (7,292) of all genes displayed differential gene expression during at least one of the starvation time points. Enrichment analysis of Gene Ontology, Pfam domain and KEGG pathway annotations uncovered autophagy and asexual reproduction as major global transcriptional trends. Induced transcription of genes encoding hydrolytic enzymes was accompanied by increased secretion of hydrolases including chitinases, glucanases, proteases and phospholipases as identified by mass spectrometry. Conclusions This study is the first system-wide analysis of the carbon starvation response in a filamentous fungus. Morphological, transcriptomic and secretomic analyses identified key events important for fungal survival and their chronology. The dataset obtained forms a comprehensive framework for further elucidation of the interrelation and interplay of the individual cellular events involved. PMID:22873931
Protocols for Automated Protist Analysis
2011-12-01
Report No. CG-D-14-13, December 2011. United States Coast Guard Research & Development Center, 1 Chelsea Street, New London, CT 06320. Distribution Statement A: Approved for public release; distribution is unlimited.
Textural Maturity Analysis and Sedimentary Environment Discrimination Based on Grain Shape Data
NASA Astrophysics Data System (ADS)
Tunwal, M.; Mulchrone, K. F.; Meere, P. A.
2017-12-01
Morphological analysis of clastic sedimentary grains is an important source of information regarding the processes involved in their formation, transportation and deposition. However, a standardised approach for quantitative grain shape analysis is generally lacking. In this contribution we report on a study where fully automated image analysis techniques were applied to loose sediment samples collected from glacial, aeolian, beach and fluvial environments. A range of shape parameters are evaluated for their usefulness in textural characterisation of populations of grains. The utility of grain shape data in ranking textural maturity of samples within a given sedimentary environment is evaluated. Furthermore, discrimination of sedimentary environment on the basis of grain shape information is explored. The data gathered demonstrates a clear progression in textural maturity in terms of roundness, angularity, irregularity, fractal dimension, convexity, solidity and rectangularity. Textural maturity can be readily categorised using automated grain shape parameter analysis. However, absolute discrimination between different depositional environments on the basis of shape parameters alone is less certain. For example, the aeolian environment is quite distinct whereas fluvial, glacial and beach samples are inherently variable and tend to overlap each other in terms of textural maturity. This is most likely due to a collection of similar processes and sources operating within these environments. This study strongly demonstrates the merit of quantitative population-based shape parameter analysis of texture and indicates that it can play a key role in characterising both loose and consolidated sediments. This project is funded by the Irish Petroleum Infrastructure Programme (www.pip.ie)
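Several of the shape parameters listed (e.g. solidity, convexity, rectangularity) can be computed from a binary grain mask with scikit-image's regionprops, as in the sketch below; the synthetic grain and the exact parameter definitions are illustrative and may differ from those used in the study.

```python
import numpy as np
from skimage import measure
from skimage.draw import ellipse

# Binary mask with one synthetic "grain" (an ellipse with a notch), standing
# in for a segmented sediment grain.
mask = np.zeros((120, 120), dtype=np.uint8)
rr, cc = ellipse(60, 60, 35, 22, rotation=0.4)
mask[rr, cc] = 1
mask[55:65, 70:85] = 0   # notch to make the outline irregular

props = measure.regionprops(measure.label(mask))[0]

solidity = props.solidity                          # area / convex hull area
aspect_ratio = props.minor_axis_length / props.major_axis_length
rectangularity = props.area / props.bbox_area      # area / bounding box area
print(f"solidity={solidity:.3f}  aspect_ratio={aspect_ratio:.3f}  "
      f"rectangularity={rectangularity:.3f}")
```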
NASA Astrophysics Data System (ADS)
Mahmud, K.; Mariethoz, G.; Baker, A.; Treble, P. C.; Markowska, M.; McGuire, E.
2016-01-01
Limestone aeolianites constitute karstic aquifers covering much of the western and southern Australian coastal fringe. They are a key groundwater resource for a range of industries such as winery and tourism, and provide important ecosystem services such as habitat for stygofauna. Moreover, recharge estimation is important for understanding the water cycle, for contaminant transport, for water management, and for stalagmite-based paleoclimate reconstructions. Caves offer a natural inception point to observe both the long-term groundwater recharge and the preferential movement of water through the unsaturated zone of such limestone. With the availability of automated drip rate logging systems and remote sensing techniques, it is now possible to deploy the combination of these methods for larger-scale studies of infiltration processes within a cave. In this study, we utilize a spatial survey of automated cave drip monitoring in two large chambers of Golgotha Cave, south-western Western Australia (SWWA), with the aim of better understanding infiltration water movement and the relationship between infiltration, stalactite morphology, and unsaturated zone recharge. By applying morphological analysis of ceiling features from Terrestrial LiDAR (T-LiDAR) data, coupled with drip time series and climate data from 2012 to 2014, we demonstrate the nature of the relationships between infiltration through fractures in the limestone and unsaturated zone recharge. Similarities between drip rate time series are interpreted in terms of flow patterns, cave chamber morphology, and lithology. Moreover, we develop a new technique to estimate recharge in large-scale caves, engaging flow classification to determine the cave ceiling area covered by each flow category and drip data for the entire observation period, to calculate the total volume of cave discharge. This new technique can be applied to other cave sites to identify highly focussed areas of recharge and can help to better estimate the total recharge volume.
NASA Astrophysics Data System (ADS)
Mahmud, K.; Mariethoz, G.; Baker, A.; Treble, P. C.; Markowska, M.; McGuire, E.
2015-09-01
Limestone aeolianites constitute karstic aquifers covering much of the western and southern Australian coastal fringe. They are a key groundwater resource for a range of industries such as winery and tourism, and provide important ecosystem services such as habitat for stygofauna. Moreover, recharge estimation is important for understanding the water cycle, for contaminant transport, for water management and for stalagmite-based paleoclimate reconstructions. Caves offer a natural inception point to observe both the long-term groundwater recharge and the preferential movement of water through the unsaturated zone of such limestone. With the availability of automated drip rate logging systems and remote sensing techniques, it is now possible to deploy the combination of these methods for larger scale studies of infiltration processes within a cave. In this study, we utilize a spatial survey of automated cave drip monitoring in two large chambers of the Golgotha Cave, South-West Western Australia (SWWA), with the aim of better understanding infiltration water movement and the relationship between infiltration, stalactite morphology and unsaturated zone recharge. By applying morphological analysis of ceiling features from Terrestrial LiDAR (T-LiDAR) data, coupled with drip time series and climate data from 2012-2014, we demonstrate the nature of the relationships between infiltration through fractures in the limestone and unsaturated zone recharge. Similarities between drip-rate time series are interpreted in terms of flow patterns, cave chamber morphology and lithology. Moreover, we develop a new technique to estimate recharge in large scale caves, engaging flow classification to determine the cave ceiling area covered by each flow category and drip data for the entire observation period, to calculate the total volume of cave discharge. This new technique can be applied to other cave sites to identify highly focused areas of recharge and can help to better estimate the total recharge volume.
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.
2012-03-01
Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. This method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg (Germany). The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at a faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach both in performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago
2018-03-02
Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, thus a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, devoid of user intervention within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline using the consensus of several manually curated segmentations using commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License ( https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva ).
HyphArea--automated analysis of spatiotemporal fungal patterns.
Baum, Tobias; Navarro-Quezada, Aura; Knogge, Wolfgang; Douchkov, Dimitar; Schweizer, Patrick; Seiffert, Udo
2011-01-01
In phytopathology, quantitative measurements are rarely used to assess crop plant disease symptoms. Instead, a qualitative valuation by eye is often the method of choice. In order to close the gap between subjective human inspection and objective quantitative results, an automated analysis system capable of recognizing and characterizing the growth patterns of fungal hyphae in micrograph images was developed. This system should enable the efficient screening of different host-pathogen combinations (e.g., barley-Blumeria graminis, barley-Rhynchosporium secalis) using different microscopy technologies (e.g., bright field, fluorescence). An image segmentation algorithm was developed for gray-scale image data that achieved good results with several microscope imaging protocols. Furthermore, adaptability towards different host-pathogen systems was obtained by using a classification that is based on a genetic algorithm. The developed software system was named HyphArea, since the quantification of the area covered by a hyphal colony is the basic task and prerequisite for all further morphological and statistical analyses in this context. By means of a typical use case the utilization and basic properties of HyphArea could be demonstrated. It was possible to detect statistically significant differences between the growth of an R. secalis wild-type strain and a virulence mutant. Copyright © 2010 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Großerueschkamp, Frederik; Bracht, Thilo; Diehl, Hanna C.; Kuepper, Claus; Ahrens, Maike; Kallenbach-Thieltges, Angela; Mosig, Axel; Eisenacher, Martin; Marcus, Katrin; Behrens, Thomas; Brüning, Thomas; Theegarten, Dirk; Sitek, Barbara; Gerwert, Klaus
2017-03-01
Diffuse malignant mesothelioma (DMM) is a heterogeneous malignant neoplasia manifesting with three subtypes: epithelioid, sarcomatoid and biphasic. DMM exhibits a high degree of spatial heterogeneity that complicates a thorough understanding of the underlying different molecular processes in each subtype. We present a novel approach to spatially resolve the heterogeneity of a tumour in a label-free manner by integrating FTIR imaging and laser capture microdissection (LCM). Subsequent proteome analysis of the dissected homogeneous samples additionally provides molecular resolution. FTIR imaging resolves tumour subtypes within tissue thin-sections in an automated and label-free manner with an accuracy of about 85% for DMM subtypes. Even in highly heterogeneous tissue structures, our label-free approach can identify small regions of interest, which can be dissected as homogeneous samples using LCM. Subsequent proteome analysis provides a location-specific molecular characterization. Applied to DMM subtypes, we identify 142 differentially expressed proteins, including five protein biomarkers commonly used in DMM immunohistochemistry panels. Thus, FTIR imaging resolves not only morphological alterations within tissue but also alterations at the level of single proteins in tumour subtypes. Our fully automated FTIR-guided LCM workflow opens new avenues for collecting homogeneous samples for precise and predictive biomarker discovery in omics studies.
Jaiswara, Ranjana; Nandi, Diptarup; Balakrishnan, Rohini
2013-01-01
Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls in several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding the appropriate usage of these methods in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach we evaluated the optimal number of species and calling song characteristics for both the methods that lead to most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. Accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximum for 6–7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals. Our results also show that cluster analysis of acoustic signals in crickets works effectively for species classification and identification. PMID:24086666
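The two statistical tools compared, discriminant function analysis and cluster analysis, can be sketched with scikit-learn on synthetic call features; the feature values below (syllable period, carrier frequency, chirp duration) are invented, and the derived acoustic features proposed in the paper are not included.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)

# Synthetic calling-song features (syllable period in ms, carrier frequency
# in kHz, chirp duration in ms) for three hypothetical species.
means = np.array([[30.0, 4.5, 250.0], [45.0, 5.2, 180.0], [25.0, 3.8, 400.0]])
X = np.vstack([rng.normal(m, [3.0, 0.2, 20.0], size=(40, 3)) for m in means])
y = np.repeat([0, 1, 2], 40)

# Discriminant function analysis (supervised): needs a priori species labels.
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

# Cluster analysis (unsupervised): no a priori labels required.
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(
    (X - X.mean(axis=0)) / X.std(axis=0)   # standardise features first
)
print(f"LDA cross-validated accuracy: {lda_acc:.3f}")
print(f"cluster agreement with true species (ARI): {adjusted_rand_score(y, clusters):.3f}")
```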
Automation of peak-tracking analysis of stepwise perturbed NMR spectra.
Banelli, Tommaso; Vuano, Marco; Fogolari, Federico; Fusiello, Andrea; Esposito, Gennaro; Corazza, Alessandra
2017-02-01
We describe a new algorithmic approach able to automatically pick and track the NMR resonances of a large number of 2D NMR spectra acquired during a stepwise variation of a physical parameter. The method has been named Trace in Track (TINT), referring to the idea that a Gaussian decomposition traces peaks within the tracks recognised through 3D mathematical morphology. It is capable of determining the evolution of the chemical shifts, intensity and linewidths of each tracked peak. The performance obtained in terms of track reconstruction and correct assignment on realistic synthetic spectra was above 90% when a noise level similar to that of experimental data was considered. TINT was applied successfully to several protein systems during a temperature ramp in isotope exchange experiments. A comparison with a state-of-the-art algorithm showed promising results for large numbers of spectra and low signal-to-noise ratios, when the perturbation is sufficiently gradual. TINT can be applied to different kinds of high-throughput chemical shift mapping experiments, with quasi-continuous variations, in which a quantitative automated recognition is crucial.
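The per-peak Gaussian decomposition underlying TINT can be illustrated by fitting a single Gaussian line to a synthetic 1D resonance with SciPy, as below; this shows only the extraction of shift, intensity and linewidth for one peak, not the 3D morphological track recognition.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width):
    """Single Gaussian line: amplitude, position (chemical shift) and width."""
    return amplitude * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(4)

# Synthetic 1D slice through a resonance: true shift 8.25 ppm, linewidth 0.02.
ppm = np.linspace(8.0, 8.5, 400)
signal = gaussian(ppm, 1.0, 8.25, 0.02) + 0.03 * rng.standard_normal(ppm.size)

# Initial guesses taken from the data (max intensity, its position, a rough width).
p0 = [signal.max(), ppm[np.argmax(signal)], 0.05]
popt, _ = curve_fit(gaussian, ppm, signal, p0=p0)
amplitude, center, width = popt
print(f"fitted intensity={amplitude:.2f}, shift={center:.3f} ppm, width={width:.3f} ppm")
```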
An automated data exploitation system for airborne sensors
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighters from terrorist attacks. Currently, human imagery analysts process huge data collections from full motion video (FMV) for real-time and forensic data exploitation and analysis, which is slow and yields inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicle, dismount, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutter. Furthermore, our example shows that ADES, as a baseline platform, can provide a capability for detecting abnormal vehicle behavior to help imagery analysts quickly track down potential threats and crimes.
High-throughput behavioral screening method for detecting auditory response defects in zebrafish.
Bang, Pascal I; Yelick, Pamela C; Malicki, Jarema J; Sewell, William F
2002-08-30
We have developed an automated, high-throughput behavioral screening method for detecting hearing defects in zebrafish. Our assay monitors a rapid escape reflex in response to a loud sound. With this approach, 36 adult zebrafish, restrained in visually isolated compartments, can be simultaneously assessed for responsiveness to near-field 400 Hz sinusoidal tone bursts. Automated, objective determinations of responses are achieved with a computer program that obtains images at precise times relative to the acoustic stimulus. Images taken with a CCD video camera before and after stimulus presentation are subtracted to reveal a response to the sound. Up to 108 fish can be screened per hour. Over 6500 fish were tested to validate the reliability of the assay. We found that 1% of these animals displayed hearing deficits. The phenotypes of non-responders were further assessed with radiological analysis for defects in the gross morphology of the auditory system. Nearly all of those showed abnormalities in conductive elements of the auditory system: the swim bladder or Weberian ossicles. Copyright 2002 Elsevier Science B.V.
Automated image segmentation-assisted flattening of atomic force microscopy images.
Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin
2018-01-01
Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks during image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
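The second step, polynomial background fitting with foreground features excluded, can be sketched line by line with NumPy as below; the mask, polynomial order and synthetic scan are illustrative, and the paper's automated segmentation and sliding-window variant are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic AFM scan: tilted/bowed background plus two raised features and noise.
rows, cols = 64, 64
x = np.arange(cols)
background = 0.02 * x + 0.0005 * x ** 2                     # per-line bow/tilt
image = np.tile(background, (rows, 1)) + 0.01 * rng.standard_normal((rows, cols))
image[20:30, 15:35] += 1.0                                   # convex feature
image[45:55, 40:55] += 0.6                                   # another feature

# Foreground exclusion mask (in the paper this comes from automated segmentation).
mask = np.zeros_like(image, dtype=bool)
mask[20:30, 15:35] = True
mask[45:55, 40:55] = True

# Line-by-line 2nd-order polynomial fit to background pixels only, then
# subtraction of the fitted trend from the whole line.
flattened = np.empty_like(image)
for r in range(rows):
    keep = ~mask[r]
    coeffs = np.polyfit(x[keep], image[r, keep], deg=2)
    flattened[r] = image[r] - np.polyval(coeffs, x)

print("background residual std:", flattened[~mask].std())
```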
Automated Protist Analysis of Complex Samples: Recent Investigations Using Motion and Thresholding
2012-01-01
Report No. CG-D-15-13, January 2012. United States Coast Guard Research & Development Center, 1 Chelsea Street, New London, CT 06320. Distribution Statement A: Approved for public release; distribution is unlimited.
Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang
2017-11-13
Prostate cancer (PCa) has been a major cause of death since ancient times, as documented by imaging of an Egyptian Ptolemaic mummy. PCa detection is critical to personalized medicine, and its appearance varies considerably on MRI scans. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were included. A deep learning approach with a deep convolutional neural network (DCNN) and a non-deep-learning approach with SIFT image features and a bag-of-words (BoW) model, a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with benign prostate conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007 < 0.001). The AUCs were 0.84 (95% CI 0.78-0.89) for the deep learning method and 0.70 (95% CI 0.63-0.77) for the non-deep learning method, respectively. Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BC patients. Our deep learning method is extensible to image modalities such as MR imaging, CT and PET of other organs.
Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-02-01
Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting from single-cell tracking based on a nearest-neighbor-approach, detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
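The starting point of the tracking stage, nearest-neighbour linking of detections between consecutive frames, can be sketched with SciPy's k-d tree as below; the coordinates and displacement gate are invented, and the cluster splitting and graph-based tracklet joining described in the paper are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

# Detected cell centroids in two consecutive frames (illustrative coordinates).
frame_t = np.array([[10.0, 12.0], [40.0, 41.0], [75.0, 20.0]])
frame_t1 = np.array([[12.5, 13.0], [42.0, 44.5], [110.0, 80.0]])  # third cell left the field

# Link each cell in frame t to its nearest detection in frame t+1, rejecting
# links longer than a maximum displacement (a proxy for plausible cell speed).
max_displacement = 10.0
tree = cKDTree(frame_t1)
distances, indices = tree.query(frame_t, k=1)

for cell_id, (d, j) in enumerate(zip(distances, indices)):
    if d <= max_displacement:
        print(f"cell {cell_id} -> detection {j} (moved {d:.1f} px)")
    else:
        print(f"cell {cell_id}: no match within {max_displacement} px (track ends)")
```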
An Improved Representation of Regional Boundaries on Parcellated Morphological Surfaces
Hao, Xuejun; Xu, Dongrong; Bansal, Ravi; Liu, Jun; Peterson, Bradley S.
2010-01-01
Establishing the correspondences of brain anatomy with function is important for understanding neuroimaging data. Regional delineations on morphological surfaces define anatomical landmarks and help to visualize and interpret both functional data and morphological measures mapped onto the cortical surface. We present an efficient algorithm that accurately delineates the morphological surface of the cerebral cortex in real time during generation of the surface using information from parcellated 3D data. With this accurate region delineation, we then develop methods for boundary-preserved simplification and smoothing, as well as procedures for the automated correction of small, misclassified regions to improve the quality of the delineated surface. We demonstrate that our delineation algorithm, together with a new method for double-snapshot visualization of cortical regions, can be used to establish a clear correspondence between brain anatomy and mapped quantities, such as morphological measures, across groups of subjects. PMID:21144708
Automated volumetric segmentation of retinal fluid on optical coherence tomography
Wang, Jie; Zhang, Miao; Pechauer, Alex D.; Liu, Liang; Hwang, Thomas S.; Wilson, David J.; Li, Dengwang; Jia, Yali
2016-01-01
We propose a novel automated volumetric segmentation method to detect and quantify retinal fluid on optical coherence tomography (OCT). The fuzzy level set method was introduced for identifying the boundaries of fluid filled regions on B-scans (x and y-axes) and C-scans (z-axis). The boundaries identified from three types of scans were combined to generate a comprehensive volumetric segmentation of retinal fluid. Then, artefactual fluid regions were removed using morphological characteristics and by identifying vascular shadowing with OCT angiography obtained from the same scan. The accuracy of retinal fluid detection and quantification was evaluated on 10 eyes with diabetic macular edema. Automated segmentation had good agreement with manual segmentation qualitatively and quantitatively. The fluid map can be integrated with OCT angiogram for intuitive clinical evaluation. PMID:27446676
Automated segmentation of comet assay images using Gaussian filtering and fuzzy clustering.
Sansone, Mario; Zeni, Olga; Esposito, Giovanni
2012-05-01
The comet assay is one of the most popular tests for the detection of DNA damage at the single-cell level. In this study, an algorithm for comet assay analysis has been proposed, aiming to minimize user interaction and provide reproducible measurements. The algorithm comprises two steps: (a) comet identification via Gaussian pre-filtering and morphological operators; (b) comet segmentation via fuzzy clustering. The algorithm has been evaluated using comet images from human leukocytes treated with a commonly used DNA-damaging agent. A comparison of the proposed approach with a commercial system has been performed. Results show that fuzzy segmentation can increase overall sensitivity, giving benefits in bio-monitoring studies where weak genotoxic effects are expected.
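The fuzzy clustering of step (b) can be illustrated with a compact two-class fuzzy c-means implementation in NumPy, applied to synthetic pixel intensities inside a comet region of interest; the Gaussian pre-filtering and morphological comet identification of step (a) are assumed to have been done already, and all values are illustrative.

```python
import numpy as np

def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on scalar values; returns memberships and centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centres = (um @ values) / um.sum(axis=1)        # fuzzily weighted cluster centres
        dist = np.abs(values[None, :] - centres[:, None]) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))           # standard FCM membership update
        u /= u.sum(axis=0)
    return u, centres

rng = np.random.default_rng(6)
# Synthetic intensities inside a comet ROI: bright head pixels and dimmer tail.
pixels = np.concatenate([rng.normal(0.9, 0.05, 300), rng.normal(0.35, 0.08, 700)])

memberships, centres = fuzzy_cmeans_1d(pixels, n_clusters=2)
head_cluster = int(np.argmax(centres))
head_fraction = (memberships.argmax(axis=0) == head_cluster).mean()
print(f"cluster centres: {centres.round(2)}, head pixel fraction: {head_fraction:.2f}")
```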
Wagner, David G; Russell, Donna K; Benson, Jenna M; Schneider, Ashley E; Hoda, Rana S; Bonfiglio, Thomas A
2011-10-01
Traditional cell block (TCB) sections serve as an important diagnostic adjunct to cytologic smears but are also used today as a reliable preparation for immunohistochemical (IHC) studies. There are many ways to prepare a cell block and the methods continue to be revised. In this study, we compare the TCB with the Cellient™ automated cell block system. Thirty-five cell blocks were obtained from 16 benign and 19 malignant nongynecologic cytology specimens at a large university teaching hospital and prepared according to TCB and Cellient protocols. Cell block sections from both methods were compared for possible differences in various morphologic features and immunohistochemical staining patterns. In the 16 benign cases, no significant morphologic differences were found between the TCB and Cellient cell block sections. For the 19 malignant cases, some noticeable differences in the nuclear chromatin and cellularity were identified, although statistical significance was not attained. Immunohistochemical or special stains were performed on 89% of the malignant cases (17/19). Inadequate cellularity precluded full evaluation in 23% of Cellient cell block IHC preparations (4/17). Of the malignant cases with adequate cellularity (13/17), the immunohistochemical staining patterns from the different methods were identical in 53% of cases. The traditional and Cellient cell block sections showed similar morphologic and immunohistochemical staining patterns. The only significant difference between the two methods concerned the lower overall cell block cellularity identified during immunohistochemical staining in the Cellient cell block sections. Copyright © 2010 Wiley-Liss, Inc.
Rotation-invariant convolutional neural networks for galaxy morphology prediction
NASA Astrophysics Data System (ADS)
Dieleman, Sander; Willett, Kyle W.; Dambre, Joni
2015-06-01
Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time consuming and does not scale to large (≳10⁴) numbers of images. Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images. We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project. For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy (>99 per cent) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts' workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the Large Synoptic Survey Telescope.
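One common way to exploit rotational symmetry, in the spirit of the viewpoint sharing described above, is to pass several rotated copies of each image through a shared convolutional feature extractor and combine the resulting features before the output head. The toy PyTorch module below does this for the four right-angle rotations; the layer sizes and the 37-output head are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RotationPooledCNN(nn.Module):
    """Toy network: a shared feature extractor applied to the 4 right-angle
    rotations of each image, with features concatenated before the head."""
    def __init__(self, n_outputs=37):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4, n_outputs))

    def forward(self, x):                            # x: (B, 3, H, W)
        views = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
        feats = [self.features(v) for v in views]    # weights shared across views
        return self.head(torch.cat(feats, dim=1))
```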
Automated peroperative assessment of stents apposition from OCT pullbacks.
Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent
2015-04-01
This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological and gradient operators that feed a Dijkstra algorithm. Erroneous detections, tagged by the user and caused by bifurcations, the presence of struts, thrombotic lesions or dissections, can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between detected struts and the artery wall initializes the stent's complete distance map, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation, and use of the one-sided cyclic property of the map further optimize accuracy. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance with the automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.
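A sketch of the distance-map interpolation step mentioned above: sparse strut-to-wall distances on a (frame, angle) grid are interpolated with SciPy's thin-plate Rbf. Mapping the angle to sine/cosine is one way to respect the cyclic coordinate; the coordinate conventions and function name are assumptions, not the authors' code.

```python
import numpy as np
from scipy.interpolate import Rbf

def fill_distance_map(frames, angles_deg, distances, grid_frames, grid_angles_deg):
    """Interpolate sparse strut-to-wall distances onto a dense (frame, angle) grid.

    The angular coordinate is mapped to sin/cos so the interpolant respects the
    cyclic nature of the map mentioned in the abstract.
    """
    theta = np.radians(angles_deg)
    rbf = Rbf(frames, np.cos(theta), np.sin(theta), distances, function='thin_plate')
    gf, ga = np.meshgrid(grid_frames, np.radians(grid_angles_deg), indexing='ij')
    return rbf(gf, np.cos(ga), np.sin(ga))
```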
Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.M.; Rogan, Peter K.
2017-01-01
Accurate digital image analysis of abnormal microscopic structures relies on high quality images and on minimizing the rates of false positive (FP) and negative objects in images. Cytogenetic biodosimetry detects dicentric chromosomes (DCs) that arise from exposure to ionizing radiation, and determines radiation dose received based on DC frequency. Improvements in automated DC recognition increase the accuracy of dose estimates by reclassifying FP DCs as monocentric chromosomes or chromosome fragments. We also present image segmentation methods to rank high quality digital metaphase images and eliminate suboptimal metaphase cells. A set of chromosome morphology segmentation methods selectively filtered out FP DCs arising primarily from sister chromatid separation, chromosome fragmentation, and cellular debris. This reduced FPs by an average of 55% and was highly specific to these abnormal structures (≥97.7%) in three samples. Additional filters selectively removed images with incomplete, highly overlapped, or missing metaphase cells, or with poor overall chromosome morphologies that increased FP rates. Image selection is optimized and FP DCs are minimized by combining multiple feature based segmentation filters and a novel image sorting procedure based on the known distribution of chromosome lengths. Applying the same image segmentation filtering procedures to both calibration and test samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy, obviating the need to first manually review these images. This reliable and scalable solution enables batch processing for multiple samples of unknown dose, and meets current requirements for triage radiation biodosimetry of high quality metaphase cell preparations. PMID:29026522
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-01-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
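The local contrast thresholding step can be sketched in a few lines: local mean and variance from uniform filters give a contrast map that is high over cellular regions of a phase contrast image. The window size and threshold below are placeholders rather than PHANTAST defaults, and the halo correction step is left out.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_mask(image, window=15, threshold=0.05):
    """Flag pixels whose local contrast (std/mean in a square window) is high.

    Background in phase contrast images is flat, so cellular regions stand out
    in this map; halo correction would follow as a separate post-processing step.
    """
    img = image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
    contrast = std / (mean + 1e-9)
    return contrast > threshold
```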
Ai, Tomohiko; Tabe, Yoko; Takemura, Hiroyuki; Kimura, Konobu; Takahashi, Toshihiro; Yang, Haeun; Tsuchiya, Koji; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Ohsaka, Akimichi
2018-01-01
Morphological microscopic examination of nucleated cells in body fluid (BF) samples is performed to screen for malignancy. However, the morphological differentiation is time-consuming and labor-intensive. This study aimed to develop a new flow cytometry-based gating analysis mode, the "XN-BF gating algorithm", to detect malignant cells using an automated hematology analyzer, the Sysmex XN-1000. The XN-BF mode was implemented in the WDF white blood cell (WBC) differential channel. We added two rules to the WDF channel: Rule 1 detects cell signals that are larger and more clumped than leukocytes, targeting clustered malignant cells; Rule 2 detects middle-sized mononuclear cells containing fewer granules than neutrophils, with a fluorescence signal similar to monocytes, targeting hematological malignant cells and solid tumor cells. BF samples that met at least one rule were flagged as malignant. To evaluate this novel gating algorithm, 92 BF samples of various types were collected. Manual microscopic differentiation with May-Grünwald Giemsa staining and a WBC count with a hemocytometer were also performed. The performance of these three methods was evaluated by comparison with the cytological diagnosis. The XN-BF gating algorithm achieved a sensitivity of 63.0% and specificity of 87.8%, with a positive predictive value of 68.0% and a negative predictive value of 85.1% for detecting malignant-cell-positive samples. Manual microscopic WBC differentiation and the WBC count demonstrated sensitivities of 70.4% and 66.7%, and specificities of 96.9% and 92.3%, respectively. The XN-BF gating algorithm can be a feasible tool in hematology laboratories for prompt screening of malignant cells in various BF samples. PMID:29425230
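The two gating rules lend themselves to a simple per-event decision sketch; all channel names and thresholds below are hypothetical placeholders rather than Sysmex settings.

```python
import numpy as np

def xn_bf_like_gate(size, granularity, fluorescence,
                    size_hi=95.0, gran_lo=40.0, fluo_mono=(50.0, 80.0)):
    """Flag a sample as malignant-cell positive if any event satisfies Rule 1 or Rule 2.

    size, granularity and fluorescence are per-event channel values (arbitrary
    units); every threshold here is a placeholder, not an instrument setting.
    """
    # Rule 1: larger / clumped signals than ordinary leukocytes.
    rule1 = size > size_hi
    # Rule 2: mid-sized mononuclear events with fewer granules than neutrophils
    # and monocyte-like fluorescence.
    rule2 = (granularity < gran_lo) & \
            (fluorescence >= fluo_mono[0]) & (fluorescence <= fluo_mono[1])
    return bool(np.any(rule1 | rule2))
```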
Detection of Cardiac Abnormalities from Multilead ECG using Multiscale Phase Alternation Features.
Tripathy, R K; Dandapat, S
2016-06-01
Cardiac activities such as the depolarization and relaxation of the atria and ventricles are observed in the electrocardiogram (ECG). Changes in the morphological features of the ECG are symptoms of particular heart pathologies. It is a cumbersome task for medical experts to visually identify subtle changes in these morphological features over 24 hours of ECG recording. Therefore, automated analysis of the ECG signal is needed for accurate detection of cardiac abnormalities. In this paper, a novel method for automated detection of cardiac abnormalities from multilead ECG is proposed. The method uses multiscale phase alternation (PA) features of the multilead ECG and two classifiers, k-nearest neighbor (KNN) and fuzzy KNN, to classify bundle branch block (BBB), myocardial infarction (MI), heart muscle defect (HMD) and healthy control (HC). The dual-tree complex wavelet transform (DTCWT) is used to decompose the ECG signal of each lead into complex wavelet coefficients at different scales. The phase of the complex wavelet coefficients is computed, and the PA values at each wavelet scale are used as features for detection and classification of cardiac abnormalities. A publicly available multilead ECG database (the PTB database) is used to test the proposed method. The experimental results show that the proposed multiscale PA features and the fuzzy KNN classifier perform better for detection of cardiac abnormalities, with sensitivity values of 78.12%, 80.90% and 94.31% for the BBB, HMD and MI classes. The sensitivity of the proposed method for the MI class is compared with state-of-the-art multilead ECG techniques.
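A hedged sketch of the feature-and-classifier pipeline: here "phase alternation" at a scale is summarized as the mean absolute change of the unwrapped coefficient phase, which is one plausible reading rather than the paper's exact definition, and the classifier is shown as a plain scikit-learn KNN rather than the fuzzy variant.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def phase_alternation_features(coeffs_per_scale):
    """One feature per wavelet scale: mean absolute change of the unwrapped
    phase of the complex coefficients (a simple reading of 'phase alternation')."""
    feats = []
    for c in coeffs_per_scale:                  # c: complex 1-D array of coefficients
        phase = np.unwrap(np.angle(c))
        feats.append(np.mean(np.abs(np.diff(phase))))
    return np.array(feats)

# Per-lead features would be concatenated and fed to a KNN classifier, e.g.:
# X = np.vstack([np.concatenate([phase_alternation_features(lead) for lead in record])
#                for record in records])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```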
Visual Recognition Software for Binary Classification and its Application to Pollen Identification
NASA Astrophysics Data System (ADS)
Punyasena, S. W.; Tcheng, D. K.; Nayak, A.
2014-12-01
An underappreciated source of uncertainty in paleoecology is the uncertainty of palynological identifications. The confidence of any given identification is not regularly reported in published results, so it cannot be incorporated into subsequent meta-analyses. Automated identification systems potentially provide a means of objectively measuring the confidence of a given count or single identification, as well as a mechanism for increasing sample sizes and throughput. We developed the software ARLO (Automated Recognition with Layered Optimization) to tackle difficult visual classification problems such as pollen identification. ARLO applies pattern recognition and machine learning to the analysis of pollen images. The features that the system discovers are not the traditional features of pollen morphology. Instead, general-purpose image features, such as pixel lines and grids of different dimensions, size, spacing, and resolution, are used. ARLO adapts to a given problem by searching for the most effective combination of feature representation and learning strategy. We present a two-phase approach which uses our machine learning process to first segment pollen grains from the background and then classify pollen pixels and report species ratios. We conducted two separate experiments that utilized two distinct sets of algorithms and optimization procedures. The first analysis focused on reconstructing black and white spruce pollen ratios, training and testing our classification model at the slide level. This allowed us to directly compare our automated counts and expert counts to slides of known spruce ratios. Our second analysis focused on maximizing classification accuracy at the individual pollen grain level. Instead of predicting ratios for given slides, we predicted the species represented in a given image window. The resulting analysis was more scalable, as we were able to adapt the most efficient parts of the methodology from our first analysis. ARLO was able to distinguish between the pollen of black and white spruce with an accuracy of ~83.61%. This compared favorably to human expert performance. At the writing of this abstract, we are also experimenting with the analysis of higher-diversity samples, including modern tropical pollen material collected from ground pollen traps.
NASA Astrophysics Data System (ADS)
Jelinek, Herbert F.; Cree, Michael J.; Leandro, Jorge J. G.; Soares, João V. B.; Cesar, Roberto M.; Luckie, A.
2007-05-01
Proliferative diabetic retinopathy can lead to blindness. However, early recognition allows appropriate, timely intervention. Fluorescein-labeled retinal blood vessels of 27 digital images were automatically segmented using the Gabor wavelet transform and classified using traditional features such as area, perimeter, and an additional five morphological features based on the derivatives-of-Gaussian wavelet-derived data. Discriminant analysis indicated that traditional features do not detect early proliferative retinopathy. The best single feature for discrimination was the wavelet curvature with an area under the curve (AUC) of 0.76. Linear discriminant analysis with a selection of six features achieved an AUC of 0.90 (0.73-0.97, 95% confidence interval). The wavelet method was able to segment retinal blood vessels and classify the images according to the presence or absence of proliferative retinopathy.
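The discriminant step can be sketched with scikit-learn: linear discriminant analysis on a matrix of wavelet-derived features, scored by cross-validated AUC. The feature matrix X and labels y are assumed inputs, not data from the study.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# X: (n_images, n_features) wavelet/morphological features; y: 1 = proliferative retinopathy
def lda_auc(X, y, cv=5):
    """Cross-validated area under the ROC curve for an LDA classifier."""
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_predict(lda, X, y, cv=cv, method='decision_function')
    return roc_auc_score(y, scores)
```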
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelic, S.K., E-mail: susanne.michelic@unileoben.ac.at; Loder, D.; Reip, T.
2015-02-15
Titanium-alloyed ferritic chromium steels are a competitive option to classical austenitic stainless steels owing to their similar corrosion resistance. The addition of titanium significantly influences their final steel cleanliness. The present contribution focuses on the detailed metallographic characterization of titanium nitrides, titanium carbides and titanium carbonitrides with regard to their size, morphology and composition. The methods used are manual and automated Scanning Electron Microscopy with Energy Dispersive X-ray Spectroscopy as well as optical microscopy. Additional thermodynamic calculations are performed to explain the precipitation process of the analyzed titanium nitrides. The analyses showed that homogeneous nucleation is decisive at an early process stage after the addition of titanium. Heterogeneous nucleation becomes crucial with ongoing process time and essentially influences the final inclusion size of titanium nitrides. A detailed investigation of the nuclei for heterogeneous nucleation with automated Scanning Electron Microscopy proved to be difficult due to their small size; manual Scanning Electron Microscopy and optical microscopy have to be applied. Furthermore, it was found that during solidification an additional layer can form around an existing titanium nitride, which changes the final inclusion morphology significantly. These layers are also characterized in detail. Based on these different inclusion morphologies, in combination with thermodynamic results, tendencies regarding the formation and modification time of titanium-containing inclusions in ferritic chromium steels are derived. Highlights: • The formation and modification of TiN in the steel 1.4520 was examined. • Heterogeneous nucleation essentially influences the final steel cleanliness. • In most cases, heterogeneous nuclei in TiN inclusions are magnesium based. • Particle morphology provides important information on inclusion formation.
Robotic Automation of In Vivo Two-Photon Targeted Whole-Cell Patch-Clamp Electrophysiology.
Annecchino, Luca A; Morris, Alexander R; Copeland, Caroline S; Agabi, Oshiorenoya E; Chadderton, Paul; Schultz, Simon R
2017-08-30
Whole-cell patch-clamp electrophysiological recording is a powerful technique for studying cellular function. While in vivo patch-clamp recording has recently benefited from automation, it is normally performed "blind," meaning that throughput for sampling some genetically or morphologically defined cell types is unacceptably low. One solution to this problem is to use two-photon microscopy to target fluorescently labeled neurons. Combining this with robotic automation is difficult, however, as micropipette penetration induces tissue deformation, moving target cells from their initial location. Here we describe a platform for automated two-photon targeted patch-clamp recording, which solves this problem by making use of a closed loop visual servo algorithm. Our system keeps the target cell in focus while iteratively adjusting the pipette approach trajectory to compensate for tissue motion. We demonstrate platform validation with patch-clamp recordings from a variety of cells in the mouse neocortex and cerebellum. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Schwerdt, Ian J; Brenkmann, Alexandria; Martinson, Sean; Albrecht, Brent D; Heffernan, Sean; Klosterman, Michael R; Kirkham, Trenton; Tasdizen, Tolga; McDonald IV, Luther W
2018-08-15
The use of a limited set of signatures in nuclear forensics and nuclear safeguards may reduce the discriminating power for identifying unknown nuclear materials, or for verifying processing at existing facilities. Nuclear proliferomics is a proposed new field of study that advocates for the acquisition of large databases of nuclear material properties from a variety of analytical techniques. As demonstrated on a common uranium trioxide polymorph, α-UO₃, in this paper, nuclear proliferomics increases the ability to improve confidence in identifying the processing history of nuclear materials. Specifically, α-UO₃ was investigated from the calcination of unwashed uranyl peroxide at 350, 400, 450, 500, and 550 °C in air. Scanning electron microscopy (SEM) images were acquired of the surface morphology, and distinct qualitative differences are presented between unwashed and washed uranyl peroxide, as well as the calcination products from the unwashed uranyl peroxide at the investigated temperatures. Differential scanning calorimetry (DSC), UV-Vis spectrophotometry, powder X-ray diffraction (p-XRD), and thermogravimetric analysis-mass spectrometry (TGA-MS) were used to understand the source of these morphological differences as a function of calcination temperature. Additionally, the SEM images were manually segmented using Morphological Analysis for MAterials (MAMA) software to identify quantifiable differences in morphology for three different surface features present on the unwashed uranyl peroxide calcination products. No single quantifiable signature was sufficient to discern all calcination temperatures with a high degree of confidence; therefore, advanced statistical analysis was performed to allow the combination of a number of quantitative signatures, with their associated uncertainties, to allow for complete discernment by calcination history. Furthermore, machine learning was applied to the acquired SEM images to demonstrate automated discernment with at least 89% accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
Development of image analysis software for quantification of viable cells in microchips.
Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland
2018-01-01
Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data generated requires automated methods for processing and analyzing all of the resulting information. The software packages available so far are suitable for processing fluorescence and phase contrast images, but often do not provide good results for transmission light microscopy images, due to intrinsic variation in the image acquisition technique itself (adjustment of brightness/contrast, for instance) and to variability between acquisitions introduced by operators and equipment. In this contribution, we present image processing software, Python-based image analysis for cell growth (PIACG), that efficiently calculates the total area of the well occupied by cells with fusiform and rounded morphology, in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
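A minimal sketch of the quantity PIACG reports, the total well area covered by fusiform versus rounded cells, using scikit-image region properties. The Otsu threshold, eccentricity cut-off, and the assumption that cells are darker than the background are illustrative choices, not the published pipeline.

```python
from skimage import filters, measure, morphology

def fusiform_vs_rounded_area(image, ecc_threshold=0.85, min_size=50):
    """Return total pixel area of fusiform and rounded objects in a brightfield image."""
    mask = image < filters.threshold_otsu(image)     # cells darker than background (assumption)
    mask = morphology.remove_small_objects(mask, min_size)
    labels = measure.label(mask)
    fusiform = rounded = 0
    for region in measure.regionprops(labels):
        if region.eccentricity > ecc_threshold:      # elongated object -> fusiform
            fusiform += region.area
        else:
            rounded += region.area
    return fusiform, rounded
```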
NASA Astrophysics Data System (ADS)
Kydd, Jocelyn; Rajakaruna, Harshana; Briski, Elizabeta; Bailey, Sarah
2018-03-01
Many commercial ships will soon begin to use treatment systems to manage their ballast water and reduce the global transfer of harmful aquatic organisms and pathogens in accordance with upcoming International Maritime Organization regulations. As a result, rapid and accurate automated methods will be needed to monitor the compliance of ships' ballast water. We examined two automated particle counters for monitoring organisms ≥ 50 μm in minimum dimension: a High Resolution Laser Optical Plankton Counter (HR-LOPC), and a Flow Cytometer with digital imaging Microscope (FlowCAM), in comparison to traditional (manual) microscopy, considering plankton concentration, size frequency distributions and particle size measurements. The automated tools tended to underestimate particle concentration compared to standard microscopy, but gave similar results in terms of relative abundance of individual taxa. For most taxa, particle size measurements generated by FlowCAM ABD (Area Based Diameter) were more similar to microscope measurements than were those by FlowCAM ESD (Equivalent Spherical Diameter), though there was a mismatch in size estimates for some organisms between the FlowCAM ABD and microscope due to orientation and complex morphology. When a single problematic taxon is very abundant, the resulting size frequency distribution curves can become skewed, as was observed with Asterionella in this study. In particular, special consideration is needed when utilizing automated tools to analyse samples containing colonial species. Re-analysis of the size frequency distributions with the removal of Asterionella from FlowCAM and microscope data resulted in more similar curves across methods, with FlowCAM ABD having the best fit compared to the microscope, although microscope concentration estimates were still significantly higher than estimates from the other methods. The results of our study indicate that both automated tools can generate frequency distributions of particles that might be particularly useful if correction factors can be developed for known differences in well-studied aquatic ecosystems.
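For reference, the FlowCAM ABD measure is the diameter of a circle with the same projected area as the particle; a small helper makes the conversion from pixel area explicit (the pixel size is an assumed calibration input).

```python
import numpy as np

def area_based_diameter(pixel_area, pixel_size_um):
    """Area-Based Diameter: diameter of the circle whose area equals the
    particle's projected area, returned in micrometres."""
    area_um2 = pixel_area * pixel_size_um ** 2
    return 2.0 * np.sqrt(area_um2 / np.pi)
```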
Automated Cell Detection and Morphometry on Growth Plate Images of Mouse Bone
Ascenzi, Maria-Grazia; Du, Xia; Harding, James I; Beylerian, Emily N; de Silva, Brian M; Gross, Ben J; Kastein, Hannah K; Wang, Weiguang; Lyons, Karen M; Schaeffer, Hayden
2014-01-01
Microscopy imaging of mouse growth plates is extensively used in biology to understand the effect of specific molecules on various stages of normal bone development and on bone disease. Until now, such image analysis has been conducted by manual detection. In fact, when existing automated detection techniques were applied, morphological variations across the growth plate and heterogeneity of image background color, including the faint presence of cells (chondrocytes) located deeper in tissue away from the image's plane of focus, and the lack of cell-specific features interfered with cell identification. We propose the first method of automated detection and morphometry applicable to images of cells in the growth plate of long bone. Through ad hoc sequential application of the Retinex method, anisotropic diffusion and thresholding, our new cell detection algorithm (CDA) addresses these challenges on bright-field microscopy images of mouse growth plates. Five parameters, chosen by the user with respect to image characteristics, regulate our CDA. Our results demonstrate the effectiveness of the proposed numerical method relative to manual methods. Our CDA confirms previously established results regarding the number, area, orientation, height and shape of chondrocytes in normal growth plates. Our CDA also confirms differences previously found between the genetically mutated Smad1/5CKO mouse and its control mouse on fluorescence images. The CDA aims to aid biomedical research by increasing efficiency and consistency of data collection regarding the arrangement and characteristics of chondrocytes. Our results suggest that automated extraction of data from microscopy imaging of growth plates can assist in unlocking information on normal and pathological development, key to the underlying biological mechanisms of bone growth. PMID:25525552
Geometry planning and image registration in magnetic particle imaging using bimodal fiducial markers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, F., E-mail: f.werner@uke.de; Hofmann, M.; Them, K.
Purpose: Magnetic particle imaging (MPI) is a quantitative imaging modality that allows the distribution of superparamagnetic nanoparticles to be visualized. Compared to other imaging techniques like x-ray radiography, computed tomography (CT), and magnetic resonance imaging (MRI), MPI only provides a signal from the administered tracer, but no additional morphological information, which complicates geometry planning and the interpretation of MP images. The purpose of the authors' study was to develop bimodal fiducial markers that can be visualized by MPI and MRI in order to create MP–MR fusion images. Methods: A certain arrangement of three bimodal fiducial markers was developed and used in a combined MRI/MPI phantom and also during in vivo experiments in order to investigate its suitability for geometry planning and image fusion. An algorithm for automated marker extraction in both MR and MP images and rigid registration was established. Results: The developed bimodal fiducial markers can be visualized by MRI and MPI and allow for geometry planning as well as automated registration and fusion of MR–MP images. Conclusions: To date, exact positioning of the object to be imaged within the field of view (FOV) and the assignment of reconstructed MPI signals to corresponding morphological regions has been difficult. The developed bimodal fiducial markers and the automated image registration algorithm help to overcome these difficulties.
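With three paired marker centroids extracted from both modalities, the rigid registration reduces to the classic Kabsch/Procrustes least-squares fit; the sketch below is a generic implementation of that fit, not the authors' code.

```python
import numpy as np

def rigid_transform(markers_mpi, markers_mri):
    """Least-squares rotation R and translation t mapping MPI marker centroids
    onto their MRI counterparts (Kabsch algorithm; points as (N, 3) arrays)."""
    p, q = np.asarray(markers_mpi, float), np.asarray(markers_mri, float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cq - r @ cp
    return r, t                                   # map a point x with r @ x + t
```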
Feasibility of Developing a Protocol for Automated Protist Analysis
2010-03-01
Acquisition Directorate, Research & Development Center, Report No. CG-D-02-11, March 2010, Homeland Security. Distributed by the National Technical Information Service, Springfield, VA 22161.
An Automated Approach to Extracting River Bank Locations from Aerial Imagery Using Image Texture
2015-11-04
is more likely to be encountered at high latitudes. The technique recognizes areas of urban or rural built environments, such as mowed fields...
Evaluation of the Red Blood Cell Advanced Software Application on the CellaVision DM96.
Criel, M; Godefroid, M; Deckers, B; Devos, H; Cauwelier, B; Emmerechts, J
2016-08-01
The CellaVision Advanced Red Blood Cell (RBC) Software Application is new software for advanced morphological analysis of RBCs on a digital microscopy system. Upon automated precharacterization into 21 categories, the software offers the possibility of reclassification of RBCs by the operator. We aimed to define the optimal cut-off to detect morphological RBC abnormalities and to evaluate the precharacterization performance of this software. Thirty-eight blood samples of healthy donors and sixty-eight samples of hospitalized patients were analyzed. Different methodologies to define a cut-off between negativity and positivity were used. Sensitivity and specificity were calculated according to these different cut-offs using the manual microscopic method as the gold standard. Imprecision was assessed by measuring analytical within-run and between-run variability and by measuring between-observer variability. By optimizing the cut-off between negativity and positivity, sensitivities exceeded 80% for 'critical' RBC categories (target cells, tear drop cells, spherocytes, sickle cells, and parasites), while specificities exceeded 80% for the other RBC morphological categories. Results of within-run, between-run, and between-observer variabilities were all clinically acceptable. The CellaVision Advanced RBC Software Application is an easy-to-use software that helps to detect most RBC morphological abnormalities in a sensitive and specific way without increasing workload, provided the proper cut-offs are chosen. However, evaluation of the images by an experienced observer remains necessary. © 2016 John Wiley & Sons Ltd.
3D marker-controlled watershed for kidney segmentation in clinical CT exams.
Wieclawek, Wojciech
2018-02-27
Image segmentation is an essential and non-trivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques to visualize the interior of a patient's body. Among different computer-aided diagnostic systems, the applications dedicated to kidney segmentation represent a relatively small group. In addition, literature solutions are verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation. This approach is designed for large database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform; it consists of morphological operations and shape analysis. The implementation is conducted in a MATLAB environment, Version 2017a, using, among others, the Image Processing Toolbox. 170 clinical CT abdominal studies have been subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations have been used as a gold standard. Among 67 delineated medical cases, 62 cases are 'Very good', whereas only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that the mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89%, respectively. All 170 medical cases (with and without outlines) have been classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies that competes with commonly known solutions was developed. The algorithm gives promising results, which were confirmed during a validation procedure performed on a relatively large database, including 170 CTs with both physiological and pathological cases.
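A minimal sketch of the final two stages, marker-controlled watershed on a gradient volume followed by morphological cleanup, using scikit-image and SciPy. The automatic 3D marker generation that is central to the paper is only stubbed here with externally supplied seed masks.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def watershed_from_markers(ct_volume, kidney_seed_mask, background_seed_mask):
    """Marker-controlled watershed on the gradient magnitude of a CT volume.

    Seed masks stand in for the automatic 3D marker generation described in
    the paper; label 1 = kidney marker, label 2 = background marker.
    """
    gradient = ndimage.gaussian_gradient_magnitude(ct_volume.astype(float), sigma=1.0)
    markers = np.zeros(ct_volume.shape, dtype=np.int32)
    markers[kidney_seed_mask] = 1
    markers[background_seed_mask] = 2
    labels = watershed(gradient, markers)
    kidney = labels == 1
    # Morphological post-processing of the labelled result, as in the paper.
    kidney = ndimage.binary_opening(kidney, structure=np.ones((3, 3, 3)))
    return kidney
```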
Manufacture of a human mesenchymal stem cell population using an automated cell culture platform.
Thomas, Robert James; Chandra, Amit; Liu, Yang; Hourd, Paul C; Conway, Paul P; Williams, David J
2007-09-01
Tissue engineering and regenerative medicine are rapidly developing fields that use cells or cell-based constructs as therapeutic products for a wide range of clinical applications. Efforts to commercialise these therapies are driving a need for capable, scaleable, manufacturing technologies to ensure therapies are able to meet regulatory requirements and are economically viable at industrial scale production. We report the first automated expansion of a human bone marrow derived mesenchymal stem cell population (hMSCs) using a fully automated cell culture platform. Differences in cell population growth profile, attributed to key methodological differences, were observed between the automated protocol and a benchmark manual protocol. However, qualitatively similar cell output, assessed by cell morphology and the expression of typical hMSC markers, was obtained from both systems. Furthermore, the critical importance of minor process variation, e.g. the effect of cell seeding density on characteristics such as population growth kinetics and cell phenotype, was observed irrespective of protocol type. This work highlights the importance of careful process design in therapeutic cell manufacture and demonstrates the potential of automated culture for future optimisation and scale up studies required for the translation of regenerative medicine products from the laboratory to the clinic.
Development of Automated Image Analysis Software for Suspended Marine Particle Classification
2002-09-30
Scott Samson, Center for Ocean Technology. ... and global water column. 1. OBJECTIVES: The project's objective is to develop automated image analysis software to reduce the effort and time...
An Intelligent Automation Platform for Rapid Bioprocess Design.
Wu, Tianyi; Zhou, Yuhong
2014-08-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2016-06-01
Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
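The morphological features named above, boundary length and curvature, can be sketched from an ordered contour and passed to a scikit-learn SVM. The discrete curvature estimate and the RBF kernel are illustrative choices, and the stochastic (polynomial chaos) segmentation itself is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

def contour_features(xy):
    """Boundary length and mean absolute curvature of a closed contour given
    as an (N, 2) array of ordered boundary points."""
    d = np.diff(np.vstack([xy, xy[:1]]), axis=0)          # closing segment included
    seg_len = np.hypot(d[:, 0], d[:, 1])
    length = seg_len.sum()
    angles = np.arctan2(d[:, 1], d[:, 0])
    turn = np.angle(np.exp(1j * np.diff(np.append(angles, angles[0]))))  # wrapped turning angles
    curvature = np.abs(turn) / np.maximum(seg_len, 1e-9)
    return np.array([length, curvature.mean()])

# X = np.vstack([contour_features(c) for c in contours]); y = labels (0 normal, 1 apoptotic)
# clf = SVC(kernel='rbf').fit(X, y)
```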
Image analysis of ocular fundus for retinopathy characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Cuadros, Jorge
2010-02-05
Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of non-macula-centric and nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysm detection.
Signal and noise modeling in confocal laser scanning fluorescence microscopy.
Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf E; Aach, Til
2012-01-01
Fluorescence confocal laser scanning microscopy (CLSM) has revolutionized imaging of subcellular structures in biomedical research by enabling the acquisition of 3D time-series of fluorescently-tagged proteins in living cells, hence forming the basis for an automated quantification of their morphological and dynamic characteristics. Due to the inherently weak fluorescence, CLSM images exhibit a low SNR. We present a novel model for the transfer of signal and noise in CLSM that is both theoretically sound as well as corroborated by a rigorous analysis of the pixel intensity statistics via measurement of the 3D noise power spectra, signal-dependence and distribution. Our model provides a better fit to the data than previously proposed models. Further, it forms the basis for (i) the simulation of the CLSM imaging process indispensable for the quantitative evaluation of CLSM image analysis algorithms, (ii) the application of Poisson denoising algorithms and (iii) the reconstruction of the fluorescence signal.
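A common way to emulate such signal-dependent CLSM noise in simulations is a scaled-Poisson-plus-Gaussian model; the sketch below uses placeholder gain, offset, and read-noise values rather than the parameters fitted in the paper.

```python
import numpy as np

def simulate_clsm_noise(clean_signal, gain=2.0, offset=10.0, read_sigma=1.5, seed=0):
    """Scaled-Poisson + Gaussian noise model often used for photon-limited
    detectors; all parameter values here are illustrative only."""
    rng = np.random.default_rng(seed)
    photons = rng.poisson(np.clip(clean_signal, 0, None) / gain)
    return gain * photons + offset + rng.normal(0.0, read_sigma, clean_signal.shape)
```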
NASA Astrophysics Data System (ADS)
Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron
2005-04-01
Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. Success of such algorithms depends, primarily, on the selection of relevant features representing the region of interest. We present color and geometric features based statistical classification and segmentation algorithms yielding excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of uterine cervix and have the potential of developing an image based screening tool for cervical cancer.
Oosterwijk, J C; Knepflé, C F; Mesker, W E; Vrolijk, H; Sloos, W C; Pattenier, H; Ravkin, I; van Ommen, G J; Kanhai, H H; Tanke, H J
1998-01-01
This article explores the feasibility of the use of automated microscopy and image analysis to detect the presence of rare fetal nucleated red blood cells (NRBCs) circulating in maternal blood. The rationales for enrichment and for automated image analysis for "rare-event" detection are reviewed. We also describe the application of automated image analysis to 42 maternal blood samples, using a protocol consisting of one-step enrichment followed by immunocytochemical staining for fetal hemoglobin (HbF) and FISH for X- and Y-chromosomal sequences. Automated image analysis consisted of multimode microscopy and subsequent visual evaluation of image memories containing the selected objects. The FISH results were compared with the results of conventional karyotyping of the chorionic villi. By use of manual screening, 43% of the slides were found to be positive (>=1 NRBC), with a mean number of 11 NRBCs (range 1-40). By automated microscopy, 52% were positive, with on average 17 NRBCs (range 1-111). There was a good correlation between both manual and automated screening, but the NRBC yield from automated image analysis was found to be superior to that from manual screening (P=.0443), particularly when the NRBC count was >15. Seven (64%) of 11 XY fetuses were correctly diagnosed by FISH analysis of automatically detected cells, and all discrepancies were restricted to the lower cell-count range. We believe that automated microscopy and image analysis reduce the screening workload, are more sensitive than manual evaluation, and can be used to detect rare HbF-containing NRBCs in maternal blood. PMID:9837832
Morphological analysis of pore size and connectivity in a thick mixed-culture biofilm.
Rosenthal, Alex F; Griffin, James S; Wagner, Michael; Packman, Aaron I; Balogun, Oluwaseyi; Wells, George F
2018-05-19
Morphological parameters are commonly used to predict transport and metabolic kinetics in biofilms. Yet, quantification of biofilm morphology remains challenging due to imaging technology limitations and lack of robust analytical approaches. We present a novel set of imaging and image analysis techniques to estimate internal porosity, pore size distributions, and pore network connectivity to a depth of 1 mm at a resolution of 10 µm in a biofilm exhibiting both heterotrophic and nitrifying activity. Optical coherence tomography (OCT) scans revealed an extensive pore network with diameters as large as 110 µm directly connected to the biofilm surface and surrounding fluid. Thin section fluorescence in situ hybridization microscopy revealed ammonia oxidizing bacteria (AOB) distributed through the entire thickness of the biofilm. AOB were particularly concentrated in the biofilm around internal pores. Areal porosity values estimated from OCT scans were consistently lower than those estimated from multiphoton laser scanning microscopy, though the two imaging modalities showed a statistically significant correlation (r = 0.49, p<0.0001). Estimates of areal porosity were moderately sensitive to grey level threshold selection, though several automated thresholding algorithms yielded similar values to those obtained by manual thresholding performed by a panel of environmental engineering researchers (±25% relative error). These findings advance our ability to quantitatively describe the geometry of biofilm internal pore networks at length scales relevant to engineered biofilm reactors and suggest that internal pore structures provide crucial habitat for nitrifier growth. This article is protected by copyright. All rights reserved.
Wagner, Daniel-Christoph; Scheibe, Johanna; Glocke, Isabelle; Weise, Gesa; Deten, Alexander; Boltze, Johannes; Kranz, Alexander
2013-01-01
The astrocytic response to ischemic brain injury is characterized by specific alterations of glial cell morphology and function. Various studies described both beneficial and detrimental aspects of activated astrocytes, suggesting the existence of different subtypes. We investigated this issue using a novel object-based approach to study characteristics of astrogliosis after stroke. Spontaneously hypertensive rats received permanent middle cerebral artery occlusion. After 96 h, brain specimens were removed, fixed and stained for GFAP, glutamine synthetase (GS), S100Beta and Musashi1 (Msh1). Three regions of interest were defined (contralateral hemisphere, ipsilateral remote zone and infarct border zone), and confocal stacks were acquired (n=5 biological with each n=4 technical replicates). The stacks were background-corrected and colocalization between the selected markers and GFAP was determined using an automated thresholding algorithm. The fluorescence and colocalization channels were then converted into 3D-objects using both intensity and volume as filters to ultimately determine the final volumes of marker expression and colocalization, as well as the morphological changes of astrocyte process arborisation. We found that both S100Beta and Msh1 determined the same GFAP-positive astroglial cell population albeit the cellular compartments differed. GFAP stained most of the astrocyte processes and is hence suitable for the analysis of qualitative characteristics of astrogliosis. Due to its peri-nuclear localization, Msh1 is appropriate to estimate the total number of astrocytes even in regions with severe reactive astrogliosis. GS expression in GFAP-positive astrocytes was high in the remote zone and low at the infarct border, indicating the existence of astrocyte subclasses.
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with spherical element of size 23x23x5 yielded the best results. Inclusion of greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent as compared to using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
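The trachea step, a fixed air threshold followed by morphological closing with a flat, roughly spherical element, can be sketched with SciPy. The Hounsfield threshold is a placeholder, and the structuring element is built to match the reported 23x23x5 extent, with the short axis along the slice direction.

```python
import numpy as np
from scipy import ndimage

def ellipsoid_element(rx, ry, rz):
    """Binary ellipsoidal structuring element with the given semi-axes (voxels)."""
    z, y, x = np.ogrid[-rz:rz + 1, -ry:ry + 1, -rx:rx + 1]
    return (x / rx) ** 2 + (y / ry) ** 2 + (z / rz) ** 2 <= 1.0

def segment_airways(ct_hu, air_threshold=-950):
    """Fixed-threshold air mask followed by morphological closing; the element
    spans 23 x 23 voxels in-plane and 5 slices, as reported in the abstract."""
    air = ct_hu < air_threshold
    return ndimage.binary_closing(air, structure=ellipsoid_element(11, 11, 2))
```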
CP-CHARM: segmentation-free image classification made accessible.
Uhlmann, Virginie; Singh, Shantanu; Carpenter, Anne E
2016-01-27
Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely-used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM's results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. The proposed method preserves the strengths of WND-CHARM - it extracts a wide variety of morphological features directly on whole images thereby avoiding the need for cell segmentation, but additionally, it makes the methods easily accessible for researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole image classification strategy.
Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min
2013-09-01
The hippocampus is known to be an important structure and biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, its use requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening is proposed. First, atlas-based segmentation is applied to define an initial hippocampal region that provides a priori information for the graph-cuts step. The definition of initial seeds is further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening is applied to reduce false positives in the result produced by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, in terms of segmentation accuracy measured by the false positive and false negative ratios, the proposed method (precision=0.76±0.04, recall=0.86±0.05) produced lower error ratios than the conventional method (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
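The reported similarity index is computed here as the Dice coefficient between automated and manual masks, which is the usual reading of that term; treat the equivalence as an assumption.

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks (assuming the reported
    similarity index denotes the Dice coefficient)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)
```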
Egger, Robert; Narayanan, Rajeevan T.; Helmstaedter, Moritz; de Kock, Christiaan P. J.; Oberlaender, Marcel
2012-01-01
The three-dimensional (3D) structure of neural circuits is commonly studied by reconstructing individual or small groups of neurons in separate preparations. Investigation of structural organization principles or quantification of dendritic and axonal innervation thus requires integration of many reconstructed morphologies into a common reference frame. Here we present a standardized 3D model of the rat vibrissal cortex and introduce an automated registration tool that allows for precise placement of single neuron reconstructions. We (1) developed an automated image processing pipeline to reconstruct 3D anatomical landmarks, i.e., the barrels in Layer 4, the pia and white matter surfaces and the blood vessel pattern from high-resolution images, (2) quantified these landmarks in 12 different rats, (3) generated an average 3D model of the vibrissal cortex and (4) used rigid transformations and stepwise linear scaling to register 94 neuron morphologies, reconstructed from in vivo stainings, to the standardized cortex model. We find that anatomical landmarks vary substantially across the vibrissal cortex within an individual rat. In contrast, the 3D layout of the entire vibrissal cortex remains remarkably preserved across animals. This allows for precise registration of individual neuron reconstructions with approximately 30 µm accuracy. Our approach could be used to reconstruct and standardize other anatomically defined brain areas and may ultimately lead to a precise digital reference atlas of the rat brain. PMID:23284282
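An illustrative sketch (not the authors' pipeline) of the registration idea described above: a rigid transform followed by stepwise linear scaling of the depth coordinate between anatomical landmarks; the landmark depths and the example rotation are hypothetical.

```python
# Register a reconstructed morphology (N x 3 array of coordinates, microns)
# into a standardized frame: rigid transform, then piecewise linear rescaling
# of depth between landmark surfaces (e.g. pia, layer border, white matter).
import numpy as np

def rigid_transform(points, R, t):
    """points: (N, 3); R: (3, 3) rotation matrix; t: (3,) translation."""
    return points @ R.T + t

def stepwise_depth_scaling(points, src_landmarks, ref_landmarks, axis=2):
    """Linearly map depth between successive source landmarks onto the
    corresponding reference landmarks (both must be increasing)."""
    out = points.copy()
    out[:, axis] = np.interp(points[:, axis], src_landmarks, ref_landmarks)
    return out

# Example with hypothetical values
theta = np.deg2rad(12.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
morph = np.random.rand(500, 3) * [300, 300, 1800]           # fake reconstruction
aligned = rigid_transform(morph, R, t=np.array([50., -20., 0.]))
standardized = stepwise_depth_scaling(aligned,
                                      src_landmarks=[0, 700, 1750],
                                      ref_landmarks=[0, 650, 1700])
```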
Practical protocols for fast histopathology by Fourier transform infrared spectroscopic imaging
NASA Astrophysics Data System (ADS)
Keith, Frances N.; Reddy, Rohith K.; Bhargava, Rohit
2008-02-01
Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that combines the molecular selectivity of spectroscopy with the spatial specificity of optical microscopy. We demonstrate a new concept for obtaining high-fidelity data using commercial array detectors coupled to a microscope and Michelson interferometer. Next, we apply the developed technique to rapidly provide automated histopathologic information for breast cancer. Traditionally, disease diagnoses are based on optical examinations of stained tissue and involve skilled recognition of morphological patterns of specific cell types (histopathology). Consequently, histopathologic determinations are a time-consuming, subjective process with innate intra- and inter-operator variability. Utilizing endogenous molecular contrast inherent in vibrational spectra, specially designed tissue microarrays and pattern recognition of specific biochemical features, we report an integrated algorithm for automated classification. The developed protocol is objective, statistically rigorous and, being compatible with current tissue processing procedures, holds potential for routine clinical diagnoses. We first demonstrate that the classification of tissue type (histology) can be accomplished in a manner that is robust and rigorous. Since data quality and classifier performance are linked, we quantify the relationship through our analysis model. Last, we demonstrate the application of the minimum noise fraction (MNF) transform to improve tissue segmentation.
Larson, Matthew E.; Bement, William M.
2017-01-01
Proper spindle positioning at anaphase onset is essential for normal tissue organization and function. Here we develop automated spindle-tracking software and apply it to characterize mitotic spindle dynamics in the Xenopus laevis embryonic epithelium. We find that metaphase spindles first undergo a sustained rotation that brings them on-axis with their final orientation. This sustained rotation is followed by a set of striking stereotyped rotational oscillations that bring the spindle into near contact with the cortex and then move it rapidly away from the cortex. These oscillations begin to subside soon before anaphase onset. Metrics extracted from the automatically tracked spindles indicate that final spindle position is determined largely by cell morphology and that spindles consistently center themselves in the XY-plane before anaphase onset. Finally, analysis of the relationship between spindle oscillations and spindle position relative to the cortex reveals an association between cortical contact and anaphase onset. We conclude that metaphase spindles in epithelia engage in a stereotyped “dance,” that this dance culminates in proper spindle positioning and orientation, and that completion of the dance is linked to anaphase onset. PMID:28100633
Fish, Kenneth N; Sweet, Robert A; Deo, Anthony J; Lewis, David A
2008-11-13
A number of human brain diseases have been associated with disturbances in the structure and function of cortical synapses. Answering fundamental questions about the synaptic machinery in these disease states requires the ability to image and quantify small synaptic structures in tissue sections and to evaluate protein levels at these major sites of function. We developed a new automated segmentation imaging method specifically to answer such fundamental questions. The method takes advantage of advances in spinning disk confocal microscopy, and combines information from multiple iterations of a fluorescence intensity/morphological segmentation protocol to construct three-dimensional object masks of immunoreactive (IR) puncta. This new methodology is unique in that high- and low-fluorescing IR puncta are equally masked, allowing for quantification of the number of fluorescently labeled puncta in tissue sections. In addition, the shape of the final object masks closely matches the corresponding original data. Thus, the object masks can be used to extract information about the IR puncta (e.g., average fluorescence intensity of proteins of interest). Importantly, the segmentation method presented can be easily adapted for use with most existing microscopy analysis packages.
Context-based automated defect classification system using multiple morphological masks
Gleason, Shaun S.; Hunt, Martin A.; Sari-Sarraf, Hamed
2002-01-01
Automatic detection of defects during the fabrication of semiconductor wafers is largely automated, but the classification of those defects is still performed manually by technicians. This invention includes novel digital image analysis techniques that generate unique feature vector descriptions of semiconductor defects as well as classifiers that use these descriptions to automatically categorize the defects into one of a set of pre-defined classes. Feature extraction techniques based on multiple-focus images, multiple-defect mask images, and segmented semiconductor wafer images are used to create unique feature-based descriptions of the semiconductor defects. These feature-based defect descriptions are subsequently classified by a defect classifier into categories that depend on defect characteristics and defect contextual information, that is, the semiconductor process layer(s) with which the defect comes in contact. At the heart of the system is a knowledge database that stores and distributes historical semiconductor wafer and defect data to guide the feature extraction and classification processes. In summary, this invention takes as its input a set of images containing semiconductor defect information, and generates as its output a classification for the defect that describes not only the defect itself, but also the location of that defect with respect to the semiconductor process layers.
Brownian motion curve-based textural classification and its application in cancer diagnosis.
Mookiah, Muthu Rama Krishnan; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K
2011-06-01
To develop an automated diagnostic methodology based on textural features of the oral mucosal epithelium to discriminate normal and oral submucous fibrosis (OSF). A total of 83 normal and 29 OSF images from histopathologic sections of the oral mucosa are considered. The proposed diagnostic mechanism consists of two parts: feature extraction using the Brownian motion curve (BMC) and design of a suitable classifier. The discrimination ability of the features has been substantiated by statistical tests. An error back-propagation neural network (BPNN) is used to classify OSF vs. normal. In development of an automated oral cancer diagnostic module, BMC has played an important role in characterizing textural features of the oral images. Fisher's linear discriminant analysis yields 100% sensitivity and 85% specificity, whereas BPNN leads to 92.31% sensitivity and 100% specificity. In addition to intensity and morphology-based features, textural features are also very important, especially in histopathologic diagnosis of oral cancer. In view of this, a set of textural features are extracted using the BMC for the diagnosis of OSF. Finally, a textural classifier is designed using BPNN, which leads to a diagnostic performance with 96.43% accuracy.
Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego
2010-11-01
Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. The segmentation algorithm yielded an average area overlap of 86% between automated segmentations and true OD regions. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
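A minimal sketch of the circular-boundary approximation step using a Circular Hough Transform; the radius range and the use of the green channel are assumptions, and the paper's morphological preprocessing and voting-based location step are not reproduced here.

```python
# Edge detection followed by a Circular Hough Transform to approximate the
# optic disc boundary with a circle (centre and radius in pixels).
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def approximate_od_boundary(green_channel, radius_range=(60, 110)):
    edges = canny(green_channel, sigma=2.0)
    radii = np.arange(radius_range[0], radius_range[1], 2)
    hspaces = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]
```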
Valente, Mariana; Araújo, Ana; Esteves, Tiago; Laundos, Tiago L; Freire, Ana G; Quelhas, Pedro; Pinto-do-Ó, Perpétua; Nascimento, Diana S
2015-12-02
Cardiac therapies are commonly tested preclinically in small-animal models of myocardial infarction. Following functional evaluation, post-mortem histological analysis is essential to assess morphological and molecular alterations underlying the effectiveness of treatment. However, non-methodical and inadequate sampling of the left ventricle often leads to misinterpretations and variability, making direct study comparisons unreliable. Protocols are provided for representative sampling of the ischemic mouse heart followed by morphometric analysis of the left ventricle. Extending the use of this sampling to other types of in situ analysis is also illustrated through the assessment of neovascularization and cellular engraftment in a cell-based therapy setting. This is of interest to the general cardiovascular research community as it details methods for standardization and simplification of histo-morphometric evaluation of emergent heart therapies. Copyright © 2015 John Wiley & Sons, Inc.
A semi-automated method for bone age assessment using cervical vertebral maturation.
Baptista, Roberto S; Quaglio, Camila L; Mourad, Laila M E H; Hummel, Anderson D; Caetano, Cesar Augusto C; Ortolani, Cristina Lúcia F; Pisa, Ivan T
2012-07-01
To propose a semi-automated method for pattern classification to predict individuals' stage of growth based on morphologic characteristics that are described in the modified cervical vertebral maturation (CVM) method of Baccetti et al. A total of 188 lateral cephalograms were collected, digitized, evaluated manually, and grouped into cervical stages by two expert examiners. Landmarks were located on each image and measured. Three pattern classifiers based on the Naïve Bayes algorithm were built and assessed using a software program. The classifier with the greatest accuracy according to the weighted kappa test was considered best; it showed a weighted kappa coefficient of 0.861 ± 0.020. If an adjacent estimated pre-stage or post-stage value was taken to be acceptable, the classifier would show a weighted kappa coefficient of 0.992 ± 0.019. Results from this study show that the proposed semi-automated pattern classification method can help orthodontists identify the stage of CVM. However, additional studies are needed before this semi-automated classification method for CVM assessment can be implemented in clinical practice.
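A short sketch of the classification and agreement analysis described above, assuming the landmark measurements are already arranged in a feature matrix; the feature layout and cross-validation scheme are assumptions, not the study's exact protocol.

```python
# Naive Bayes classification of cervical stages plus a weighted kappa
# against the expert-assigned stages.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

def evaluate_cvm_classifier(X, stages):
    """X: (n_samples, n_features) landmark measurements; stages: CS labels."""
    clf = GaussianNB()
    predicted = cross_val_predict(clf, X, stages, cv=10)
    return cohen_kappa_score(stages, predicted, weights="linear")
```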
Progress in Fully Automated Abdominal CT Interpretation
Summers, Ronald M.
2016-01-01
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessment of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors is one area where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews the progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation. PMID:27101207
Friedman, S N; Bambrough, P J; Kotsarini, C; Khandanpour, N; Hoggard, N
2012-12-01
Despite the established role of MRI in the diagnosis of brain tumours, histopathological assessment remains the clinically used technique, especially for the glioma group. Relative cerebral blood volume (rCBV) is a dynamic susceptibility-weighted contrast-enhanced perfusion MRI parameter that has been shown to correlate with tumour grade, but assessment requires a specialist and is time-consuming. We developed analysis software to determine glioma grades from perfusion rCBV scans in a manner that is quick, easy and does not require a specialist operator. MRI perfusion data from 47 patients with different histopathological grades of glioma were analysed with custom-designed software. Semi-automated analysis was performed with a specialist and non-specialist operator separately determining the maximum rCBV value corresponding to the tumour. Automated histogram analysis was performed by calculating the mean, standard deviation, median, mode, skewness and kurtosis of rCBV values. All values were compared with the histopathologically assessed tumour grade. A strong correlation between specialist and non-specialist observer measurements was found. Significantly different values were obtained between tumour grades using both semi-automated and automated techniques, consistent with previous results. Of the semi-automated measurements, the single-pixel maximum rCBV from the raw (unnormalised) data had the strongest correlation with glioma grade. Of the automated histogram measures, the standard deviation of the raw data correlated most strongly. Semi-automated calculation of the raw maximum rCBV value was the best indicator of tumour grade and does not require a specialist operator. Both semi-automated and automated MRI perfusion techniques provide viable non-invasive alternatives to biopsy for glioma tumour grading.
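A brief sketch of the automated histogram analysis listed above (mean, standard deviation, median, mode, skewness, kurtosis), applied to the rCBV values inside a tumour region; the binning used to estimate the mode is an assumption.

```python
# Summary statistics of a tumour rCBV distribution; the mode is taken as the
# centre of the most populated histogram bin.
import numpy as np
from scipy.stats import skew, kurtosis

def rcbv_histogram_features(rcbv_values, bins=64):
    v = np.asarray(rcbv_values, dtype=float)
    counts, edges = np.histogram(v, bins=bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return {
        "mean": v.mean(), "std": v.std(ddof=1), "median": np.median(v),
        "mode": mode, "skewness": skew(v), "kurtosis": kurtosis(v),
        "max": v.max(),          # analogue of the semi-automated maximum rCBV
    }
```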
Seitz, B; Müller, E E; Langenbucher, A; Kus, M M; Naumann, G O
1995-09-01
This prospective study intended to quantify and classify morphological changes of the corneal endothelium in pseudoexfoliation syndrome (PSX) after having tested reproducibility and validity of a new automated technique for analysing corneal endothelium. We used a contact specular microscope combined with a video camera (Tomey EM-1000) and a computer (IBM compatible PC, 486DX33) with suitable software (Tomey EM-1100, version 0.94). Video images of corneal endothelium (area: 0.312 mm2) are passed directly into the computer input by means of a frame grabber and are automatically processed. Missing or falsely recognized cell borders are corrected using the mouse. We examined 85 eyes with PSX and 33 healthy control eyes. At first, retest-stability and validity of the cell density measurements were assessed in the PSX-eyes. A qualitative analysis of the corneal endothelium followed. The cell density measurements showed a high retest-stability (reliability coefficient r = 0.974). The values of the automated method (2040 +/- 285 cells/mm2) and those of manual cell counting (2041 +/- 275 cells/mm2) did not differ significantly (p = 0.441). The mean difference was 3.1 +/- 2.4%. Comparing the 85 PSX-eyes (2052 +/- 264 cells/mm2) to the 33 control eyes (2372 +/- 276 cells/mm2), there was a significant reduction of cell density (p < 0.001). The cell density of the 69 PSX-eyes with glaucoma (2014 +/- 254 cells/mm2) was significantly lower than that of the 16 PSX-eyes without glaucoma (2214 +/- 251 cells/mm2) (p = 0.008). Eighty-five percent of the 85 PSX-eyes showed polymegalism, 77% pleomorphism; 68% had white deposits and 42% guttae. White deposits and guttae were significantly more frequent and more intensive in PSX-eyes than in control eyes. PSX-eyes with and those without glaucoma showed no significant differences concerning the four qualitative parameters. The automated method for analysing corneal endothelium quickly provides reproducible and valid results using the correction mode of the software. Semiquantitative analysis of qualitative parameters permits a more differentiated assessment of keratopathy in pseudoexfoliation syndrome than does mere consideration of endothelial cell density. Both evaluations are recommended to assess the risk of a diffuse endothelial decompensation before intraocular surgery.
Effects of automation of information-processing functions on teamwork.
Wright, Melanie C; Kaber, David B
2005-01-01
We investigated the effects of automation as applied to different stages of information processing on team performance in a complex decision-making task. Forty teams of 2 individuals performed a simulated Theater Defense Task. Four automation conditions were simulated with computer assistance applied to realistic combinations of information acquisition, information analysis, and decision selection functions across two levels of task difficulty. Multiple measures of team effectiveness and team coordination were used. Results indicated different forms of automation have different effects on teamwork. Compared with a baseline condition, an increase in automation of information acquisition led to an increase in the ratio of information transferred to information requested; an increase in automation of information analysis resulted in higher team coordination ratings; and automation of decision selection led to better team effectiveness under low levels of task difficulty but at the cost of higher workload. The results support the use of early and intermediate forms of automation related to acquisition and analysis of information in the design of team tasks. Decision-making automation may provide benefits in more limited contexts. Applications of this research include the design and evaluation of automation in team environments.
Introducing Explorer of Taxon Concepts with a case study on spider measurement matrix building.
Cui, Hong; Xu, Dongfang; Chong, Steven S; Ramirez, Martin; Rodenhausen, Thomas; Macklin, James A; Ludäscher, Bertram; Morris, Robert A; Soto, Eduardo M; Koch, Nicolás Mongiardino
2016-11-17
Taxonomic descriptions are traditionally composed in natural language and published in a format that cannot be directly used by computers. The Exploring Taxon Concepts (ETC) project has been developing a set of web-based software tools that convert morphological descriptions published in telegraphic style to character data that can be reused and repurposed. This paper introduces the first semi-automated pipeline, to our knowledge, that converts morphological descriptions into taxon-character matrices to support systematics and evolutionary biology research. We then demonstrate and evaluate the use of the ETC Input Creation - Text Capture - Matrix Generation pipeline to generate body part measurement matrices from a set of 188 spider morphological descriptions and report the findings. From the given set of spider taxonomic publications, two versions of input (original and normalized) were generated and used by the ETC Text Capture and ETC Matrix Generation tools. The tools produced two corresponding spider body part measurement matrices, and the matrix from the normalized input was found to be much more similar to a gold standard matrix hand-curated by the scientist co-authors. Special conventions utilized in the original descriptions (e.g., the omission of measurement units) were attributed to the lower performance of using the original input. The results show that simple normalization of the description text greatly increased the quality of the machine-generated matrix and reduced edit effort. The machine-generated matrix also helped identify issues in the gold standard matrix. ETC Text Capture and ETC Matrix Generation are low-barrier and effective tools for extracting measurement values from spider taxonomic descriptions and are more effective when the descriptions are self-contained. Special conventions that make the description text less self-contained challenge automated extraction of data from biodiversity descriptions and hinder the automated reuse of the published knowledge. The tools will be updated to support new requirements revealed in this case study.
An Intelligent Automation Platform for Rapid Bioprocess Design
Wu, Tianyi
2014-01-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579
Improved detection of soma location and morphology in fluorescence microscopy images of neurons.
Kayasandik, Cihan Bilge; Labate, Demetrio
2016-12-01
Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including high-content screening applications that require quantification of multiple morphological properties of neurons. Despite recent advances in image processing targeted to neurobiological applications, existing soma detection algorithms are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called the Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability and computational efficiency. Our algorithm will facilitate the development of automated platforms for high-content neuron image processing. Matlab code is released open-source and freely available to the scientific community. Copyright © 2016 Elsevier B.V. All rights reserved.
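A conceptual sketch of a directional-ratio-style measure (not the authors' separable-convolution implementation): the image is convolved with a bank of oriented line filters, and the ratio of the minimum to the maximum response is close to 1 in isotropic blob-like regions (somas) and small along elongated neurites. Filter length, number of orientations and the thresholds are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(length, angle_rad):
    """Binary line-shaped kernel of a given length and orientation."""
    half = length // 2
    k = np.zeros((length, length))
    for t in range(-half, half + 1):
        r = int(round(half + t * np.sin(angle_rad)))
        c = int(round(half + t * np.cos(angle_rad)))
        k[r, c] = 1.0
    return k / k.sum()

def directional_ratio(image, length=21, n_angles=8):
    responses = np.stack([
        convolve(image.astype(float), line_kernel(length, a), mode="nearest")
        for a in np.linspace(0, np.pi, n_angles, endpoint=False)
    ])
    return responses.min(axis=0) / (responses.max(axis=0) + 1e-9)

# Candidate soma pixels: bright and locally isotropic (illustrative thresholds)
# somas = (directional_ratio(img) > 0.8) & (img > img.mean() + 2 * img.std())
```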
Automated Root Tracking with "Root System Analyzer"
NASA Astrophysics Data System (ADS)
Schnepf, Andrea; Jin, Meina; Ockert, Charlotte; Bol, Roland; Leitner, Daniel
2015-04-01
Crucial factors for plant development are water and nutrient availability in soils. Thus, root architecture is a main aspect of plant productivity and needs to be accurately considered when describing root processes. Images of root architecture contain a huge amount of information, and image analysis helps to recover parameters describing certain root architectural and morphological traits. The majority of imaging systems for root systems are designed for two-dimensional images, such as RootReader2, GiA Roots, SmartRoot, EZ-Rhizo, and Growscreen, but most of them are semi-automated and require the user to click on each root. "Root System Analyzer" is a new, fully automated approach for recovering root architectural parameters from two-dimensional images of root systems. Individual roots can still be corrected manually in a user interface if required. The algorithm starts with a sequence of segmented two-dimensional images showing the dynamic development of a root system. For each image, morphological operators are used for skeletonization. Based on this, a graph representation of the root system is created. A dynamic root architecture model helps to determine which edges of the graph belong to an individual root. The algorithm elongates each root at the root tip and simulates growth confined within the already existing graph representation. The increment of root elongation is calculated assuming constant growth. For each root, the algorithm finds all possible paths and elongates the root in the direction of the optimal path. In this way, each edge of the graph is assigned to one or more coherent roots. Image sequences of root systems are handled in such a way that the previous image is used as a starting point for the current image. The algorithm is implemented in a set of Matlab m-files. The output of Root System Analyzer is a data structure that includes, for each root, an identification number, the branching order, the time of emergence, the parent identification number, the distance from the branching point to the base of the parent root, the root length, the root radius and the nodes that belong to each individual root path. This information is relevant for the analysis of dynamic root system development as well as the parameterisation of root architecture models. Here, we show results of Root System Analyzer applied to analyse the root systems of wheat plants grown in rhizotrons. Different treatments with respect to soil moisture and apatite concentrations were used to test the effects of those conditions on root system development. Photographs of the root systems were taken at high spatial and temporal resolution and the root systems were automatically tracked.
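A minimal illustration of the skeletonization and graph-node identification step for one segmented root image; this is a Python sketch, not the Matlab m-files described above, and the neighbour-count rule for tips and branch points is a simplifying assumption.

```python
# Skeletonize a binary root image and locate end points (root tips/base) and
# branch points from the number of 8-connected skeleton neighbours.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_nodes(binary_root_image):
    skel = skeletonize(binary_root_image > 0)
    neighbour_kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    n_neighbours = convolve(skel.astype(int), neighbour_kernel, mode="constant")
    endpoints = skel & (n_neighbours == 1)      # root tips / base
    branches = skel & (n_neighbours >= 3)       # lateral branching points
    return skel, np.argwhere(endpoints), np.argwhere(branches)
```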
Retinal imaging and image analysis.
Abràmoff, Michael D; Garvin, Mona K; Sonka, Milan
2010-01-01
Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships.
BlastNeuron for Automated Comparison, Retrieval and Clustering of 3D Neuron Morphologies.
Wan, Yinan; Long, Fuhui; Qu, Lei; Xiao, Hang; Hawrylycz, Michael; Myers, Eugene W; Peng, Hanchuan
2015-10-01
Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics from neuronal morphology such as size and number of branches, which are not fully suitable for detecting local similarities and differences in the detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, level of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 newly and manually reconstructed neurons, and (3) neuron reconstructions produced using several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from large morphology databases, identify the local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.
1987-11-01
... differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based ... comparative analysis as an important component; the explanation is used in many different ways. One method of automated design is the principled ...
Hayes, Ashley R; Gayzik, F Scott; Moreno, Daniel P; Martin, R Shayn; Stitzel, Joel D
The purpose of this study was to use data from a multi-modality image set of males and females representing the 5th, 50th, and 95th percentiles (n=6) to examine abdominal organ location, morphology, and rib coverage variations between supine and seated postures. Medical images were acquired from volunteers in three image modalities including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and upright MRI (uMRI). A manual and semi-automated segmentation method was used to acquire data and a registration technique was employed to conduct a comparative analysis between abdominal organs (liver, spleen, and kidneys) in both postures. Location of abdominal organs, defined by center of gravity movement, varied between postures and was found to be significant (p=0.002 to p=0.04) in multiple directions for each organ. In addition, morphology changes, including compression and expansion, were seen in each organ as a result of postural changes. Rib coverage, defined as the projected area of the ribs onto the abdominal organs, was measured in frontal, lateral, and posterior projections, and also varied between postures. A significant change in rib coverage between postures was measured for the spleen and right kidney (p=0.03 and p=0.02). The results indicate that posture affects the location, morphology and rib coverage area of abdominal organs and these implications should be noted in computational modeling efforts focused on a seated posture.
Accuracy of a remote quantitative image analysis in the whole slide images.
Słodkowska, Janina; Markiewicz, Tomasz; Grala, Bartłomiej; Kozłowski, Wojciech; Papierz, Wielisław; Pleskacz, Katarzyna; Murawski, Piotr
2011-03-30
The rationale for choosing a remote quantitative method to support a diagnostic decision requires empirical studies and knowledge of scenarios, including valid telepathology standards. Tumours of the central nervous system [CNS] are graded on the basis of morphological features and the Ki-67 labelling index [Ki-67 LI]. Various methods have been applied for Ki-67 LI estimation. Recently we introduced the Computerized Analysis of Medical Images [CAMI] software for automated Ki-67 LI counting in digital images. The aim of our study was to explore the accuracy and reliability of a remote assessment of Ki-67 LI with CAMI software applied to whole slide images [WSI]. The WSI representing CNS tumours (18 meningiomas and 10 oligodendrogliomas) were stored on the server of the Warsaw University of Technology. The digital copies of entire glass slides were created automatically by the Aperio ScanScope CS with a 20x or 40x objective. Aperio's ImageScope software provided functionality for remote viewing of WSI. The Ki-67 LI assessment was carried out within 2 out of 20 selected fields of view (40x objective) representing the highest labelling areas in each WSI. The Ki-67 LI counting was performed by 3 methods: 1) manual reading in the light microscope (LM), 2) automated counting with CAMI software on the digital images (DI), and 3) remote quantitation on the WSIs (WSI method). The quality of WSIs and the technical efficiency of the on-line system were analysed. A comparative statistical analysis was performed for the results obtained by the 3 methods of Ki-67 LI counting. The preliminary analysis showed that in 18% of WSI the Ki-67 LI results differed from those obtained with the other 2 counting methods when the quality of the glass slides was below the standard range. The results of our investigation indicate that remote automated Ki-67 LI analysis performed with the CAMI algorithm on whole slide images of meningiomas and oligodendrogliomas could be successfully used as an alternative to manual reading as well as to digital image quantitation with CAMI software. According to our observations, remote supervision/consultation and training are necessary for the effective use of remote quantitative analysis of WSI.
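A simplified sketch of how a labelling index can be computed automatically for one field of view, assuming separate channels or stain-separated images for Ki-67-positive and all nuclei; thresholds and the colour deconvolution step are simplified, and the CAMI software itself is not reproduced here.

```python
# Ki-67 labelling index = 100 * (positive nuclei) / (all nuclei) in one field.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def count_nuclei(channel, min_size=30):
    mask = channel > threshold_otsu(channel)
    lab = label(mask)
    sizes = np.bincount(lab.ravel())[1:]          # object sizes, background excluded
    return int(np.sum(sizes >= min_size))

def ki67_labelling_index(dab_channel, nuclei_channel):
    positive = count_nuclei(dab_channel)
    total = count_nuclei(nuclei_channel)
    return 100.0 * positive / max(total, 1)        # Ki-67 LI in percent
```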
Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations
2016-01-01
Differentiation between ischaemic and non-ischaemic transient ST segment events of long term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique due to the single point of measurement and severe noise which is often present. We developed a robust noise resistant orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of ST segment. Its basis functions have similar shapes to typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen and Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB which is freely available on the PhysioNet website and were contributed to the LTST DB. The KLT and LPT present new possibilities for human-expert diagnostics, and for automated ischaemia detection. PMID:26863140
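A small sketch of the Legendre-projection idea: each measured ST segment is projected onto low-order Legendre polynomials, whose shapes resemble level, slope and scooping changes, and the coefficients form the morphologic feature vector for that beat. The polynomial degree and the rough interpretation of the coefficients are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def lpt_features(st_segment, degree=3):
    """st_segment: 1-D array of ST-segment amplitudes for one beat."""
    x = np.linspace(-1.0, 1.0, len(st_segment))    # Legendre domain
    coeffs = legendre.legfit(x, st_segment, deg=degree)
    return coeffs                                   # ~level, slope, scooping, ...

# Feature-vector time series: one coefficient vector per beat
# features = np.array([lpt_features(beat) for beat in st_segments])
```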
NASA Astrophysics Data System (ADS)
Gorlach, Igor; Wessel, Oliver
2008-09-01
In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality due to reasons related to plant location, such as inadequate workers' skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.
ERIC Educational Resources Information Center
DOLBY, J.L.; AND OTHERS
The study is concerned with the linguistic problem involved in text compression: extracting, indexing, and the automatic creation of special-purpose citation dictionaries. In spite of early success in using large-scale computers to automate certain human tasks, these problems remain among the most difficult to solve. Essentially, the problem is to…
DOT National Transportation Integrated Search
1997-01-01
Special connector ramps linking the automated lanes at automated highway-to-automated highway interchanges may be needed to enable continuous automated driving between two crossing highways. Although a typical cloverleaf configuration has only two le...
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
This task analysis report for the Robotics/Automated Systems Technician (RAST) curriculum project first provides a RAST job description. It then discusses the task analysis, including the identification of tasks, the grouping of tasks according to major areas of specialty, and the comparison of the competencies to existing or new courses to…
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; Hopkins, C.
2017-12-01
A river channel and its associated riparian corridor exhibit a pattern of nested, geomorphically imprinted, lateral inundation zones (IZs). Each zone plays a key role in fluvial geomorphic processes and ecological functions. Within each zone, distinct landforms (aka geomorphic or morphological units, MUs) reside at the 0.1-10 channel width scale. These features are basic units linking river corridor morphology with local ecosystem services. Objective, automated delineation of nested inundation zones and morphological units remains a significant scientific challenge. This study describes and demonstrates new, objective methods for solving this problem, using the 35-km alluvial lower Yuba River as a testbed. A detrended, high-resolution digital elevation model constructed from near-census topographic and bathymetric data was produced and used in a hypsograph analysis, a commonly used method in oceanographic studies capable of identifying slope breaks at IZ transitions. Geomorphic interpretation mindful of the river's setting was required to properly describe each IZ identified by the hypsograph analysis. Then, a 2D hydrodynamic model was used to determine what flow yields the wetted area that most closely matches each IZ domain. The model also provided meter-scale rasters of depth and velocity useful for MU mapping. Even though MUs are discharge-independent landforms, they can be revealed by analyzing their overlying hydraulics at low flows. Baseflow depth and velocity rasters are used along with a hydraulic landform classification system to quantitatively delineate in-channel bed MU types. In-channel bar and off-channel flood and valley MUs are delineated using a combination of hydraulic and geomorphic indicators, such as depth and velocity rasters for different discharges, topographic contours, NAIP imagery, and a raster of vegetation. The ability to objectively delineate inundation zones and morphological units in tandem allows for better informed river management and restoration strategies as well as scientific studies about abiotic-biotic linkages.
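A brief sketch of the hypsograph analysis mentioned above: cumulative area as a function of detrended elevation, whose slope breaks suggest transitions between nested inundation zones. The raster layout, NaN masking and level count are assumptions; the hydrodynamic modeling step is not shown.

```python
import numpy as np

def hypsograph(detrended_dem, cell_area, n_levels=200):
    """detrended_dem: 2-D array of detrended elevations (NaN outside corridor);
    cell_area: area of one raster cell (e.g. m^2)."""
    z = detrended_dem[np.isfinite(detrended_dem)].ravel()
    levels = np.linspace(z.min(), z.max(), n_levels)
    area = np.array([(z <= lvl).sum() * cell_area for lvl in levels])
    slope = np.gradient(area, levels)   # breaks in slope suggest IZ boundaries
    return levels, area, slope
```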
Predicting Future Morphological Changes of Lesions from Radiotracer Uptake in 18F-FDG-PET Images
Bagci, Ulas; Yao, Jianhua; Miller-Jaster, Kirsten; Chen, Xinjian; Mollura, Daniel J.
2013-01-01
We introduce a novel computational framework to enable automated identification of texture and shape features of lesions on 18F-FDG-PET images through a graph-based image segmentation method. The proposed framework predicts future morphological changes of lesions with high accuracy. The presented methodology has several benefits over conventional qualitative and semi-quantitative methods, due to its fully quantitative nature and high accuracy in each step of (i) detection, (ii) segmentation, and (iii) feature extraction. To evaluate our proposed computational framework, thirty patients received 2 18F-FDG-PET scans (60 scans total) at two different time points. Metastatic papillary renal cell carcinoma, cerebellar hemangioblastoma, non-small cell lung cancer, neurofibroma, lymphomatoid granulomatosis, lung neoplasm, neuroendocrine tumor, soft tissue thoracic mass, nonnecrotizing granulomatous inflammation, renal cell carcinoma with papillary and cystic features, diffuse large B-cell lymphoma, metastatic alveolar soft part sarcoma, and small cell lung cancer were included in this analysis. The radiotracer accumulation in patients' scans was automatically detected and segmented by the proposed segmentation algorithm. Delineated regions were used to extract shape and textural features, with the proposed adaptive feature extraction framework, as well as standardized uptake values (SUV) of uptake regions, to conduct a broad quantitative analysis. Evaluation of segmentation results indicates that our proposed segmentation algorithm has a mean dice similarity coefficient of 85.75±1.75%. We found that 28 of 68 extracted imaging features correlated well with SUVmax (p<0.05), and some of the textural features (such as entropy and maximum probability) were superior in predicting morphological changes of radiotracer uptake regions longitudinally, compared to a single intensity feature such as SUVmax. We also found that integrating textural features with SUV measurements significantly improves the prediction accuracy of morphological changes (Spearman correlation coefficient = 0.8715, p<2e-16). PMID:23431398
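A sketch of two of the textural features named above (entropy and maximum probability), computed from a grey-level co-occurrence matrix over a delineated uptake region, with SUV statistics alongside; the quantization to 32 grey levels is an assumption and, for brevity, background pixels are crudely included as grey level 0.

```python
import numpy as np
from skimage.feature import graycomatrix

def texture_and_suv_features(suv_image, mask, levels=32):
    roi = suv_image[mask]
    q = np.zeros_like(suv_image, dtype=np.uint8)
    q[mask] = np.digitize(roi, np.linspace(roi.min(), roi.max(), levels - 1))
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                       # average over distance/angle
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"SUVmax": roi.max(), "SUVmean": roi.mean(),
            "entropy": entropy, "max_probability": p.max()}
```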
Liu, Ting; Maurovich-Horvat, Pál; Mayrhofer, Thomas; Puchner, Stefan B; Lu, Michael T; Ghemigian, Khristine; Kitslaar, Pieter H; Broersen, Alexander; Pursnani, Amit; Hoffmann, Udo; Ferencik, Maros
2018-02-01
Semi-automated software can provide quantitative assessment of atherosclerotic plaques on coronary CT angiography (CTA). The relationship between established qualitative high-risk plaque features and quantitative plaque measurements has not been studied. We analyzed the association between quantitative plaque measurements and qualitative high-risk plaque features on coronary CTA. We included 260 patients with plaque who underwent coronary CTA in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) II trial. Quantitative plaque assessment and qualitative plaque characterization were performed on a per coronary segment basis. Quantitative coronary plaque measurements included plaque volume, plaque burden, remodeling index, and diameter stenosis. In qualitative analysis, high-risk plaque was present if positive remodeling, low CT attenuation plaque, napkin-ring sign or spotty calcium were detected. Univariable and multivariable logistic regression analyses were performed to assess the association between quantitative and qualitative high-risk plaque assessment. Among 888 segments with coronary plaque, high-risk plaque was present in 391 (44.0%) segments by qualitative analysis. In quantitative analysis, segments with high-risk plaque had higher total plaque volume, low CT attenuation plaque volume, plaque burden and remodeling index. Quantitatively assessed low CT attenuation plaque volume (odds ratio 1.12 per 1 mm³, 95% CI 1.04-1.21), positive remodeling (odds ratio 1.25 per 0.1, 95% CI 1.10-1.41) and plaque burden (odds ratio 1.53 per 0.1, 95% CI 1.08-2.16) were associated with high-risk plaque. Quantitative coronary plaque characteristics (low CT attenuation plaque volume, positive remodeling and plaque burden) measured by semi-automated software correlated with qualitative assessment of high-risk plaque features.
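A sketch of the kind of logistic regression that yields odds ratios per unit of the quantitative measurements; the data-frame column names are hypothetical placeholders, and predictors are assumed pre-scaled to the units reported above (per mm³ and per 0.1).

```python
# Multivariable logistic regression of qualitative high-risk plaque on
# quantitative plaque measurements, reported as odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def odds_ratios(df, outcome="high_risk_plaque",
                predictors=("low_hu_volume_mm3", "remodeling_index_x10", "plaque_burden_x10")):
    X = sm.add_constant(df[list(predictors)])
    fit = sm.Logit(df[outcome], X).fit(disp=False)
    or_ci = np.exp(fit.conf_int())
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": or_ci[0], "CI_high": or_ci[1]}).drop("const")
```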
Automated Dispersion and Orientation Analysis for Carbon Nanotube Reinforced Polymer Composites
Gao, Yi; Li, Zhuo; Lin, Ziyin; Zhu, Liangjia; Tannenbaum, Allen; Bouix, Sylvain; Wong, C.P.
2012-01-01
The properties of carbon nanotube (CNT)/polymer composites are strongly dependent on the dispersion and orientation of CNTs in the host matrix. Quantification of the dispersion and orientation of CNTs by microstructure observation and image analysis has been demonstrated as a useful way to understand the structure-property relationship of CNT/polymer composites. However, due to the various morphologies and large number of CNTs in one image, automatic and accurate identification of CNTs has become the bottleneck for dispersion/orientation analysis. To solve this problem, shape identification is performed for each pixel in the filler identification step, so that individual CNTs can be extracted from images automatically. The improved filler identification enables more accurate analysis of CNT dispersion and orientation. The obtained dispersion index and orientation index of both synthetic and real images from model compounds correspond well with the observations. Moreover, these indices help to explain the electrical properties of the CNT/silicone composite, which is used as a model compound. This method can also be extended to other polymer composites with high aspect ratio fillers. PMID:23060008
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
The six new robotics and automated systems specialty courses developed by the Robotics/Automated Systems Technician (RAST) project are described in this publication. Course titles are Fundamentals of Robotics and Automated Systems, Automated Systems and Support Components, Controllers for Robots and Automated Systems, Robotics and Automated…
NASA Astrophysics Data System (ADS)
Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago
2015-11-01
Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.
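A simplified sketch of the segmentation idea described above: a two-step contrast enhancement followed by morphological clean-up, with the inner limiting membrane taken as the first bright pixel from the top of each A-scan (column). Parameters are illustrative, not the published settings.

```python
import numpy as np
from skimage import exposure, filters, morphology

def segment_ilm(bscan):
    b = (bscan - bscan.min()) / (np.ptp(bscan) + 1e-9)
    img = exposure.equalize_adapthist(b)                      # step 1: local contrast
    img = exposure.rescale_intensity(img, out_range=(0, 1))   # step 2: global stretch
    mask = img > filters.threshold_otsu(img)
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = morphology.remove_small_objects(mask, min_size=200)
    ilm_rows = np.where(mask.any(axis=0), mask.argmax(axis=0), -1)
    return ilm_rows            # row index of the ILM per column, -1 if none found
```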
Forecasting Flare Activity Using Deep Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Hernandez, T.
2017-12-01
Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.
Galactic satellite systems: radial distribution and environment dependence of galaxy morphology
NASA Astrophysics Data System (ADS)
Ann, H. B.; Park, Changbom; Choi, Yun-Young
2008-09-01
We have studied the radial distribution of the early (E/S0) and late (S/Irr) types of satellites around bright host galaxies. We made a volume-limited sample of 4986 satellites brighter than Mr = -18.0 associated with 2254 hosts brighter than Mr = -19.0 from the Sloan Digital Sky Survey Data Release 5 sample. The morphology of satellites is determined by an automated morphology classifier, but the host galaxies are visually classified. We found segregation of satellite morphology as a function of the projected distance from the host galaxy. The amplitude and shape of the early-type satellite fraction profile are found to depend on the host luminosity. This is the morphology-radius/density relation at the galactic scale. There is a strong tendency for morphology conformity between the host galaxy and its satellites. The early-type fraction of satellites hosted by early-type galaxies is systematically larger than that of late-type hosts, and is a strong function of the distance from the host galaxies. Fainter satellites are more vulnerable to the morphology transformation effects of hosts. Dependence of satellite morphology on the large-scale background density was detected. The fraction of early-type satellites increases in high-density regions for both early- and late-type hosts. It is argued that the conformity in morphology of galactic satellite systems originates mainly from the hydrodynamical and radiative effects of hosts on satellites.
Automated Program Analysis for Cybersecurity (APAC)
2016-07-14
Automated Program Analysis for Cybersecurity (APAC). Final technical report, Five Directions, Inc., July 2016; approved for public release. Contract number FA8750-14-C-0050, program element 61101E; author(s): William Arbaugh ... Abbreviations used in the report include AC Team (Adversarial Challenge Team, responsible for creating malicious applications), APAC (Automated Program Analysis for Cybersecurity), and BAE (BAE Systems).
ADP Analysis project for the Human Resources Management Division
NASA Technical Reports Server (NTRS)
Tureman, Robert L., Jr.
1993-01-01
The ADP (Automated Data Processing) Analysis Project was conducted for the Human Resources Management Division (HRMD) of NASA's Langley Research Center. The three major areas of work in the project were computer support, automated inventory analysis, and an ADP study for the Division. The goal of the computer support work was to determine automation needs of Division personnel and help them solve computing problems. The goal of automated inventory analysis was to find a way to analyze installed software and usage on a Macintosh. Finally, the ADP functional systems study for the Division was designed to assess future HRMD needs concerning ADP organization and activities.
Automated frame selection process for high-resolution microendoscopy
NASA Astrophysics Data System (ADS)
Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2015-04-01
We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
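An illustrative sketch of one way to pick a representative low-motion frame: score each frame by its mean absolute difference to its temporal neighbours and return the frame with the smallest score. The published algorithm may use different criteria; this only conveys the idea.

```python
import numpy as np

def select_frame(video):
    """video: array of shape (n_frames, height, width), grayscale."""
    v = video.astype(float)
    diffs = np.abs(np.diff(v, axis=0)).mean(axis=(1, 2))   # motion between consecutive frames
    scores = np.empty(len(v))
    scores[0], scores[-1] = diffs[0], diffs[-1]
    scores[1:-1] = 0.5 * (diffs[:-1] + diffs[1:])          # average motion to both neighbours
    best = int(np.argmin(scores))
    return best, v[best]
```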
Kim, Jungkyu; Jensen, Erik C; Stockton, Amanda M; Mathies, Richard A
2013-08-20
A fully integrated multilayer microfluidic chemical analyzer for automated sample processing and labeling, as well as analysis using capillary zone electrophoresis is developed and characterized. Using lifting gate microfluidic control valve technology, a microfluidic automaton consisting of a two-dimensional microvalve cellular array is fabricated with soft lithography in a format that enables facile integration with a microfluidic capillary electrophoresis device. The programmable sample processor performs precise mixing, metering, and routing operations that can be combined to achieve automation of complex and diverse assay protocols. Sample labeling protocols for amino acid, aldehyde/ketone and carboxylic acid analysis are performed automatically followed by automated transfer and analysis by the integrated microfluidic capillary electrophoresis chip. Equivalent performance to off-chip sample processing is demonstrated for each compound class; the automated analysis resulted in a limit of detection of ~16 nM for amino acids. Our microfluidic automaton provides a fully automated, portable microfluidic analysis system capable of autonomous analysis of diverse compound classes in challenging environments.
Automated detection of a prostate Ni-Ti stent in electronic portal images.
Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer
2006-12-01
Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from the daily shift of the prostate position. A recently proposed method of visualizing the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm exploits the fact that the Ni-Ti stent has a cylindrical shape with a fixed diameter. It combines enhancement of lines with a grayscale morphology operation that looks for enhanced pixels separated by a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm, which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate and good accuracy, and has the potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins.
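A minimal sketch of the idea described above, assuming a ridge filter for line enhancement and a two-point grayscale-erosion footprint spanning the expected stent diameter; the function names, the use of skimage's sato filter, and the fixed horizontal footprint are illustrative assumptions rather than the authors' implementation.

    # Hedged sketch: enhance line-like structures, then apply a grayscale morphology step
    # whose footprint is two points one stent diameter apart, so the response is high only
    # where both stent walls are enhanced. Parameter values are illustrative.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sato

    def stent_response(portal_image, diameter_px):
        # Assumes the stent appears bright after preprocessing; flip black_ridges otherwise.
        lines = sato(portal_image, sigmas=range(1, 4), black_ridges=False)
        footprint = np.zeros((1, diameter_px + 1), dtype=bool)
        footprint[0, 0] = footprint[0, -1] = True  # two pixels, one diameter apart
        # Grayscale erosion keeps the minimum of the pixel pair, which is large only if
        # both walls of the cylindrical stent are enhanced (rotation handling omitted).
        return ndi.grey_erosion(lines, footprint=footprint)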
Minor, K S; Willits, J A; Marggraf, M P; Jones, M N; Lysaker, P H
2018-04-25
Conveying information cohesively is an essential element of communication that is disrupted in schizophrenia. These disruptions are typically expressed through disorganized symptoms, which have been linked to neurocognitive, social cognitive, and metacognitive deficits. Automated analysis can objectively assess disorganization within sentences, between sentences, and across paragraphs by comparing explicit communication to a large text corpus. Little work in schizophrenia has tested: (1) links between disorganized symptoms measured via automated analysis and neurocognition, social cognition, or metacognition; and (2) whether automated analysis explains incremental variance in cognitive processes beyond clinician-rated scales. Disorganization was measured in schizophrenia (n = 81) with Coh-Metrix 3.0, an automated program that calculates basic and complex language indices. Trained staff also assessed neurocognition, social cognition, metacognition, and clinician-rated disorganization. Findings showed that all three cognitive processes were significantly associated with at least one automated index of disorganization. When automated analysis was compared with a clinician-rated scale, it accounted for significant variance in neurocognition and metacognition beyond the clinician-rated measure. When combined, these two methods explained 28-31% of the variance in neurocognition, social cognition, and metacognition. This study illustrated how automated analysis can highlight the specific role of disorganization in neurocognition, social cognition, and metacognition. Generally, those with poor cognition also displayed more disorganization in their speech, making it difficult for listeners to process the essential information needed to tie the speaker's ideas together. Our findings showcase how implementing a mixed-methods approach in schizophrenia can explain substantial variance in cognitive processes.
Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images.
Salvi, Massimo; Molinari, Filippo
2018-06-20
Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size and morphology. Most of the proposed algorithms for the automated segmentation of nuclei were designed for specific organs or tissues. The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software tools designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independent of the number of nuclei to be detected (always above 1000), indicating the efficiency of the proposed technique. To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performance in line with or better than that of state-of-the-art algorithms optimized for single tissues.
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration of numerical models has been fundamental since the beginning of all types of hydro-system modeling, in order to approximate the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity. The uncertainty of the most suitable methods is also analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model that represents a 180-degree bend channel. The hydro-morphological numerical model shows a high level of ill-posedness in the mathematical problem. The minimization of the objective function by different candidate optimization methods indicates a failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others reveal only partial convergence, such as Nelder-Mead, Polak-Ribiere, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient. Further methods yield parameter solutions outside the physical limits, such as Levenberg-Marquardt and LeastSquareRoot. Moreover, there is a significant computational demand for global optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach yield the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
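The comparison described above can be mimicked, under strong simplifying assumptions, by minimizing a synthetic least-squares objective with several of the named scipy.optimize methods; the forward model, parameter values and noise level below are placeholders, not the study's hydro-morphological model.

    # Illustrative sketch (not the study's model): compare scipy.optimize methods on a
    # synthetic least-squares objective built from "synthetic measurements".
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    true_params = np.array([0.03, 1.5])              # stand-in calibration parameters
    x = np.linspace(0.0, 1.0, 50)

    def forward_model(params):
        return params[0] * np.exp(params[1] * x)     # placeholder forward model

    synthetic_obs = forward_model(true_params) + rng.normal(0, 0.001, x.size)

    def objective(params):
        return np.sum((forward_model(params) - synthetic_obs) ** 2)

    x0 = np.array([0.1, 1.0])
    for method in ["Nelder-Mead", "L-BFGS-B", "SLSQP", "TNC"]:
        res = minimize(objective, x0, method=method)
        print(f"{method:12s} success={res.success} params={res.x}")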
Evaluation of an automated karyotyping system for chromosome aberration analysis
NASA Technical Reports Server (NTRS)
Prichard, Howard M.
1987-01-01
Chromosome aberration analysis is a promising complement to conventional radiation dosimetry, particularly in the complex radiation fields encountered in the space environment. The capabilities of a recently developed automated karyotyping system were evaluated, both to determine its current capabilities and limitations and to suggest areas where future development should be emphasized. Cells exposed to radiomimetic chemicals and to photon and particulate radiation were evaluated by manual inspection and by automated karyotyping. It was demonstrated that the evaluated programs were appropriate for image digitization, storage, and transmission. However, automated and semi-automated scoring techniques must be advanced significantly if in-flight chromosome aberration analysis is to be practical. A degree of artificial intelligence may be necessary to realize this goal.
PLACE: an open-source python package for laboratory automation, control, and experimentation.
Johnson, Jami L; Tom Wörden, Henrik; van Wijk, Kasper
2015-02-01
In modern laboratories, software can drive the full experimental process from data acquisition to storage, processing, and analysis. The automation of laboratory data acquisition is an important consideration for every laboratory. When implementing a laboratory automation scheme, important parameters include its reliability, time to implement, adaptability, and compatibility with software used at other stages of experimentation. In this article, we present an open-source, flexible, and extensible Python package for Laboratory Automation, Control, and Experimentation (PLACE). The package uses modular organization and clear design principles; therefore, it can be easily customized or expanded to meet the needs of diverse laboratories. We discuss the organization of PLACE, data-handling considerations, and then present an example using PLACE for laser-ultrasound experiments. Finally, we demonstrate the seamless transition to post-processing and analysis with Python through the development of an analysis module for data produced by PLACE automation. © 2014 Society for Laboratory Automation and Screening.
2011-01-01
Background: Fluorescence in situ hybridization (FISH) is a very accurate method for measuring HER2 gene copies, as a sign of potential breast cancer. This method requires small tissue samples and has a high sensitivity for detecting abnormalities in a histological section. By using multiple colors, the method allows the detection of multiple targets simultaneously. The target parts in the cells become visible as colored dots. The HER-2 probes are visible as orange spots under a fluorescent microscope, while probes for centromere 17 (CEP-17), the chromosome on which the HER-2/neu gene is located, are visible as green spots. Methods: The conventional analysis involves scoring the ratio of HER-2/neu to CEP-17 dots within each cell nucleus and then averaging the scores over 60 cells. A HER-2/neu to CEP-17 copy-number ratio of 2.0 denotes amplification. Several methods have been proposed for the detection and automated evaluation (dot counting) of FISH signals. In this paper, a combined method based on mathematical morphology (MM) and inverse multifractal (IMF) analysis is suggested. A similar method was recently applied to the detection of microcalcifications in digital mammograms and was very successful. Results: The combined method, using MM top-hat and bottom-hat filters together with IMF analysis, was applied to FISH images from the Molecular Biology Lab, Department of Pathology, Wielkopolska Cancer Center, Poznan. Initial results indicate that this method can be applied to FISH images for the evaluation of HER2/neu status. Conclusions: Mathematical morphology and the multifractal approach are used for colored-dot detection and counting in FISH images. Initial results derived from clinical cases are promising. Note that the overlapping of colored dots, particularly red/orange dots, needs additional improvement in post-processing. PMID:21489192
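A minimal sketch of the morphological dot-detection step, assuming a white top-hat transform to highlight small bright FISH signals followed by Otsu thresholding and connected-component counting; the spot radius and the omission of the inverse multifractal stage are simplifications for illustration.

    # Hedged sketch of morphological dot detection: a white top-hat highlights small
    # bright spots (FISH signals); thresholding and labelling then count them.
    import numpy as np
    from skimage.morphology import white_tophat, disk
    from skimage.filters import threshold_otsu
    from skimage.measure import label

    def count_dots(channel, spot_radius=3):
        """channel: 2D array of one fluorescence channel (e.g. HER-2 orange or CEP-17 green)."""
        spots = white_tophat(channel, disk(spot_radius))
        mask = spots > threshold_otsu(spots)
        return int(label(mask).max())      # number of connected bright spots

    # A HER-2/neu status score could then be count_dots(her2) / count_dots(cep17),
    # averaged over the scored nuclei (nucleus segmentation not shown here).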
AUTOMATED SOLID PHASE EXTRACTION GC/MS FOR ANALYSIS OF SEMIVOLATILES IN WATER AND SEDIMENTS
Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line sampl...
Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey
2015-01-01
The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination. PMID:26468288
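For illustration only, the sketch below shows a basic automated peak-picking and area-integration step of the kind such software must perform, using scipy.signal.find_peaks; the thresholds, window width and integration scheme are assumptions and do not reproduce the authors' software.

    # Hedged sketch of chromatographic peak identification and crude quantification.
    import numpy as np
    from scipy.signal import find_peaks

    def quantify_peaks(time, absorbance, min_height=0.01, min_distance_s=5.0):
        """time, absorbance: 1D arrays of a UPLC chromatogram trace."""
        dt = float(np.median(np.diff(time)))
        distance = max(1, int(min_distance_s / dt))
        peaks, _ = find_peaks(absorbance, height=min_height, distance=distance)
        half = max(1, distance // 2)
        # Integrate a fixed window around each apex as a rough peak-area estimate.
        areas = [np.trapz(absorbance[max(p - half, 0):p + half],
                          time[max(p - half, 0):p + half]) for p in peaks]
        return peaks, np.array(areas)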
Paproki, A; Engstrom, C; Chandra, S S; Neubert, A; Fripp, J; Crozier, S
2014-09-01
To validate an automatic scheme for the segmentation and quantitative analysis of the medial meniscus (MM) and lateral meniscus (LM) in magnetic resonance (MR) images of the knee. We analysed sagittal water-excited double-echo steady-state MR images of the knee from a subset of the Osteoarthritis Initiative (OAI) cohort. The MM and LM were automatically segmented in the MR images based on a deformable model approach. Quantitative parameters including volume, subluxation and tibial-coverage were automatically calculated for comparison (Wilcoxon tests) between knees with variable radiographic osteoarthritis (rOA), medial and lateral joint space narrowing (mJSN, lJSN) and pain. Automatic segmentations and estimated parameters were evaluated for accuracy using manual delineations of the menisci in 88 pathological knee MR examinations at the baseline and 12-month time-points. The median (95% confidence interval, CI) Dice similarity index (DSI), defined as 2 × |Auto ∩ Manual| / (|Auto| + |Manual|) × 100, between manual and automated segmentations for the MM and LM volumes was 78.3% (75.0-78.7) and 83.9% (82.1-83.9) at baseline and 75.3% (72.8-76.9) and 83.0% (81.6-83.5) at 12 months. Pearson coefficients between automatic and manual segmentation parameters ranged from r = 0.70 to r = 0.92. MM in rOA/mJSN knees had significantly greater subluxation and smaller tibial-coverage than no-rOA/no-mJSN knees. LM in rOA knees had significantly greater volumes and tibial-coverage than no-rOA knees. Our automated method successfully segmented the menisci in normal and osteoarthritic knee MR images and detected meaningful morphological differences with respect to rOA and joint space narrowing (JSN). Our approach will facilitate analyses of the menisci in prospective MR cohorts such as the OAI for investigations into pathophysiological changes occurring in early osteoarthritis (OA) development. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
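The Dice similarity index quoted above can be computed from two binary segmentation masks as in the short sketch below (a generic implementation, not the study's code).

    # Minimal sketch of the Dice similarity index, in percent, on two binary masks.
    import numpy as np

    def dice_similarity_index(auto_mask, manual_mask):
        auto = np.asarray(auto_mask, dtype=bool)
        manual = np.asarray(manual_mask, dtype=bool)
        intersection = np.logical_and(auto, manual).sum()
        return 2.0 * intersection / (auto.sum() + manual.sum()) * 100.0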
Increased cerebellar gray matter volume in head chefs.
Cerasa, Antonio; Sarica, Alessia; Martino, Iolanda; Fabbricatore, Carmelo; Tomaiuolo, Francesco; Rocca, Federico; Caracciolo, Manuela; Quattrone, Aldo
2017-01-01
Chefs exert expert motor and cognitive performances on a daily basis. Neuroimaging has clearly shown that long-term skill learning (e.g., in athletes, musicians, chess players or sommeliers) induces plastic changes in the brain, thus enabling tasks to be performed faster and more accurately. How a chef's expertise is embodied in a specific neural network has never been investigated. Eleven Italian head chefs with long-term brigade management expertise and 11 demographically- and psychologically-matched non-experts underwent morphological evaluations. Voxel-based analysis performed with SUIT, as well as automated volumetric measurement assessed with Freesurfer, revealed increased gray matter volume in the cerebellum in chefs compared to non-experts. The most significant changes were detected in the anterior vermis and the posterior cerebellar lobule. The size of the brigade staff and performance in the Tower of London test correlated with these specific gray matter increases, respectively. We found that chefs are characterized by an anatomical variability involving the cerebellum. This confirms the role of this region in the development of similar expert brains characterized by learning dexterous skills, such as those of pianists, rock climbers and basketball players. However, the nature of the cellular events underlying the detected morphological differences remains an open question.
Givens, Robert M; Mesner, Larry D; Hamlin, Joyce L; Buck, Michael J; Huberman, Joel A
2011-11-16
Studies of nuclear function in many organisms, especially those with tough cell walls, are limited by lack of availability of simple, economical methods for large-scale preparation of clean, undamaged nuclei. Here we present a useful method for nuclear isolation from the important model organism, the fission yeast, Schizosaccharomyces pombe. To preserve in vivo molecular configurations, we flash-froze the yeast cells in liquid nitrogen. Then we broke their tough cell walls, without damaging their nuclei, by grinding in a precision-controlled motorized mortar-and-pestle apparatus. The cryo-ground cells were resuspended and thawed in a buffer designed to preserve nuclear morphology, and the nuclei were enriched by differential centrifugation. The washed nuclei were free from contaminating nucleases and have proven well-suited as starting material for genome-wide chromatin analysis and for preparation of fragile DNA replication intermediates. We have developed a simple, reproducible, economical procedure for large-scale preparation of endogenous-nuclease-free, morphologically intact nuclei from fission yeast. With appropriate modifications, this procedure may well prove useful for isolation of nuclei from other organisms with, or without, tough cell walls.
Castellano, Maila; Conzatti, Lucia; Turturro, Antonio; Costa, Giovanna; Busca, Guido
2007-05-03
A good dispersion of silica into elastomers, typically used in tire tread production, is obtained by grafting the silica with multifunctional organosilanes. In this study, the influence of the chemical structure of the grafting agent (a triethoxysilane (TES), octadecyltriethoxysilane (ODTES), and an ODTES/bis(triethoxysilylpropyl)tetrasulfane (TESPT) mixture) was investigated by inverse gas chromatography (IGC) at infinite dilution. Thermodynamic results indicate a higher polarity of the silica surface modified with TES as compared to that of the unmodified silica, due to new OH groups deriving from the hydrolysis of the ethoxy groups of the silane; the long hydrocarbon substituent of ODTES lies on the surface of the silica and reduces the dispersive component of the silica surface tension. A comparison with silica modified with TESPT is discussed. An accurate morphological investigation by transmission electron microscopy (TEM) and automated image analysis (AIA) was carried out on aggregates of silica dispersed in an SBR compound loaded with 35 phr (parts per hundred rubber) of untreated and TESPT-treated silica. Morphological descriptors such as the projected area/perimeter ratio (A/P) and the roundness (P²/(4πA)) provided direct and quantitative indications of the distribution of the filler in the rubber matrix.
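Assuming the silica aggregates have already been segmented into a binary mask, the two descriptors named above could be computed per aggregate as sketched below; the use of scikit-image regionprops is an illustrative choice, not the AIA software used in the study.

    # Hedged sketch of the two shape descriptors, computed per aggregate from a binary mask.
    import numpy as np
    from skimage.measure import label, regionprops

    def shape_descriptors(binary_mask):
        results = []
        for region in regionprops(label(binary_mask)):
            A, P = region.area, region.perimeter
            if P > 0:
                results.append({
                    "area_perimeter_ratio": A / P,            # A/P
                    "roundness": P ** 2 / (4.0 * np.pi * A),  # P^2 / (4*pi*A), 1 for a circle
                })
        return results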
Image-based query-by-example for big databases of galaxy images
NASA Astrophysics Data System (ADS)
Shamir, Lior; Kuminski, Evan
2017-01-01
Very large astronomical databases containing millions or even billions of galaxy images have become increasingly important tools in astronomy research. However, in many cases their very large size makes it difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such a task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to search these databases manually to identify such galaxies. Here we describe a computer vision and pattern recognition methodology that receives a galaxy image as an input, automatically searches a large dataset of galaxies, and returns a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
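A generic sketch of the query-by-example idea, assuming some per-image feature vector and a nearest-neighbour search; the toy feature extractor below is a placeholder for the paper's morphological feature set.

    # Illustrative feature-based nearest-neighbour search, not the paper's feature scheme.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def simple_features(image):
        img = np.asarray(image, dtype=float)
        return np.array([img.mean(), img.std(),
                         np.abs(np.fft.fft2(img)).mean()])   # crude texture proxy

    def find_similar(query_image, database_images, k=10):
        feats = np.array([simple_features(im) for im in database_images])
        nn = NearestNeighbors(n_neighbors=k).fit(feats)
        _, idx = nn.kneighbors(simple_features(query_image)[None, :])
        return idx[0]    # indices of the k most visually similar galaxies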
Tian, Cancan; Zheng, Xiujuan; Han, Yuan; Sun, Xiaoguang; Chen, Kewei; Huang, Qiu
2013-11-01
This work presents a novel semi-automated renal region-of-interest (ROI) determination method that is user friendly, time saving, and yet provides a robust glomerular filtration rate (GFR) estimation highly consistent with the reference method. We reviewed data from 57 patients who underwent (99m)Tc-diethylenetriaminepentaacetic acid renal scintigraphy and were diagnosed with abnormal renal function. The renal and background ROIs were delineated by the proposed multi-step, semi-automated method, which integrates temporal/morphologic information via visual inspection and computer-aided calculations. The total GFR was estimated using the proposed method (sGFR) performed by 2 junior clinicians (A and B) with 1 and 3 years of experience, respectively (sGFR_a, sGFR_b), and compared with the reference total GFR (rGFR) estimated by a senior clinician with 20 years of experience who manually delineated the kidney and background ROIs. All GFR calculations herein were conducted using the Gates method. Data from 10 patients with unilateral or non-functioning kidneys were excluded from the analysis. For the remaining patients, sGFR correlated well with rGFR (r(s/rGFR_a) = 0.957, P < 0.001 and r(s/rGFR_b) = 0.951, P < 0.001) and sGFR_a correlated well with sGFR_b (r(a/b) = 0.997, P < 0.001). Moreover, the Bland-Altman plots for sGFR_a and sGFR_b confirm the high reproducibility of the proposed method between different operators. Finally, the proposed procedure is almost 3 times faster than the routinely used procedure in clinical practice. The results suggest that this method is easy to use, highly reproducible, and accurate in measuring the GFR of patients with low renal function. The method is being further extended to a fully automated procedure.
Lian, Yanyun; Song, Zhijian
2014-01-01
Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods is sufficiently robust, reliable, and efficient for clinical application. We therefore developed an accurate, automated tumor segmentation method that provides reproducible, objective results close to those of manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, two windows, one in the left half and one in the right half of the brain image, were slid simultaneously pixel by pixel, first vertically and then horizontally, while the correlation coefficient between them was computed. The window pair with the minimal correlation coefficient was retained; within this pair, the window with the higher average gray value marks the tumor location, and its pixel with the highest gray value serves as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square of 10-pixel side length centered on the locating point, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average rate of correct localization was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor localization.
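A simplified, hedged sketch of the symmetry step described above: each window in the left half is compared with its mirrored counterpart, and the position of lowest correlation is returned. The window size, step, and the omission of normalization, rotation, denoising, thresholding and morphology are assumptions made for brevity.

    # Hedged sketch: flag the left-half window whose mirrored right-half counterpart is
    # least correlated with it, as a proxy for the asymmetry introduced by a tumor.
    import numpy as np

    def locate_asymmetry(slice2d, win=32, step=8):
        img = np.asarray(slice2d, dtype=float)
        mirrored = img[:, ::-1]                    # right half reflected onto the left
        h, w = img.shape
        best = (np.inf, None)
        for r in range(0, h - win, step):
            for c in range(0, w // 2 - win, step):
                a = img[r:r + win, c:c + win].ravel()
                b = mirrored[r:r + win, c:c + win].ravel()
                if a.std() > 0 and b.std() > 0:
                    corr = np.corrcoef(a, b)[0, 1]
                    if corr < best[0]:
                        best = (corr, (r, c))
        return best                                # (minimal correlation, window position)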
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J
2014-01-01
Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments that score animals based on parameters ranked on a narrow scale of severity. Automated open field analysis using a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large-cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate that use of automated open field analysis software may provide a more sensitive assessment than the traditional Bederson and Garcia scales.
Automated detection of retinal layers from OCT spectral-domain images of healthy eyes
NASA Astrophysics Data System (ADS)
Giovinco, Gaspare; Savastano, Maria Cristina; Ventre, Salvatore; Tamburrino, Antonello
2015-12-01
Optical coherence tomography (OCT) has become one of the most relevant diagnostic tools for retinal diseases. Besides being a non-invasive technique, one distinguished feature is its unique capability of providing (in vivo) cross-sectional view of the retina. Specifically, OCT images show the retinal layers. From the clinical point of view, the identification of the retinal layers opens new perspectives to study the correlation between morphological and functional aspects of the retinal tissue. The main contribution of this paper is a new method/algorithm for the automated segmentation of cross-sectional images of the retina of healthy eyes, obtained by means of spectral-domain optical coherence tomography (SD-OCT). Specifically, the proposed segmentation algorithm provides the automated detection of different retinal layers. Tests on experimental SD-OCT scans performed by three different instruments/manufacturers have been successfully carried out and compared to a manual segmentation made by an independent ophthalmologist, showing the generality and the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Klemt, Christian; Modat, Marc; Pichat, Jonas; Cardoso, M. J.; Henckel, Joahnn; Hart, Alister; Ourselin, Sebastien
2015-03-01
Metal-on-metal (MoM) hip arthroplasties have been utilised over the last 15 years to restore hip function for 1.5 million patients worldwide. Although widely used, this type of hip arthroplasty releases metal wear debris, which leads to muscle atrophy. The degree of muscle wastage differs across patients, ranging from mild to severe. The long-term outcomes for patients with MoM hip arthroplasty worsen with increasing degrees of muscle atrophy, highlighting the need to automatically segment pathological muscles. The automated segmentation of pathological soft tissues is challenging, as these lack distinct boundaries and differ morphologically across subjects. As a result, no method reported in the literature has been successfully applied to automatically segment pathological muscles. We propose the first automated framework to delineate severely atrophied muscles by applying a novel automated segmentation propagation framework to patients with MoM hip arthroplasty. The proposed algorithm was used to automatically quantify muscle wastage in these patients.
Automated Sneak Circuit Analysis Technique
1990-06-01
Automated Sneak Circuit Analysis Technique. RADC, June 1990, Systems Reliability & Engineering Division, Rome Air Development Center. Among the tool's input requirements, the terminals of all in-circuit voltage sources (e.g., batteries) must be labeled using the OrCAD/SDT module port facility.
Effectiveness of Automated Chinese Sentence Scoring with Latent Semantic Analysis
ERIC Educational Resources Information Center
Liao, Chen-Huei; Kuo, Bor-Chen; Pai, Kai-Chih
2012-01-01
Automated scoring by means of Latent Semantic Analysis (LSA) has been introduced lately to improve the traditional human scoring system. The purposes of the present study were to develop a LSA-based assessment system to evaluate children's Chinese sentence construction skills and to examine the effectiveness of LSA-based automated scoring function…
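The core of an LSA-based scorer can be sketched as below: a term-sentence matrix is reduced with truncated SVD and a response is scored by cosine similarity to a reference in the latent space; the toy corpus, the two-dimensional latent space and the scoring rule are illustrative assumptions, not the study's Chinese-language system.

    # Hedged sketch of LSA-style scoring with scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = ["the cat sat on the mat", "a dog chased the cat",
              "children build sentences from words", "teachers score the sentences"]
    reference = "children write sentences with words"   # model answer (placeholder)
    response = "the words make a sentence"              # student response (placeholder)

    vectorizer = CountVectorizer().fit(corpus + [reference, response])
    X = vectorizer.transform(corpus)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit(X)   # the LSA step

    ref_vec = lsa.transform(vectorizer.transform([reference]))
    resp_vec = lsa.transform(vectorizer.transform([response]))
    print("LSA similarity score:", cosine_similarity(ref_vec, resp_vec)[0, 0])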
AUTOMATED LITERATURE PROCESSING HANDLING AND ANALYSIS SYSTEM--FIRST GENERATION.
ERIC Educational Resources Information Center
Redstone Scientific Information Center, Redstone Arsenal, AL.
THE REPORT PRESENTS A SUMMARY OF THE DEVELOPMENT AND THE CHARACTERISTICS OF THE FIRST GENERATION OF THE AUTOMATED LITERATURE PROCESSING, HANDLING AND ANALYSIS (ALPHA-1) SYSTEM. DESCRIPTIONS OF THE COMPUTER TECHNOLOGY OF ALPHA-1 AND THE USE OF THIS AUTOMATED LIBRARY TECHNIQUE ARE PRESENTED. EACH OF THE SUBSYSTEMS AND MODULES NOW IN OPERATION ARE…
Haass-Koffler, Carolina L; Naeemuddin, Mohammad; Bartlett, Selena E
2012-08-31
The most common software analysis tools available for measuring fluorescence images handle two-dimensional (2D) data, rely on manual settings for inclusion and exclusion of data points, and use computer-aided pattern recognition to support the interpretation of the analysis and its findings. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology, even in complex tissue sections. Despite these advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. The Imaris (Andor Technology, Belfast, Northern Ireland) software provides the MeasurementPro feature, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements, for measuring the line distance between two objects, or for creating a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (a tree-like structure). The module has been ingeniously utilized to make morphological measurements of non-neuronal cells; however, its output describes an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell, a scientific project with the Eidgenössische Technische Hochschule developed to calculate the relationship between cells and organelles. While that software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it ideally builds cell surfaces without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.).
These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method allows researchers who have extensive expertise in biological systems, but little familiarity with computer applications, to quantify morphological changes in cell dynamics.
Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S
2016-01-01
High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
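A minimal sketch of the supervised setup described above, assuming a feature vector of image-quality metrics per volume and binary pass/fail labels; the random placeholder data, the RBF kernel and the feature count are assumptions, not the study's in-house features.

    # Hedged sketch: an SVM trained on per-volume quality features to predict usability.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 12))    # placeholder global and ROI quality metrics
    labels = (features[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # 1 = usable scan

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, features, labels, cv=5)
    print("cross-validated accuracy:", scores.mean())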
NASA Astrophysics Data System (ADS)
Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko
2017-06-01
The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation, nuclei heterogeneity (such as poor contrast and inconsistent staining), cell variation, and overlapping cells. In this paper, we propose a watershed-based method capable of segmenting the nuclei of a variety of cells in pleural fluid cytology smear images. First, the original image is converted to grayscale and enhanced by intensity adjustment and histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding. Undesirable artifacts are eliminated using morphological operations. Finally, a distance-transform-based watershed method is applied to separate touching and overlapping cell nuclei. The proposed method was tested on 25 Papanicolaou (Pap)-stained pleural fluid images. The accuracy of the proposed method is 92%. The method is relatively simple, and the results are very promising.
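A compact sketch of the described pipeline (grayscale conversion, histogram equalization, Otsu thresholding, morphological clean-up, distance-transform watershed) using scikit-image; the assumption that nuclei are darker than the background, the minimum object size and the peak distance are illustrative parameter choices.

    # Hedged sketch of a watershed-based nuclei segmentation pipeline.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.color import rgb2gray
    from skimage import exposure, filters, morphology, segmentation, feature

    def segment_nuclei(rgb_image):
        gray = exposure.equalize_hist(rgb2gray(rgb_image))
        # Assume nuclei are darker than background after staining; invert if needed.
        mask = gray < filters.threshold_otsu(gray)
        mask = morphology.remove_small_objects(morphology.binary_opening(mask), 64)
        distance = ndi.distance_transform_edt(mask)
        peaks = feature.peak_local_max(distance, min_distance=10, labels=mask)
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = segmentation.watershed(-distance, markers, mask=mask)
        return labels      # labelled nuclei, with touching/overlapping cells separated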
Villa-Uriol, M. C.; Berti, G.; Hose, D. R.; Marzo, A.; Chiarini, A.; Penrose, J.; Pozo, J.; Schmidt, J. G.; Singh, P.; Lycett, R.; Larrabide, I.; Frangi, A. F.
2011-01-01
Cerebral aneurysms are a multi-factorial disease with severe consequences. A core part of the European project @neurIST was the physical characterization of aneurysms to find candidate risk factors associated with aneurysm rupture. The project investigated measures based on morphological, haemodynamic and aneurysm wall structure analyses for more than 300 cases of ruptured and unruptured aneurysms, extracting descriptors suitable for statistical studies. This paper deals with the unique challenges associated with this task, and the implemented solutions. The consistency of results required by the subsequent statistical analyses, given the heterogeneous image data sources and multiple human operators, was met by a highly automated toolchain combined with training. A testimonial of the successful automation is the positive evaluation of the toolchain by over 260 clinicians during various hands-on workshops. The specification of the analyses required thorough investigations of modelling and processing choices, discussed in a detailed analysis protocol. Finally, an abstract data model governing the management of the simulation-related data provides a framework for data provenance and supports future use of data and toolchain. This is achieved by enabling the easy modification of the modelling approaches and solution details through abstract problem descriptions, removing the need of repetition of manual processing work. PMID:22670202
Automated Detection of Atrial Fibrillation Based on Time-Frequency Analysis of Seismocardiograms.
Hurnanen, Tero; Lehtonen, Eero; Tadi, Mojtaba Jafari; Kuusela, Tom; Kiviniemi, Tuomas; Saraste, Antti; Vasankari, Tuija; Airaksinen, Juhani; Koivisto, Tero; Pankaala, Mikko
2017-09-01
In this paper, a novel method to detect atrial fibrillation (AFib) from a seismocardiogram (SCG) is presented. The proposed method is based on linear classification of the spectral entropy and a heart rate variability index computed from the SCG. The performance of the developed algorithm is demonstrated on data gathered from 13 patients in clinical setting. After motion artifact removal, in total 119 min of AFib data and 126 min of sinus rhythm data were considered for automated AFib detection. No other arrhythmias were considered in this study. The proposed algorithm requires no direct heartbeat peak detection from the SCG data, which makes it tolerant against interpersonal variations in the SCG morphology, and noise. Furthermore, the proposed method relies solely on the SCG and needs no complementary electrocardiography to be functional. For the considered data, the detection method performs well even on relatively low quality SCG signals. Using a majority voting scheme that takes five randomly selected segments from a signal and classifies these segments using the proposed algorithm, we obtained an average true positive rate of [Formula: see text] and an average true negative rate of [Formula: see text] for detecting AFib in leave-one-out cross-validation. This paper facilitates adoption of microelectromechanical sensor based heart monitoring devices for arrhythmia detection.
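A hedged sketch of the feature-plus-linear-classifier idea: spectral entropy is computed from a Welch power spectrum and combined with a heart-rate-variability index in a linear model; the sampling rate, the logistic-regression choice and the externally supplied HRV index are assumptions, since the paper's exact feature definitions and classifier are not reproduced here.

    # Hedged sketch: spectral entropy + HRV index fed to a linear classifier.
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression

    def spectral_entropy(segment, fs):
        # Normalised Shannon entropy of the Welch power spectral density.
        f, psd = welch(segment, fs=fs, nperseg=min(len(segment), 1024))
        p = psd / psd.sum()
        return -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))

    def train_linear_detector(segments, hrv_indices, labels, fs=800.0):
        # Each SCG segment contributes [spectral entropy, HRV index]; a linear model
        # (logistic regression here) then separates AFib from sinus rhythm.
        X = np.array([[spectral_entropy(s, fs), h] for s, h in zip(segments, hrv_indices)])
        return LogisticRegression().fit(X, labels)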
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Benz, Michaela; Gryanik, Alexander; Tannich, Egbert; Wegner, Christine; Stamminger, Marc; Wittenberg, Thomas; Münzenmayer, Chrisitan
2017-03-01
Malaria is one of the world's most common and serious tropical diseases, caused by parasites of the genus Plasmodium that are transmitted by Anopheles mosquitoes. Various parts of Asia and Latin America are affected, but the highest malaria incidence is found in Sub-Saharan Africa. Standard diagnosis of malaria comprises microscopic detection of parasites in stained thick and thin blood films. As the process of slide reading under the microscope is an error-prone and tedious task, we are developing computer-assisted microscopy systems to support the detection and diagnosis of malaria. In this paper we focus on a deep learning (DL) approach for the detection of plasmodia and evaluate the proposed approach in comparison with two reference approaches. The proposed classification schemes have been evaluated with more than 180,000 automatically detected and manually classified plasmodia candidate objects from so-called thick smears. Automated solutions for the morphological analysis of malaria blood films could apply such a classifier to detect plasmodia in the highly complex image data of thick smears and thereby shorten the examination time. With such a system, the diagnosis of malaria infections should become less tedious, more reliable and reproducible, and thus more objective. Better quality assurance, improved documentation and global data availability are additional benefits.
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as the baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scanning with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized on the 3D visualization workstation and in the software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as the baseline, the accuracy of 3D visualization based on automated and semi-automated segmentation was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in the anatomical data measured with the Syngo workstation and the Mimics software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Banks, Victoria A; Stanton, Neville A
2016-11-01
To the average driver, the concept of automation in driving implies that they can become completely 'hands and feet free'. This is a common misconception, however, as shown through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation, which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and the application of quantitative network metrics, this agent-based modelling paper shows how the role of the driver remains an integral part of the driving system, implicating the need for designers to ensure drivers are provided with the tools necessary to remain actively in-the-loop despite being given increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command and control system of automation using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.
Analysis of trust in autonomy for convoy operations
NASA Astrophysics Data System (ADS)
Gremillion, Gregory M.; Metcalfe, Jason S.; Marathe, Amar R.; Paul, Victor J.; Christensen, James; Drnec, Kim; Haynes, Benjamin; Atwater, Corey
2016-05-01
With the growing use of automation in civilian and military contexts that engage cooperatively with humans, the operator's level of trust in the automated system is a major factor in determining the efficacy of human-autonomy teams. Suboptimal levels of human trust in autonomy (TiA) can be detrimental to joint team performance. This mis-calibrated trust can manifest in several ways, such as distrust and complete disuse of the autonomy, or complacency, which results in an unsupervised autonomous system. This work investigates human behaviors that may reflect TiA in the context of an automated driving task, with the goal of improving team performance. Subjects performed a simulated leader-follower driving task with an automated driving assistant. The subjects could choose to engage an automated lane-keeping and active cruise control system of varying performance levels. Analysis of the experimental data was performed to identify contextual features of the simulation environment that correlated with instances of automation engagement and disengagement. Furthermore, behaviors that potentially indicate inappropriate TiA levels were identified in the subject trials using estimates of momentary risk and agent performance as functions of these contextual features. Inter-subject and intra-subject trends in automation usage and performance were also identified. This analysis indicated that for poorer-performing automation, TiA decreases with time, while higher-performing automation induces less drift toward diminishing usage and, in some cases, increases in TiA. Subjects' use of automation was also found to be largely influenced by course features.
Human performance consequences of stages and levels of automation: an integrated meta-analysis.
Onnasch, Linda; Wickens, Christopher D; Li, Huiyang; Manzey, Dietrich
2014-05-01
We investigated how automation-induced human performance consequences depended on the degree of automation (DOA). Function allocation between human and automation can be represented in terms of the stages and levels taxonomy proposed by Parasuraman, Sheridan, and Wickens. Higher DOAs are achieved both by later stages and higher levels within stages. A meta-analysis based on data of 18 experiments examines the mediating effects of DOA on routine system performance, performance when the automation fails, workload, and situation awareness (SA). The effects of DOA on these measures are summarized by level of statistical significance. We found (a) a clear automation benefit for routine system performance with increasing DOA, (b) a similar but weaker pattern for workload when automation functioned properly, and (c) a negative impact of higher DOA on failure system performance and SA. Most interesting was the finding that negative consequences of automation seem to be most likely when DOA moved across a critical boundary, which was identified between automation supporting information analysis and automation supporting action selection. Results support the proposed cost-benefit trade-off with regard to DOA. It seems that routine performance and workload on one hand, and the potential loss of SA and manual skills on the other hand, directly trade off and that appropriate function allocation can serve only one of the two aspects. Findings contribute to the body of research on adequate function allocation by providing an overall picture through quantitatively combining data from a variety of studies across varying domains.
1982-01-27
Partially recoverable report excerpt: contents include visible-channel data handling; Earth location, colocation, and normalization; image analysis (interactive capabilities and examples); and automated cloud analysis. The text notes that images were processed on the Man-computer Interactive Data Access System (McIDAS) before image analysis and algorithm development were done, and that Earth location is an automated procedure.
NASA Astrophysics Data System (ADS)
Kelvin, Lee Steven
This thesis explores the relation between galaxy structure, morphology and stellar mass. In the first part I present single-Sersic two-dimensional model fits to 167,600 galaxies modelled independently in the ugrizYJHK bandpasses using reprocessed Sloan Digital Sky Survey Data Release Seven (SDSS DR7) and UKIRT Infrared Deep Sky Survey Large Area Survey (UKIDSS LAS) imaging data available via the Galaxy and Mass Assembly (GAMA) data base. In order to facilitate this study, we developed Structural Investigation of Galaxies via Model Analysis (SIGMA): an automated wrapper around several contemporary astronomy software packages. We confirm that variations in global structural measurements with wavelength arise due to the effects of dust attenuation and stellar population/metallicity gradients within galaxies. In the second part of this thesis we establish a volume-limited sample of 3,845 galaxies in the local Universe and visually classify these galaxies according to their morphological Hubble type. We find that single-Sersic photometry accurately reproduces the morphology luminosity functions predicted in the literature. We employ multi-component Sersic profiling to provide bulge-disk decompositions for this sample, allowing for the luminosity and stellar mass to be divided between the key structural components: spheroids and disks. Grouping the stellar mass in these structures by the evolutionary mechanisms that formed them, we find that hot-mode collapse, merger or otherwise turbulent mechanisms account for ~46% of the total stellar mass budget, cold-mode gas accretion and splashback mechanisms account for ~48% of the total stellar mass budget and secular evolutionary processes for ~6.5% of the total stellar mass budget in the local (z<0.06) Universe.
Automated Image Analysis Corrosion Working Group Update: February 1, 2018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides for the automated image analysis corrosion working group update. The overall goals were to automate the detection and quantification of features in images (faster, more accurate), to define how to do this (obtain data, analyze data), and to focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, and optical plus laser RGB).
Automated Sensitivity Analysis of Interplanetary Trajectories
NASA Technical Reports Server (NTRS)
Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno
2017-01-01
This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.
An automated procedure for detection of IDP's dwellings using VHR satellite imagery
NASA Astrophysics Data System (ADS)
Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre
2011-11-01
This paper presents the results for the estimation of dwellings structures in Al Salam IDP Camp, Southern Darfur, based on Very High Resolution multispectral satellite images obtained by implementation of Mathematical Morphology analysis. A series of image processing procedures, feature extraction methods and textural analysis have been applied in order to provide reliable information about dwellings structures. One of the issues in this context is related to similarity of the spectral response of thatched dwellings' roofs and the surroundings in the IDP camps, where the exploitation of multispectral information is crucial. This study shows the advantage of automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on multi-temporal dataset. The additional data fusion of high-resolution panchromatic band with lower resolution multispectral bands of WorldView-2 satellite has positive influence on results and thereby can be useful for humanitarian aid agency, providing support of decisions and estimations of population especially in situations when frequent revisits by space imaging system are the only possibility of continued monitoring.
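The abstract does not give the exact morphological operators used; purely as an illustration, the sketch below shows one common way to pick out small, bright, roof-sized structures with a white top-hat transform followed by Otsu thresholding and connected-component counting. The structuring-element radius, minimum area, and the input array `pan` are assumptions, not values from the study.

```python
# Hypothetical sketch of morphology-based dwelling detection (not the authors' exact pipeline).
import numpy as np
from skimage.morphology import white_tophat, disk
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_bright_structures(pan, roof_radius_px=4, min_area_px=10):
    """Enhance small bright objects (roof-sized) and count candidate dwellings."""
    # The white top-hat keeps bright features smaller than the structuring element.
    enhanced = white_tophat(pan, disk(roof_radius_px))
    # A global threshold on the enhanced image separates candidates from background.
    mask = enhanced > threshold_otsu(enhanced)
    # Connected components approximate individual dwelling candidates.
    labels = label(mask)
    candidates = [r for r in regionprops(labels) if r.area >= min_area_px]
    return mask, candidates

# Example usage (pan would be a 2-D array from a pansharpened WorldView-2 scene):
# mask, candidates = detect_bright_structures(pan)
# print(len(candidates), "candidate dwelling structures")
```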
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
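The multi-scale dual-model detector itself is not reproduced here; as a hedged stand-in, the sketch below enhances curvilinear structure with a Frangi vesselness filter and feeds crude per-image features to a random forest, one of the two classifier types compared in the paper. The feature definitions, threshold, and labels are illustrative assumptions.

```python
# Illustrative stand-in for curvilinear nerve-fibre detection and classification
# (Frangi filter used here, not the paper's multi-scale dual-model algorithm).
import numpy as np
from skimage.filters import frangi
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fibre_features(img):
    """Crude per-image feature vector from a curvilinear-enhanced CCM image."""
    enhanced = frangi(img, black_ridges=False)   # fibres are bright on a dark background
    fibre_mask = enhanced > enhanced.mean() + 2 * enhanced.std()
    density = fibre_mask.mean()                  # fraction of pixels on fibre-like structure
    strength = enhanced[fibre_mask].mean() if fibre_mask.any() else 0.0
    return [density, strength]

# images: list of 2-D arrays; labels: 1 = neuropathy, 0 = control (hypothetical labels)
# X = np.array([fibre_features(im) for im in images])
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# print(cross_val_score(clf, X, labels, cv=5).mean())
```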
Steinman, Joe; Koletar, Margaret M.; Stefanovic, Bojana; Sled, John G.
2017-01-01
Ex vivo 2-photon fluorescence microscopy (2PFM) with optical clearing enables vascular imaging deep into tissue. However, optical clearing may also produce spherical aberrations if the objective lens is not index-matched to the clearing material, while the perfusion, clearing, and fixation procedure may alter vascular morphology. We compared in vivo and ex vivo 2PFM in mice, focusing on apparent differences in microvascular signal and morphology. Following in vivo imaging, the mice (four total) were perfused with a fluorescent gel and their brains fructose-cleared. The brain regions imaged in vivo were imaged ex vivo. Vessels were segmented in both images using an automated tracing algorithm that accounts for the spatially varying PSF in the ex vivo images. This spatial variance is induced by spherical aberrations caused by imaging fructose-cleared tissue with a water-immersion objective. Alignment of the ex vivo image to the in vivo image through a non-linear warping algorithm enabled comparison of apparent vessel diameter, as well as differences in signal. Shrinkage varied as a function of diameter, with capillaries rendered smaller ex vivo by 13%, while penetrating vessels shrank by 34%. The pial vasculature attenuated the in vivo microvascular signal by 40% at 300 μm below the tissue surface, but this effect was absent ex vivo. On the whole, ex vivo imaging was found to be valuable for studying deep cortical vasculature. PMID:29053753
NASA Astrophysics Data System (ADS)
Richards-Kortum, Rebecca
2016-03-01
Esophageal squamous cell neoplasia (ESCN) is the sixth leading cause of cancer death worldwide. Most deaths due to ESCN occur in developing countries, with highest risk areas in northern China. Lugol's chromoendoscopy (LCE) is the gold-standard for ESCN screening; while the sensitivity of LCE for ESCN is >95%, LCE suffers poor specificity (< 65%) due to false positive findings from inflammatory lesions. High resolution microendoscopy (HRME) uses a low-cost, fiber-optic fluorescence microscope to image morphology of the surface epithelium without need for biopsy. We developed a tablet-interfaced HRME with automated, real-time image analysis. In an in vivo study of 177 patients referred for endoscopy in China, use of the algorithm identified neoplasia with a sensitivity and specificity of 95% and 91% compared to the gold standard of histology.
A model-based approach for automated in vitro cell tracking and chemotaxis analyses.
Debeir, Olivier; Camby, Isabelle; Kiss, Robert; Van Ham, Philippe; Decaestecker, Christine
2004-07-01
Chemotaxis may be studied in two main ways: 1) counting cells passing through an insert (e.g., using Boyden chambers), and 2) directly observing cell cultures (e.g., using Dunn chambers), both in response to stationary concentration gradients. This article promotes the use of Dunn chambers and in vitro cell-tracking, achieved by video microscopy coupled with automatic image analysis software, in order to extract quantitative and qualitative measurements characterizing the response of cells to a diffusible chemical agent. Previously, we set up a videomicroscopy system coupled with image analysis software that was able to compute cell trajectories from in vitro cell cultures. In the present study, we introduce new software that extends the application field of this system to chemotaxis studies. This software is based on an adapted version of the active contour methodology, enabling each cell to be efficiently tracked for hours and resulting in detailed descriptions of individual cell trajectories. The major advantages of this method come from an improved robustness with respect to variability in cell morphologies between different cell lines and dynamical changes in cell shape during cell migration. Moreover, the software includes a very small number of parameters which do not require overly sensitive tuning. Finally, the running time of the software is very short, allowing improved possibilities in acquisition frequency and, consequently, improved descriptions of complex cell trajectories, i.e. trajectories including cell division and cell crossing. We validated this software on several artificial and real cell culture experiments in Dunn chambers, including comparisons with manual (human-controlled) analyses. We developed new software and data analysis tools for automated cell tracking which enable cell chemotaxis to be efficiently analyzed. Copyright 2004 Wiley-Liss, Inc.
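As a rough illustration of the frame-to-frame idea behind active-contour cell tracking (not the authors' adapted implementation), the sketch below re-fits a snake on each frame, seeded from the previous frame's converged contour; the smoothing and snake parameters are assumptions.

```python
# Minimal active-contour tracking sketch (illustrative only; not the authors' software).
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_cell(frames, center, radius=15):
    """Follow one cell across frames; returns the fitted contour for each frame."""
    t = np.linspace(0, 2 * np.pi, 100)
    snake = np.column_stack([center[0] + radius * np.sin(t),
                             center[1] + radius * np.cos(t)])  # initial (row, col) circle
    contours = []
    for frame in frames:
        smoothed = gaussian(frame, sigma=2, preserve_range=True)
        # Re-fit the snake on each frame, seeded by the previous frame's result,
        # so the contour follows gradual changes in cell shape and position.
        snake = active_contour(smoothed, snake, alpha=0.015, beta=10, gamma=0.001)
        contours.append(snake.copy())
    return contours

# A trajectory can then be taken as the contour centroid per frame:
# trajectory = [c.mean(axis=0) for c in track_cell(frames, center=(120, 80))]
```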
NASA Astrophysics Data System (ADS)
Viger, R. J.; Van Beusekom, A. E.
2016-12-01
The treatment of glaciers in modeling requires information about their shape and extent. This presentation discusses new methods and their application in a new glacier-capable variant of the USGS PRMS model, a physically-based, spatially distributed daily time-step model designed to simulate the runoff and evolution of glaciers through time. In addition to developing parameters describing PRMS land surfaces (hydrologic response units, HRUs), several of the analyses and products are likely of interest to the cryospheric science community in general. The first method is a (fully automated) variation of logic previously presented in the literature for definition of the glacier centerline. Given that the surface of a glacier might be convex, using traditional topographic analyses based on a DEM to trace a path down the glacier is not reliable. Instead a path is derived based on a cost function. Although only a single path is presented in our results, the method can be easily modified to delineate a branched network of centerlines for each glacier. The second method extends the glacier terminus downslope by an arbitrary distance, according to local surface topography. This product can be used to explore possible, if unlikely, scenarios under which glacier area grows. More usefully, this method can be used to approximate glacier extents from previous years without needing historical imagery. The final method presents an approach for segmenting the glacier into altitude-based HRUs. Successful integration of this information with traditional approaches for discretizing the non-glacierized portions of a basin requires several additional steps. These include synthesizing the glacier centerline network with one developed with a traditional DEM analysis, ensuring that flow can be routed under and beyond glaciers to a basin outlet. Results are presented based on analysis of the Copper River Basin, Alaska.
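A minimal sketch of the cost-function idea for the centerline, assuming the glacier outline and the head/terminus points are already known; the cost definition below (cheap far from the margin, expensive near it) is one plausible choice, not necessarily the one used in the USGS workflow.

```python
# Sketch of a cost-function centerline trace (illustrative, not the USGS implementation).
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.graph import route_through_array

def glacier_centerline(glacier_mask, head_rc, terminus_rc):
    """Least-cost path from head to terminus through a glacier mask.

    glacier_mask : 2-D bool array, True on glacier ice
    head_rc, terminus_rc : (row, col) endpoints (assumed known)
    """
    # Cheap to travel far from the margin, expensive near it: this pulls the path
    # toward the middle of the glacier even where the surface is convex.
    dist_to_margin = distance_transform_edt(glacier_mask)
    cost = 1.0 / (dist_to_margin + 1.0)
    cost[~glacier_mask] = 1e6            # large penalty: effectively never leave the glacier
    path, total_cost = route_through_array(cost, head_rc, terminus_rc,
                                           fully_connected=True)
    return np.array(path), total_cost
```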
NASA Astrophysics Data System (ADS)
Wu, Binlin; Mukherjee, Sushmita; Jain, Manu
2016-03-01
Distinguishing chromophobe renal cell carcinoma (chRCC) from oncocytoma on hematoxylin and eosin images may be difficult and require time-consuming ancillary procedures. Multiphoton microscopy (MPM), an optical imaging modality, was used to rapidly generate sub-cellular histological resolution images from formalin-fixed unstained tissue sections from chRCC and oncocytoma. Tissues were excited using a 780 nm wavelength and emission signals (including second harmonic generation and autofluorescence) were collected in different channels between 390 nm and 650 nm. Granular structure in the cell cytoplasm was observed in both chRCC and oncocytoma. Quantitative morphometric analysis was conducted to distinguish chRCC and oncocytoma. To perform the analysis, cytoplasm and granules in tumor cells were segmented from the images. Their area and fluorescence intensity were found in different channels. Multiple features were measured to quantify the morphological and fluorescence properties. A linear support vector machine (SVM) was used for classification. Re-substitution validation, cross validation and receiver operating characteristic (ROC) curve were implemented to evaluate the efficacy of the SVM classifier. A wrapper feature algorithm was used to select the optimal features which provided the best predictive performance in separating the two tissue types (classes). Statistical measures such as sensitivity, specificity, accuracy and area under curve (AUC) of ROC were calculated to evaluate the efficacy of the classification. Over 80% accuracy was achieved as the predictive performance. This method, if validated on a larger and more diverse sample set, may serve as an automated rapid diagnostic tool to differentiate between chRCC and oncocytoma. An advantage of such automated methods is that they are free from investigator bias and variability.
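The classification and validation steps named above (linear SVM, cross-validation, ROC/AUC) can be illustrated with a short, hedged sketch; the morphometric feature matrix X and binary labels y are placeholders for the measured cytoplasm/granule features.

```python
# Illustrative linear-SVM classification with cross-validated ROC/AUC
# (mirrors the analysis steps described; X and y are hypothetical placeholders).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_curve, auc, accuracy_score

def evaluate_classifier(X, y):
    """X: morphometric features per image; y: 0 = oncocytoma, 1 = chRCC."""
    clf = SVC(kernel="linear", probability=True, random_state=0)
    # Cross-validated scores avoid the optimism of pure re-substitution validation.
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    fpr, tpr, _ = roc_curve(y, scores)
    preds = (scores >= 0.5).astype(int)
    return {"accuracy": accuracy_score(y, preds), "auc": auc(fpr, tpr)}
```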
Process development for automated solar cell and module production. Task 4: Automated array assembly
NASA Technical Reports Server (NTRS)
1980-01-01
A process sequence which can be used in conjunction with automated equipment for the mass production of solar cell modules for terrestrial use was developed. The process sequence was then critically analyzed from a technical and economic standpoint to determine the technological readiness of certain process steps for implementation. The steps receiving analysis were: back contact metallization, automated cell array layup/interconnect, and module edge sealing. For automated layup/interconnect, both hard automation and programmable automation (using an industrial robot) were studied. The programmable automation system was then selected for actual hardware development.
Sahore, Vishal; Sonker, Mukul; Nielsen, Anna V; Knob, Radim; Kumar, Suresh; Woolley, Adam T
2018-01-01
We have developed multichannel integrated microfluidic devices for automated preconcentration, labeling, purification, and separation of preterm birth (PTB) biomarkers. We fabricated multilayer poly(dimethylsiloxane)-cyclic olefin copolymer (PDMS-COC) devices that perform solid-phase extraction (SPE) and microchip electrophoresis (μCE) for automated PTB biomarker analysis. The PDMS control layer had a peristaltic pump and pneumatic valves for flow control, while the PDMS fluidic layer had five input reservoirs connected to microchannels and a μCE system. The COC layers had a reversed-phase octyl methacrylate porous polymer monolith for SPE and fluorescent labeling of PTB biomarkers. We determined μCE conditions for two PTB biomarkers, ferritin (Fer) and corticotropin-releasing factor (CRF). We used these integrated microfluidic devices to preconcentrate and purify off-chip-labeled Fer and CRF in an automated fashion. Finally, we performed a fully automated on-chip analysis of unlabeled PTB biomarkers, involving SPE, labeling, and μCE separation with 1 h total analysis time. These integrated systems have strong potential to be combined with upstream immunoaffinity extraction, offering a compact sample-to-answer biomarker analysis platform. Graphical abstract Pressure-actuated integrated microfluidic devices have been developed for automated solid-phase extraction, fluorescent labeling, and microchip electrophoresis of preterm birth biomarkers.
1978-10-20
Preparation of the Battlefield (IPB) - Phase A An Automated Approach to Terrain and Mobility Corridor Analysis Prepared For The Battlefield Systems... the Battlefield (IPB) - Phase A An Automated Approach to Terrain and Mobility Corridor Analysis, Prepared For The Battlefield Systems Integration... series of snapshots developed for Option A. The situation snapshots would be developed in like manner for each option, and stored in an
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.
2014-02-01
Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.
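A minimal numpy sketch of the two screening rules named above, time-of-flight indications and backwall amplitude dropout, applied to an array of A-scans; the array layout, thresholds, and the -6 dB dropout criterion are assumptions, not the ADA software's settings.

```python
# Minimal sketch of the two screening rules named above (thresholds are assumptions).
import numpy as np

def flag_indications(ascans, dt, expected_backwall_tof, tof_tol=0.2e-6,
                     amp_dropout_db=-6.0):
    """ascans: 2-D array (n_positions, n_samples) of rectified A-scan amplitudes."""
    n_pos = ascans.shape[0]
    peak_idx = ascans.argmax(axis=1)
    tof = peak_idx * dt                               # time of flight of the strongest echo
    backwall_amp = ascans[np.arange(n_pos),
                          int(round(expected_backwall_tof / dt))]
    ref = np.median(backwall_amp)                     # reference backwall level
    dropout = 20 * np.log10(backwall_amp / ref) < amp_dropout_db
    early_echo = tof < (expected_backwall_tof - tof_tol)
    return early_echo | dropout                       # scan positions flagged for review
```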
Hovnanians, Ninel; Win, Theresa; Makkiya, Mohammed; Zheng, Qi; Taub, Cynthia
2017-11-01
To assess the efficiency and reproducibility of automated measurements of left ventricular (LV) volumes and LV ejection fraction (LVEF) in comparison to manually traced biplane Simpson's method. This is a single-center prospective study. Apical four- and two-chamber views were acquired in patients in sinus rhythm. Two operators independently measured LV volumes and LVEF using biplane Simpson's method. In addition, the image analysis software a2DQ on the Philips EPIQ system was applied to automatically assess the LV volumes and LVEF. Time spent on each analysis, using both methods, was documented. Concordance of echocardiographic measures was evaluated using intraclass correlation (ICC) and Bland-Altman analysis. Manual tracing and automated measurement of LV volumes and LVEF were performed in 184 patients with a mean age of 67.3 ± 17.3 years and BMI 28.0 ± 6.8 kg/m². ICC and Bland-Altman analysis showed good agreements between manual and automated methods measuring LVEF, end-systolic, and end-diastolic volumes. The average analysis time was significantly less using the automated method than manual tracing (116 vs 217 seconds/patient, P < .0001). Automated measurement using the novel image analysis software a2DQ on the Philips EPIQ system produced accurate, efficient, and reproducible assessment of LV volumes and LVEF compared with manual measurement. © 2017, Wiley Periodicals, Inc.
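For reference, the manually traced benchmark above is the biplane method of disks (Simpson's rule); a worked sketch of the volume and EF arithmetic is shown below, with hypothetical per-disk cavity diameters from the apical four- and two-chamber views.

```python
# Biplane Simpson's (method of disks) for LV volume and EF; inputs are hypothetical
# per-disk cavity diameters traced in the apical 4- and 2-chamber views.
import numpy as np

def simpson_biplane_volume(d_4ch_cm, d_2ch_cm, lv_length_cm):
    """Sum of elliptical disk volumes: V = (pi/4) * (L/N) * sum(a_i * b_i)."""
    a = np.asarray(d_4ch_cm, dtype=float)
    b = np.asarray(d_2ch_cm, dtype=float)
    n = len(a)                                    # number of disks (typically 20)
    return np.pi / 4.0 * (lv_length_cm / n) * np.sum(a * b)   # volume in mL (cm^3)

def ejection_fraction(edv_ml, esv_ml):
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# edv = simpson_biplane_volume(d4_ed, d2_ed, L_ed)
# esv = simpson_biplane_volume(d4_es, d2_es, L_es)
# lvef = ejection_fraction(edv, esv)
```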
Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis
Garrison, Kathleen A.; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J.; Aziz-Zadeh, Lisa S.
2015-01-01
Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant’s structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant’s non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design. PMID:26441816
A proposition for the classification of carbonaceous chondritic micrometeorites
NASA Technical Reports Server (NTRS)
Rietmeijer, Frans J. M.
1994-01-01
Classification of interplanetary dust particles (IDP's) should be unambiguous and, if possible, provide an opportunity to interrelate these ultrafine IDP's with the matrices of undifferentiated meteorites. I prefer a scheme of chemical groupings and petrologic classes that is based on primary IDP properties that can be determined without prejudice by individual investigators. For IDP's of 2-50 microns these properties are bulk elemental chemistry, morphology, shape, and optical properties. The two major chemical groups are readily determined by energy dispersive spectroscopic analysis using the scanning or analytical electron microscope. Refinement of chondritic IDP classification is possible using the dominant mineral species, e.g. olivine, pyroxene, and layer silicates, which are readily inferred from FTIR and automated chemical analysis. Petrographic analysis of phyllosilicate-rich IDP's will identify smectite-rich and serpentine-rich particles. Chondritic IDP's are also classified according to morphology, viz., CP and CF IDP's are aggregate particles that differ significantly in porosity, while the dense CS IDP's have a smooth surface. The CP IDP's are characterized by an anhydrous silicate mineralogy, but small amounts of layer silicates may be present. Distinction between the CP and CF IDP's is somewhat ambiguous, but the unique CP IDP's are fluffy, or porous, ultrafine-grained aggregates. The CP IDP's, which may contain silicate whiskers, are the most carbon-rich extraterrestrial material presently known. The CF IDP's are much less porous than CP IDP's. Using particle type definitions, CP IDP's in the NASA JSC Cosmic Dust Catalogs are approx. 15 percent of all IDP's that include nonchondritic spheres. Most aggregate particles are of the CF type.
Granulometric profiling of aeolian dust deposits by automated image analysis
NASA Astrophysics Data System (ADS)
Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán
2016-04-01
Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits with a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis of Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles were automatically recorded in this study from 15 loess and paleosoil samples from the captured high-resolution images. Several size (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of optical properties of the material. Intensity values are dependent on chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments (Malvern Mastersizer 3000 with a Hydro LV unit; Fritsch Analysette 22 Microtec Plus and Horiba Partica LA-950 v2) and SEM micrographs. To date, very few data have been published on automated image analyses of size and shape parameters of sedimentary deposits; accordingly, many uncertainties exist about the relationship among the results of the different applied methods. Support of the Hungarian Research Fund OTKA under contract PD108708 (for G. Varga) is gratefully acknowledged. It was additionally supported (for G. Varga) by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences.
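The instrument software computes the listed parameters internally; the sketch below derives comparable size and shape descriptors from a binary particle image using common definitions (circle-equivalent diameter, elongation, circularity, convexity), which may differ in detail from the Morphologi software's own formulas.

```python
# Sketch of common size/shape descriptors for segmented particles; the definitions of
# elongation, circularity and convexity follow common usage and may differ from the
# instrument software's exact formulas.
import numpy as np
from skimage.measure import label, regionprops

def particle_descriptors(binary_image, pixel_size_um=1.0):
    rows = []
    for p in regionprops(label(binary_image)):
        area = p.area * pixel_size_um ** 2
        ce_diameter = p.equivalent_diameter * pixel_size_um      # circle-equivalent diameter
        elongation = (1.0 - p.minor_axis_length / p.major_axis_length
                      if p.major_axis_length else 0.0)
        circularity = 4.0 * np.pi * p.area / p.perimeter ** 2 if p.perimeter else 0.0
        convexity = p.area / p.convex_area                       # area-based (a.k.a. solidity)
        rows.append({"area_um2": area, "CE_diameter_um": ce_diameter,
                     "elongation": elongation, "circularity": circularity,
                     "convexity": convexity})
    return rows
```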
Rowland, Joel C.; Shelef, Eitan; Pope, Paul A.; ...
2016-07-15
Remotely sensed imagery of rivers has long served as a means for characterizing channel properties and detection of planview change. In the last decade the dramatic increase in the availability of satellite imagery and processing tools has created the potential to greatly expand the spatial and temporal scale of our understanding of river morphology and dynamics. To date, the majority of GIS and automated analyses of planview changes in rivers from remotely sensed data have been developed for single-threaded meandering river systems. These methods have limited applicability to many of the earth's rivers with complex multi-channel planforms. Here we present the methodologies of a set of analysis algorithms collectively called Spatially Continuous Riverbank Erosion and Accretion Measurements (SCREAM). SCREAM analyzes planview river metrics regardless of river morphology. These algorithms quantify both the erosion and accretion rates of riverbanks from binary masks of channels generated from imagery acquired at two time periods. Additionally, the program quantifies the area of change between river channels and the surrounding floodplain and area of islands lost or formed between these two time periods. To examine variations in erosion rates in relation to local channel attributes and make rate comparisons between river systems of varying sizes, the program determines channel widths and bank curvature at every bank pixel. SCREAM was developed and tested on rivers with diverse and complex planform morphologies in imagery acquired from a range of observational platforms with varying spatial resolutions. Here, validation and verification of SCREAM-generated metrics against manual measurements show no significant measurement errors in determination of channel width, erosion, and bank aspects. SCREAM has the potential to provide data for both the quantitative examination of the controls on erosion rates and for the comparison of these rates across river systems ranging broadly in size and planform morphology.
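SCREAM itself computes spatially continuous, per-bank-pixel rates; as a much-reduced illustration of the underlying bookkeeping, the sketch below derives areal erosion and accretion from two co-registered binary channel masks. The mask convention and pixel-area conversion are assumptions.

```python
# Minimal sketch of areal bank change from two co-registered binary channel masks
# (illustrative only; SCREAM computes spatially continuous, per-bank-pixel rates).
import numpy as np

def planview_change(channel_t1, channel_t2, pixel_area_m2, years):
    """channel_t1/t2: bool arrays, True where the channel (water) is present at each date."""
    eroded   = ~channel_t1 &  channel_t2      # floodplain converted to channel
    accreted =  channel_t1 & ~channel_t2      # channel converted to floodplain
    return {
        "erosion_m2_per_yr":   eroded.sum()   * pixel_area_m2 / years,
        "accretion_m2_per_yr": accreted.sum() * pixel_area_m2 / years,
        "net_m2_per_yr": (eroded.sum() - accreted.sum()) * pixel_area_m2 / years,
    }
```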
Flow Classification and Cave Discharge Characteristics in Unsaturated Karst Formation
NASA Astrophysics Data System (ADS)
Mariethoz, G.; Mahmud, K.; Baker, A.; Treble, P. C.
2015-12-01
In this study we utilize the spatial array of automated cave drip monitoring in two large chambers of the Golgotha Cave, SW Australia, developed in Quaternary aeolianite (dune limestone), with the aim of understanding infiltration water movement via the relationships between infiltration, stalactite morphology and groundwater recharge. Mahmud et al. (2015) used terrestrial LiDAR measurements to analyze stalactite morphology and to characterize possible flow locations in this cave. Here we identify the stalactites feeding the drip loggers and classify each as matrix (soda straw or icicle), fracture or combined-flow. These morphology-based classifications are compared with flow characteristics from the drip logger time series and the discharge from each stalactite is calculated. The total estimated discharge from each area is compared with infiltration estimates to better understand flow from the surface to the cave ceilings of the studied areas. The drip discharge data agree with the morphology-based flow classification in terms of flow and geometrical characteristics of cave ceiling stalactites. No significant relationships were observed between the drip logger discharge, skewness and coefficient of variation and overburden thickness, likely due to potential vadose-zone storage and the increasing complexity of the karst architecture. However, these properties can be used to characterize different flow categories. A correlation matrix demonstrates that similar flow categories are positively correlated, implying significant influence of spatial distribution. The infiltration water comes from a larger surface area, suggesting that infiltration is being focused toward the studied ceiling areas of each chamber. Most of the ceiling in the cave site is dry, suggesting the possibility of capillary effects with water moving around the cave rather than passing through it. Reference: Mahmud et al. (2015), Terrestrial Lidar Survey and Morphological Analysis to Identify Infiltration Properties in the Tamala Limestone, Western Australia, IEEE JSTARS, DOI: 10.1109/JSTARS.2015.2451088, in press.
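The per-logger statistics mentioned above (discharge, coefficient of variation, skewness) and the cross-logger correlation matrix can be sketched as follows; the drip volume per tip and the tabular layout are assumptions, not values from the study.

```python
# Sketch of per-logger flow statistics and the cross-logger correlation matrix
# (the drip volume per tip and the pandas layout are assumptions, not from the study).
import numpy as np
import pandas as pd
from scipy.stats import skew

DRIP_VOLUME_ML = 0.15   # assumed volume of one drip

def logger_stats(drip_counts):
    """drip_counts: DataFrame indexed by time, one column per drip logger."""
    discharge = drip_counts * DRIP_VOLUME_ML          # mL per sampling interval
    summary = pd.DataFrame({
        "mean_discharge": discharge.mean(),
        "cov": discharge.std() / discharge.mean(),    # coefficient of variation
        "skewness": discharge.apply(skew),
    })
    corr = discharge.corr()     # loggers with similar flow types should correlate positively
    return summary, corr
```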
NanoTopoChip: High-throughput nanotopographical cell instruction.
Hulshof, Frits F B; Zhao, Yiping; Vasilevich, Aliaksei; Beijer, Nick R M; de Boer, Meint; Papenburg, Bernke J; van Blitterswijk, Clemens; Stamatialis, Dimitrios; de Boer, Jan
2017-10-15
Surface topography is able to influence cell phenotype in numerous ways and offers opportunities to manipulate cells and tissues. In this work, we develop the Nano-TopoChip and study the cell instructive effects of nanoscale topographies. A combination of deep UV projection lithography and conventional lithography was used to fabricate a library of more than 1200 different defined nanotopographies. To illustrate the cell instructive effects of nanotopography, actin-RFP labeled U2OS osteosarcoma cells were cultured and imaged on the Nano-TopoChip. Automated image analysis shows that of many cell morphological parameters, cell spreading, cell orientation and actin morphology are mostly affected by the nanotopographies. Additionally, by using modeling, the changes of cell morphological parameters could be predicted from several feature shape parameters such as lateral size and spacing. This work overcomes the technological challenges of fabricating high quality defined nanoscale features on unprecedentedly large surface areas of a material relevant for tissue culture such as PS and the screening system is able to infer nanotopography - cell morphological parameter relationships. Our screening platform provides opportunities to identify and study the effect of nanotopography with beneficial properties for the culture of various cell types. The nanotopography of biomaterial surfaces can be modified to influence adhering cells with the aim to improve the performance of medical implants and tissue culture substrates. However, the necessary knowledge of the underlying mechanisms remains incomplete. One reason for this is the limited availability of high-resolution nanotopographies on relevant biomaterials, suitable to conduct systematic biological studies. The present study shows the fabrication of a library of nano-sized surface topographies with high fidelity. The potential of this library, called the 'NanoTopoChip', is shown in a proof of principle HTS study which demonstrates how cells are affected by nanotopographies. The large dataset, acquired by quantitative high-content imaging, allowed us to use predictive modeling to describe how feature dimensions affect cell morphology. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Automated Assignment of MS/MS Cleavable Cross-Links in Protein 3D-Structure Analysis
NASA Astrophysics Data System (ADS)
Götze, Michael; Pettelkau, Jens; Fritzsche, Romy; Ihling, Christian H.; Schäfer, Mathias; Sinz, Andrea
2015-01-01
CID-MS/MS cleavable cross-linkers hold an enormous potential for an automated analysis of cross-linked products, which is essential for conducting structural proteomics studies. The created characteristic fragment ion patterns can easily be used for an automated assignment and discrimination of cross-linked products. To date, there are only a few software solutions available that make use of these properties, but none allows for an automated analysis of cleavable cross-linked products. The MeroX software fills this gap and presents a powerful tool for protein 3D-structure analysis in combination with MS/MS cleavable cross-linkers. We show that MeroX allows an automatic screening of characteristic fragment ions, considering static and variable peptide modifications, and effectively scores different types of cross-links. No manual input is required for a correct assignment of cross-links and false discovery rates are calculated. The self-explanatory graphical user interface of MeroX provides easy access for an automated cross-link search platform that is compatible with commonly used data file formats, enabling analysis of data originating from different instruments. The combination of an MS/MS cleavable cross-linker with a dedicated software tool for data analysis provides an automated workflow for 3D-structure analysis of proteins. MeroX is available at
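MeroX's scoring is considerably more involved; the toy sketch below only illustrates the core idea of screening an MS/MS peak list for the characteristic mass doublets produced by a cleavable cross-linker within a ppm tolerance. The doublet mass offset is a placeholder, not a real reagent value.

```python
# Toy sketch of screening an MS/MS peak list for characteristic doublets of a cleavable
# cross-linker (the mass offset below is a placeholder, not a real reagent value).
import numpy as np

DOUBLET_DELTA = 25.979  # hypothetical mass difference between the two cleavage products

def find_characteristic_doublets(mz, intensity, tol_ppm=10.0, min_rel_int=0.05):
    """Return index pairs (i, j) whose m/z values differ by DOUBLET_DELTA within tol_ppm."""
    mz = np.asarray(mz, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    keep = intensity >= min_rel_int * intensity.max()     # ignore weak noise peaks
    idx = np.flatnonzero(keep)
    pairs = []
    for i in idx:
        target = mz[i] + DOUBLET_DELTA
        tol = target * tol_ppm * 1e-6
        for j in idx[np.abs(mz[idx] - target) <= tol]:
            pairs.append((int(i), int(j)))
    return pairs
```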
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
The objectives of the analysis are to evaluate the application of a number of building automation system capabilities using the Park Plaza Building as a case study. The study examines the energy and cost effectiveness of several energy management strategies available in the building automation system, as well as strategies that are not currently part of the system. The strategies are also evaluated in terms of their reliability and usefulness in this building.
Non-gynecologic cytology on liquid-based preparations: A morphologic review of facts and artifacts.
Hoda, Rana S
2007-10-01
Liquid-based preparations (LBP) are increasingly being used both for gynecologic (gyn) and non-gynecologic (non-gyn) cytology including fine needle aspirations (FNA). The two FDA-approved LBP currently in use are ThinPrep (TP; Cytyc Corp, Marlborough, MA) and SurePath (SP; TriPath Imaging Inc., Burlington, NC). TP was approved for cervico-vaginal (Pap test) cytology in 1996 and SP in 1999, and both have since also been used for non-gyn cytology. In the LBP, instead of being smeared, cells are rinsed into a liquid preservative collection medium and processed on automated devices. Even after a decade of use, the morphological interpretation of LBP remains a diagnostic challenge because of somewhat altered morphology and artifacts or facts resulting from the fixation and processing techniques. These changes include a cleaner background with altered or reduced background and extracellular elements; architectural changes such as smaller cell clusters and sheets, breakage of papillae; altered cell distribution with more dyscohesion; and changes in cellular morphology with enhanced nuclear features, smaller cell size and slightly more three-dimensional (3-D) clusters. Herein, we review the published literature on morphological aspects of LBP for non-gyn cytology. (c) 2007 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin
2013-01-01
Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, both using data from the telemedicine network and other public databases.
Automated tumor analysis for molecular profiling in lung cancer
Boyd, Clinton; James, Jacqueline A.; Loughrey, Maurice B.; Hougton, Joseph P.; Boyle, David P.; Kelly, Paul; Maxwell, Perry; McCleary, David; Diamond, James; McArt, Darragh G.; Tunstall, Jonathon; Bankhead, Peter; Salto-Tellez, Manuel
2015-01-01
The discovery and clinical application of molecular biomarkers in solid tumors increasingly relies on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires pathological review of haematoxylin & eosin (H&E) stained slides to ensure sample quality, to assess tumor DNA sufficiency by visually estimating the percentage of tumor nuclei, and to annotate the tumor for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We subsequently describe the development and validation of a system called TissueMark for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation of cases using TissueMark, strong concordance with manually drawn boundaries and identical EGFR mutational status, following manual macrodissection from the image analysis-generated tumor boundaries. Automated analysis of cell counts for % tumor measurements by TissueMark showed reduced variability and significant correlation (p < 0.001) with benchmark tumor cell counts. This study demonstrates a robust image analysis technology that can facilitate the automated quantitative analysis of tissue samples for molecular profiling in discovery and diagnostics. PMID:26317646
Choël, Marie; Deboudt, Karine; Osán, János; Flament, Pascal; Van Grieken, René
2005-09-01
Atmospheric aerosols consist of a complex heterogeneous mixture of particles. Single-particle analysis techniques are known to provide unique information on the size-resolved chemical composition of aerosols. A scanning electron microscope (SEM) combined with a thin-window energy-dispersive X-ray (EDX) detector enables the morphological and elemental analysis of single particles down to 0.1 microm with a detection limit of 1-10 wt %, low-Z elements included. To obtain data statistically representative of the air masses sampled, a computer-controlled procedure can be implemented in order to run hundreds of single-particle analyses (typically 1000-2000) automatically in a relatively short period of time (generally 4-8 h, depending on the setup and on the particle loading). However, automated particle analysis by SEM-EDX raises two practical challenges: the accuracy of the particle recognition and the reliability of the quantitative analysis, especially for micrometer-sized particles with low atomic number contents. Since low-Z analysis is hampered by the use of traditional polycarbonate membranes, an alternate choice of substrate is a prerequisite. In this work, boron is being studied as a promising material for particle microanalysis. As EDX is generally said to probe a volume of approximately 1 microm3, geometry effects arise from the finite size of microparticles. These particle geometry effects must be corrected by means of a robust concentration calculation procedure. Conventional quantitative methods developed for bulk samples generate elemental concentrations considerably in error when applied to microparticles. A new methodology for particle microanalysis, combining the use of boron as the substrate material and a reverse Monte Carlo quantitative program, was tested on standard particles ranging from 0.25 to 10 microm. We demonstrate that the quantitative determination of low-Z elements in microparticles is achievable and that highly accurate results can be obtained using the automatic data processing described here compared to conventional methods.
Automated X-ray image analysis for cargo security: Critical review and future promise.
Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D
2017-01-01
We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.
Marin, Diego; Gegundez-Arias, Manuel E; Suero, Angel; Bravo, Jose M
2015-02-01
Development of automatic retinal disease diagnosis systems based on retinal image computer analysis can provide remarkably quicker screening programs for early detection. Such systems are mainly focused on the detection of the earliest ophthalmic signs of illness and require previous identification of fundal landmark features such as optic disc (OD), fovea or blood vessels. A methodology for accurate center-position location and OD retinal region segmentation on digital fundus images is presented in this paper. The methodology performs a set of iterative opening-closing morphological operations on the original retinography intensity channel to produce a bright region-enhanced image. Taking blood vessel confluence at the OD into account, a 2-step automatic thresholding procedure is then applied to obtain a reduced region of interest, where the center and the OD pixel region are finally obtained by performing the circular Hough transform on a set of OD boundary candidates generated through the application of the Prewitt edge detector. The methodology was evaluated on 1200 and 1748 fundus images from the publicly available MESSIDOR and MESSIDOR-2 databases, acquired from diabetic patients and thus being clinical cases of interest within the framework of automated diagnosis of retinal diseases associated to diabetes mellitus. This methodology proved highly accurate in OD-center location: average Euclidean distance between the methodology-provided and actual OD-center position was 6.08, 9.22 and 9.72 pixels for retinas of 910, 1380 and 1455 pixels in size, respectively. On the other hand, OD segmentation evaluation was performed in terms of Jaccard and Dice coefficients, as well as the mean average distance between estimated and actual OD boundaries. Comparison with the results reported by other reviewed OD segmentation methodologies shows our proposal renders better overall performance. Its effectiveness and robustness make this proposed automated OD location and segmentation method a suitable tool to be integrated into a complete prescreening system for early diagnosis of retinal diseases. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
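The sketch below mirrors the processing steps named in the abstract (iterative opening-closing, thresholding, Prewitt edges, circular Hough transform), but the structuring-element size, radius range, and thresholds are assumptions rather than the paper's tuned values.

```python
# Sketch of the optic-disc localization steps named above (structuring-element size,
# radius range and thresholds are assumptions, not the paper's tuned parameters).
import numpy as np
from skimage.morphology import opening, closing, disk
from skimage.filters import prewitt, threshold_otsu
from skimage.transform import hough_circle, hough_circle_peaks

def locate_optic_disc(intensity, n_iter=3, radii=np.arange(30, 90, 5)):
    enhanced = intensity.astype(float)
    for _ in range(n_iter):                        # iterative opening-closing enhances bright regions
        enhanced = closing(opening(enhanced, disk(8)), disk(8))
    bright = enhanced > threshold_otsu(enhanced)   # reduced region of interest
    edges = prewitt(enhanced) * bright             # OD boundary candidates inside the ROI
    accum = hough_circle(edges > edges.mean(), radii)
    _, cx, cy, rad = hough_circle_peaks(accum, radii, total_num_peaks=1)
    return (int(cy[0]), int(cx[0])), int(rad[0])   # OD centre (row, col) and radius in pixels
```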
Pan, Panpan; Chen, Jingdi; Fan, Tiantang; Hu, Yimin; Wu, Tao; Zhang, Qiqing
2016-04-01
This research aims to prepare biphasic-induced magnetic composite microcapsules (BIMCM) as a promising environmental stimuli-responsive delivery vehicle to address the problem of burst drug release. The paper presents a novel automated in situ click technology for magnetic chitosan/nano hydroxyapatite (CS/nHA) microcapsules. Fe3O4 magnetic nanoparticles (MNP) and nHA were simultaneously in situ crystallized by a one-step process. Icariin (ICA), a plant-derived flavonol glycoside, was incorporated to study the drug release properties of BIMCM. BIMCM were characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), and Thermal gravimetric analysis/Differential Scanning Calorimetry (TGA/DSC) in order to reveal their composition and surface morphology as well as the role of the in situ generated Fe3O4 MNP and nHA. The magnetic test showed that the BIMCM were superparamagnetic. Both in situ generated Fe3O4 MNP and nHA serve as stable inorganic crosslinkers in BIMCM to form many intermolecular crosslinkages for the movability of the CS chains. This gives the ICA-loaded microcapsules sustained release behavior and results in self-adjustment of the surface morphology and decreased swelling and degradation rates. In addition, in vitro tests were systematically carried out to examine the biocompatibility of the microcapsules by MTT test, Wright-Giemsa staining assay and AO/EB fluorescent staining method. These results demonstrated that successful introduction of the in situ click Fe3O4 MNP provided an alternative strategy because of magnetic sensitivity and sustained release. As such, the novel ICA-loaded biphasic-induced magnetic CS/nHA/MNP microcapsules are expected to find potential applications in drug delivery systems for bone repair. Copyright © 2015 Elsevier B.V. All rights reserved.
CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.
Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali
2016-01-13
Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, timesaving but accurate and powerful tool to analyze large RNA-seq datasets and will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.
Electron microscopy and forensic practice
NASA Astrophysics Data System (ADS)
Kotrlý, Marek; Turková, Ivana
2013-05-01
Electron microanalysis in forensic practice ranks among the basic applications used in the investigation of traces (latents, stains, etc.) from crime scenes. Applying an electron microscope allows for rapid screening and provides initial information for a wide range of traces. SEM with EDS/WDS makes it possible to observe the surface topography and morphology of samples and to examine their chemical composition. The physical laboratory of the Institute of Criminalistics Prague uses SEM especially for the examination of inorganic samples, and rarely for biological and other material. Recently, the possibilities of electron microscopy have been extended considerably using dual systems with a focused ion beam. These systems are applied mainly in the study of inner micro- and nanoparticles, thin layers (intersecting lines in graphical forensic examinations, analysis of layers of functional glass, etc.), the study of alloy microdefects, creating 3D models of particles and aggregates, etc. Automated mineralogical analyses are a great asset to the analysis of mineral phases, particularly soils; the same holds for cathodoluminescence, predominantly colour cathodoluminescence and precise quantitative measurement of its spectral characteristics. Among the latest innovations beginning to appear in ordinary laboratories are TOF-SIMS systems and micro-Raman spectroscopy capable of achieving a spatial resolution comparable to that of EDS/WDS analysis.
Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...
1974-07-01
automated manufacturing processes and a rough technoeconomic evaluation of those concepts. Our evaluation is largely based on estimates; therefore, the...must be subjected to thorough analysis and experimental verification before they can be considered definitive. They are being published at this time...hardware and sensor technology, manufacturing engineering, automation, and economic analysis. Members of this team inspected over thirty manufacturing
Model-centric distribution automation: Capacity, reliability, and efficiency
Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...
2016-02-26
A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
Evaluation of automated time-lapse microscopy for assessment of in vitro activity of antibiotics.
Ungphakorn, Wanchana; Malmberg, Christer; Lagerbäck, Pernilla; Cars, Otto; Nielsen, Elisabet I; Tängdén, Thomas
2017-01-01
This study aimed to evaluate the potential of a new time-lapse microscopy based method (oCelloScope) to efficiently assess the in vitro antibacterial effects of antibiotics. Two E. coli and one P. aeruginosa strain were exposed to ciprofloxacin, colistin, ertapenem and meropenem in 24-h experiments. Background corrected absorption (BCA) derived from the oCelloScope was used to detect bacterial growth. The data obtained with the oCelloScope were compared with those of the automated Bioscreen C method and standard time-kill experiments and a good agreement in results was observed during 6-24h of experiments. Viable counts obtained at 1, 4, 6 and 24h during oCelloScope and Bioscreen C experiments were well correlated with the corresponding BCA and optical density (OD) data. Initial antibacterial effects during the first 6h of experiments were difficult to detect with the automated methods due to their high detection limits (approximately 10⁵ CFU/mL for oCelloScope and 10⁷ CFU/mL for Bioscreen C), the inability to distinguish between live and dead bacteria and early morphological changes of bacteria during exposure to ciprofloxacin, ertapenem and meropenem. Regrowth was more frequently detected in time-kill experiments, possibly related to the larger working volume with an increased risk of pre-existing or emerging resistance. In comparison with Bioscreen C, the oCelloScope provided additional information on bacterial growth dynamics in the range of 10⁵ to 10⁷ CFU/mL and morphological features. In conclusion, the oCelloScope would be suitable for detection of in vitro effects of antibiotics, especially when a large number of regimens need to be tested. Copyright © 2016 Elsevier B.V. All rights reserved.
Lobo, Daniel; Morokuma, Junji; Levin, Michael
2016-09-01
Automated computational methods can infer dynamic regulatory network models directly from temporal and spatial experimental data, such as genetic perturbations and their resultant morphologies. Recently, a computational method was able to reverse-engineer the first mechanistic model of planarian regeneration that can recapitulate the main anterior-posterior patterning experiments published in the literature. Validating this comprehensive regulatory model via novel experiments that had not yet been performed would add to our understanding of the remarkable regeneration capacity of planarian worms and demonstrate the power of this automated methodology. Using the Michigan Molecular Interactions and STRING databases and the MoCha software tool, we characterized as hnf4 an unknown regulatory gene predicted to exist by the reverse-engineered dynamic model of planarian regeneration. Then, we used the dynamic model to predict the morphological outcomes under different single and multiple knock-downs (RNA interference) of hnf4 and its predicted gene pathway interactors β-catenin and hh. Interestingly, the model predicted that RNAi of hnf4 would rescue the abnormal regenerated phenotype (tailless) of RNAi of hh in amputated trunk fragments. Finally, we validated these predictions in vivo by performing the same surgical and genetic experiments with planarian worms, obtaining the same phenotypic outcomes predicted by the reverse-engineered model. These results suggest that hnf4 is a regulatory gene in planarian regeneration, validate the computational predictions of the reverse-engineered dynamic model, and demonstrate the automated methodology for the discovery of novel genes, pathways and experimental phenotypes. michael.levin@tufts.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
A Robotic Platform for Corn Seedling Morphological Traits Characterization
Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu
2017-01-01
Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892
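The camera-to-arm step described above amounts to applying a homogeneous transformation to every ToF point. A minimal sketch of that operation follows; the 4x4 matrix values are placeholders, not the calibration result from the paper.

```python
# A minimal sketch: apply a camera-to-arm homogeneous transform (e.g., from a
# hand-eye calibration) to a ToF point cloud so that views from several arm
# poses share one coordinate frame. The matrix values are placeholders.
import numpy as np

def transform_points(points_xyz: np.ndarray, T_cam_to_arm: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) camera-frame points; T_cam_to_arm: (4, 4) transform."""
    n = points_xyz.shape[0]
    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])   # (N, 4)
    return (homogeneous @ T_cam_to_arm.T)[:, :3]             # back to (N, 3)

# Placeholder transform: 90-degree rotation about z plus a small translation
T = np.array([[0, -1, 0, 0.10],
              [1,  0, 0, 0.05],
              [0,  0, 1, 0.30],
              [0,  0, 0, 1.00]])
cloud_cam = np.random.rand(1000, 3)          # stand-in ToF returns, in meters
cloud_arm = transform_points(cloud_cam, T)
print(cloud_arm.shape)
```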
Willis, B H; Barton, P; Pearmain, P; Bryan, S; Hyde, C
2005-03-01
To assess the effectiveness and cost-effectiveness of adding automated image analysis to cervical screening programmes. Searching of all major electronic databases to the end of 2000 was supplemented by a detailed survey for unpublished UK literature. Four systematic reviews were conducted according to recognised guidance. The review of 'clinical effectiveness' included studies assessing reproducibility and impact on health outcomes and processes in addition to evaluations of test accuracy. A discrete event simulation model was developed, although the economic evaluation ultimately relied on a cost-minimisation analysis. The predominant finding from the systematic reviews was the very limited amount of rigorous primary research. None of the included studies refers to the only commercially available automated image analysis device in 2002, the AutoPap Guided Screening (GS) System. The results of the included studies were debatably most compatible with automated image analysis being equivalent in test performance to manual screening. Concerning process, there was evidence that automation does lead to reductions in average slide processing times. In the PRISMATIC trial this was reduced from 10.4 to 3.9 minutes, a statistically significant and practically important difference. The economic evaluation tentatively suggested that the AutoPap GS System may be efficient. The key proviso is that credible data become available to support that the AutoPap GS System has test performance and processing times equivalent to those obtained for PAPNET. The available evidence is still insufficient to recommend implementation of automated image analysis systems. The priority for action remains further research, particularly the 'clinical effectiveness' of the AutoPap GS System. Assessing the cost-effectiveness of introducing automation alongside other approaches is also a priority.
Automated Acquisition of Proximal Femur Morphological Characteristics
NASA Astrophysics Data System (ADS)
Tabakovic, Slobodan; Zeljkovic, Milan; Milojevic, Zoran
2014-10-01
The success of hip arthroplasty surgery largely depends on the adjustment of the endoprosthesis to the patient's femur. This implies that the position of the femoral bone in relation to the pelvis is preserved and that the endoprosthesis position ensures its longevity. The dimensions and body shape of the hip joint endoprosthesis and its position after surgery depend on a number of geometrical parameters of the patient's femur. One of the most suitable methods for determining these parameters involves 3D reconstruction of the femur, based on diagnostic images, and subsequent determination of the required geometric parameters. In this paper, software for automated determination of geometric parameters of the femur is presented. A detailed software development procedure, aimed at faster and more efficient design of hip endoprostheses that meet patient-specific requirements, is also provided.
Sédille-Mostafaie, Nazanin; Engler, Hanna; Lutz, Susanne; Korte, Wolfgang
2013-06-01
Laboratories today face increasing pressure to automate operations due to increasing workloads and the need to reduce expenditure. Few studies to date have focussed on the laboratory automation of preanalytical coagulation specimen processing. In the present study, we examined whether a clinical chemistry automation protocol meets the preanalytical requirements for the analyses of coagulation. During the implementation of laboratory automation, we began to operate a pre- and postanalytical automation system. The preanalytical unit processes blood specimens for chemistry, immunology and coagulation by automated specimen processing. As the production of platelet-poor plasma is highly dependent on optimal centrifugation, we examined specimen handling under different centrifugation conditions in order to produce optimal platelet deficient plasma specimens. To this end, manually processed models centrifuged at 1500 g for 5 and 20 min were compared to an automated centrifugation model at 3000 g for 7 min. For analytical assays that are performed frequently enough to be targets for full automation, Passing-Bablok regression analysis showed close agreement between different centrifugation methods, with a correlation coefficient between 0.98 and 0.99 and a bias between -5% and +6%. For seldom performed assays that do not mandate full automation, the Passing-Bablok regression analysis showed acceptable to poor agreement between different centrifugation methods. A full automation solution is suitable and can be recommended for frequent haemostasis testing.
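As a rough illustration of the method-comparison statistics mentioned above, the following sketch estimates a slope and intercept from the median of pairwise slopes, a Theil-Sen-style simplification of the Passing-Bablok idea; the full Passing-Bablok procedure adds tie handling and a slope-dependent offset, and the paired values below are invented.

```python
# A simplified, Theil-Sen-style sketch of the regression idea behind
# Passing-Bablok method comparison; data are invented, not from the study.
import numpy as np

def pairwise_slope_fit(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    b = np.median(slopes)                 # slope estimate
    a = np.median(y - b * x)              # intercept estimate
    return a, b

# Hypothetical paired results: manual 1500 g / 20 min vs automated 3000 g / 7 min
manual    = [28.1, 30.4, 33.2, 35.8, 40.1, 44.6, 51.3]
automated = [27.6, 30.9, 33.0, 36.5, 39.4, 45.2, 50.1]
a, b = pairwise_slope_fit(manual, automated)
print(f"automated ~ {a:.2f} + {b:.3f} * manual")
```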
Combined process automation for large-scale EEG analysis.
Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E
2012-01-01
Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
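Two of the automated steps listed above (generation of user-defined band frequency waveforms and power spectral density analysis) can be sketched with SciPy as follows; the sampling rate and band limits are assumptions, and the trace is random noise rather than real EEG.

```python
# A minimal sketch of band-limited waveform generation and PSD analysis for one
# EEG channel; the 200 Hz sampling rate and 30-80 Hz band are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 200.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
eeg = np.random.randn(t.size)                # stand-in for one EEG channel

def band_limited(signal, low_hz, high_hz, fs, order=4):
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)            # zero-phase band-pass filter

gamma = band_limited(eeg, 30.0, 80.0, fs)    # user-defined frequency band
freqs, psd = welch(gamma, fs=fs, nperseg=512)
print("peak PSD frequency (Hz):", freqs[np.argmax(psd)])
```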
Microfluidic sorting and multimodal typing of cancer cells in self-assembled magnetic arrays.
Saliba, Antoine-Emmanuel; Saias, Laure; Psychari, Eleni; Minc, Nicolas; Simon, Damien; Bidard, François-Clément; Mathiot, Claire; Pierga, Jean-Yves; Fraisier, Vincent; Salamero, Jean; Saada, Véronique; Farace, Françoise; Vielh, Philippe; Malaquin, Laurent; Viovy, Jean-Louis
2010-08-17
We propose a unique method for cell sorting, "Ephesia," using columns of biofunctionalized superparamagnetic beads self-assembled in a microfluidic channel onto an array of magnetic traps prepared by microcontact printing. It combines the advantages of microfluidic cell sorting, notably the application of a well controlled, flow-activated interaction between cells and beads, and those of immunomagnetic sorting, notably the use of batch-prepared, well characterized antibody-bearing beads. On cell line mixtures, we demonstrated a capture yield better than 94%, and the possibility to cultivate the captured cells in situ. A second series of experiments involved clinical samples (blood, pleural effusion, and fine needle aspirates) obtained from healthy donors and patients with B-cell hematological malignant tumors (leukemia and lymphoma). The immunophenotype and morphology of B-lymphocytes were analyzed directly in the microfluidic chamber, and compared with conventional flow cytometry and visual cytology data, in a blind test. Immunophenotyping results using Ephesia were fully consistent with those obtained by flow cytometry. We obtained in situ high-resolution confocal three-dimensional images of the cell nuclei, showing intranuclear details consistent with conventional cytological staining. Ephesia thus provides a powerful approach to cell capture and typing allowing fully automated, high-resolution and quantitative immunophenotyping and morphological analysis. It requires at least 10 times smaller sample volume and cell numbers than cytometry, potentially increasing the range of indications and the success rate of microbiopsy-based diagnosis, and reducing analysis time and cost.
NASA Astrophysics Data System (ADS)
Bulusu, Kartik V.; Hussain, Shadman; Plesniak, Michael W.
2014-11-01
Secondary flow vortical patterns in arterial curvatures have the potential to affect several cardiovascular phenomena, e.g., progression of atherosclerosis by altering wall shear stresses, carotid atheromatous disease, thoracic aortic aneurysms and Marfan's syndrome. Temporal characteristics of secondary flow structures vis-à-vis the physiological (pulsatile) inflow waveform were explored by continuous wavelet transform (CWT) analysis of phase-locked, two-component, two-dimensional particle image velocimetry (PIV) data. Measurements were made in a 180° curved artery test section upstream of the curvature and at the 90° cross-sectional plane. Streamwise, upstream flow rate measurements were analyzed using a one-dimensional antisymmetric wavelet. Cross-stream measurements at the 90° location of the curved artery revealed interesting multi-scale, multi-strength coherent secondary flow structures. An automated process for coherent structure detection and vortical feature quantification was applied to large ensembles of PIV data. Metrics such as the number of secondary flow structures, their sizes and strengths were generated at every discrete time instance of the physiological inflow waveform. An autonomous data post-processing method incorporating two-dimensional CWT for coherent structure detection was implemented. Loss of coherence in secondary flow structures during the systolic deceleration phase is observed, in accordance with previous research. The algorithmic approach presented herein further elucidated the sensitivity and dependence of morphological changes in secondary flow structures on quasiperiodicity and the magnitude of temporal gradients in physiological inflow conditions.
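A minimal sketch of the one-dimensional wavelet step described above, using an antisymmetric wavelet from PyWavelets on a synthetic pulsatile waveform; the wavelet choice, scales and waveform are stand-ins, not the authors' settings.

```python
# A minimal sketch of a 1D continuous wavelet transform of a pulsatile inflow
# waveform; PyWavelets is assumed, and 'gaus1' (antisymmetric) is a stand-in.
import numpy as np
import pywt

fs = 500.0                                   # Hz, assumed sampling rate
t = np.arange(0, 4.0, 1 / fs)                # a few cardiac-like cycles
flow = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(flow, scales, "gaus1", sampling_period=1 / fs)
print(coeffs.shape)                          # (n_scales, n_samples)
print("frequency range covered: %.2f-%.2f Hz" % (freqs.min(), freqs.max()))
```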
Billi, Fabrizio; Benya, Paul; Kavanaugh, Aaron; Adams, John; Ebramzadeh, Edward; McKellop, Harry
2012-02-01
Numerous studies indicate highly crosslinked polyethylenes reduce the wear debris volume generated by hip arthroplasty acetabular liners. This, in turn, requires new methods to isolate and characterize the resulting particles. We describe a method for extracting polyethylene wear particles from the bovine serum typically used in wear tests and for characterizing their size, distribution, and morphology. Serum proteins were completely digested using an optimized enzymatic digestion method that prevented the loss of the smallest particles and minimized their clumping. Density-gradient ultracentrifugation was designed to remove contaminants and recover the particles without filtration, depositing them directly onto a silicon wafer. This provided uniform distribution of the particles and high contrast against the background, facilitating accurate, automated, morphometric image analysis. The accuracy and precision of the new protocol were assessed by recovering and characterizing particles from wear tests of three types of polyethylene acetabular cups (no crosslinking and 5 Mrads and 7.5 Mrads of gamma irradiation crosslinking). The new method demonstrated important differences in the particle size distributions and morphologic parameters among the three types of polyethylene that could not be detected using prior isolation methods. The new protocol overcomes a number of limitations, such as loss of nanometer-sized particles and artifactual clumping, among others. The analysis of polyethylene wear particles produced in joint simulator wear tests of prosthetic joints is a key tool to identify the wear mechanisms that produce the particles and to predict and evaluate their effects on periprosthetic tissues.
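In the same spirit as the automated morphometric analysis above, the sketch below measures simple size and shape descriptors of bright particles on a uniform background with scikit-image; the image is synthetic, not a silicon-wafer micrograph.

```python
# A minimal sketch of automated particle morphometry on a high-contrast
# background; scikit-image is assumed and the image is synthetic.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(0)
image = rng.random((256, 256))
image[40:60, 40:70] += 1.0                   # fake bright "particles"
image[150:170, 180:200] += 1.0

binary = image > threshold_otsu(image)
labels = label(binary)
for region in regionprops(labels):
    # equivalent circular diameter and aspect ratio as simple shape descriptors
    ar = region.major_axis_length / max(region.minor_axis_length, 1e-9)
    print(f"area={region.area}px  ecd={region.equivalent_diameter:.1f}px  aspect={ar:.2f}")
```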
Neural network classification of sweet potato embryos
NASA Astrophysics Data System (ADS)
Molto, Enrique; Harrell, Roy C.
1993-05-01
Somatic embryogenesis is a process that allows for the in vitro propagation of thousands of plants in sub-liter size vessels and has been successfully applied to many significant species. The heterogeneity of maturity and quality of embryos produced with this technique requires sorting to obtain a uniform product. An automated harvester is being developed at the University of Florida to sort embryos in vitro at different stages of maturation in a suspension culture. The system utilizes machine vision to characterize embryo morphology and a fluidic based separation device to isolate embryos associated with a pre-defined, targeted morphology. Two different backpropagation neural networks (BNN) were used to classify embryos based on information extracted from the vision system. One network utilized geometric features such as embryo area, length, and symmetry as inputs. The alternative network utilized polar coordinates of an embryo's perimeter with respect to its centroid as inputs. The performances of both techniques were compared with each other and with an embryo classification method based on linear discriminant analysis (LDA). Similar results were obtained with all three techniques. Classification efficiency was improved by reducing the dimension of the feature vector through a forward stepwise analysis by LDA. In order to enhance the purity of the sample selected as harvestable, a reject-to-classify option was introduced in the model and analyzed. The best classifier performances (76% overall correct classifications, 75% harvestable objects properly classified, homogeneity improvement ratio 1.5) were obtained using 8 features in a BNN.
Liang, Shenxuan; Yin, Lei; Shengyang Yu, Kevin; Hofmann, Marie-Claude; Yu, Xiaozhong
2017-01-01
Bisphenol A (BPA), an endocrine-disrupting compound, was found to be a testicular toxicant in animal models. Bisphenol S (BPS), bisphenol AF (BPAF), and tetrabromobisphenol A (TBBPA) were recently introduced to the market as alternatives to BPA. However, toxicological data on these compounds in the male reproductive system are still limited so far. This study developed and validated an automated multi-parametric high-content analysis (HCA) using the C18-4 spermatogonial cell line as a model. We applied this validated HCA, covering nuclear morphology, DNA content, cell cycle progression, DNA synthesis, cytoskeleton integrity, and DNA damage responses, to characterize and compare the testicular toxicities of BPA and 3 selected commercially available BPA analogues, BPS, BPAF, and TBBPA. HCA revealed that BPAF and TBBPA exhibited higher spermatogonial toxicities as compared with BPA and BPS, including dose- and time-dependent alterations in nuclear morphology, cell cycle, DNA damage responses, and perturbation of the cytoskeleton. Our results demonstrated that this specific culture model together with HCA can be utilized for quantitative screening and discrimination of chemical-specific testicular toxicity in spermatogonial cells. It also provides a fast and cost-effective approach for the identification of environmental chemicals that could have detrimental effects on reproduction. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Abu, Arpah; Leow, Lee Kien; Ramli, Rosli; Omar, Hasmahzaiti
2016-12-22
Taxonomists frequently identify specimens from various populations based on morphological characteristics and molecular data. This study explores an alternative approach to the identification of the house shrew (Suncus murinus) using image analysis and machine learning. Thus, an automated identification system is developed to assist and simplify this task. In this study, seven shape-based descriptors, namely area, convex area, major axis length, minor axis length, perimeter, equivalent diameter and extent, are used as features to represent the digital image of the skull, consisting of dorsal, lateral and jaw views for each specimen. An Artificial Neural Network (ANN) is used as the classifier to classify the skulls of S. murinus by region (northern and southern populations of Peninsular Malaysia) and sex (adult male and female). Specimen classification using the training data set and identification using the testing data set were performed through two stages of ANNs. At present, the classifier achieves an accuracy of 100% in distinguishing the skull views. Classification and identification by region and sex attained accuracies of 72.5%, 87.5% and 80.0% for the dorsal, lateral, and jaw views, respectively. These results show that the shape features used are informative, differentiating specimens by region and sex with accuracies of 80% and above. Finally, an application was developed for use by the scientific community. This automated system demonstrates the practicability of computer-assisted systems in providing an alternative approach for quick and easy identification of unknown species.
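A minimal sketch of the descriptor-plus-ANN idea above, using scikit-learn's MLPClassifier as a stand-in network; the seven-feature vectors and the region/sex labels are fabricated for illustration.

```python
# A minimal sketch: train a small neural network on skull-shape descriptors,
# then identify new specimens. Data and labels are fabricated placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# columns: area, convex area, major axis, minor axis, perimeter, equiv. diameter, extent
X_train = rng.normal(size=(40, 7))
y_train = rng.choice(["north_male", "north_female", "south_male", "south_female"], 40)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)

X_new = rng.normal(size=(3, 7))              # "unknown" specimens
print(model.predict(X_new))
```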
Subtle In-Scanner Motion Biases Automated Measurement of Brain Anatomy From In Vivo MRI
Alexander-Bloch, Aaron; Clasen, Liv; Stockman, Michael; Ronan, Lisa; Lalonde, Francois; Giedd, Jay; Raznahan, Armin
2016-01-01
While the potential for small amounts of motion in functional magnetic resonance imaging (fMRI) scans to bias the results of functional neuroimaging studies is well appreciated, the impact of in-scanner motion on morphological analysis of structural MRI is relatively under-studied. Even among “good quality” structural scans, there may be systematic effects of motion on measures of brain morphometry. In the present study, the subjects’ tendency to move during fMRI scans, acquired in the same scanning sessions as their structural scans, yielded a reliable, continuous estimate of in-scanner motion. Using this approach within a sample of 127 children, adolescents, and young adults, significant relationships were found between this measure and estimates of cortical gray matter volume and mean curvature, as well as trend-level relationships with cortical thickness. Specifically, cortical volume and thickness decreased with greater motion, and mean curvature increased. These effects of subtle motion were anatomically heterogeneous, were present across different automated imaging pipelines, showed convergent validity with effects of frank motion assessed in a separate sample of 274 scans, and could be demonstrated in both pediatric and adult populations. Thus, using different motion assays in two large non-overlapping sets of structural MRI scans, convergent evidence showed that in-scanner motion—even at levels which do not manifest in visible motion artifact—can lead to systematic and regionally specific biases in anatomical estimation. These findings have special relevance to structural neuroimaging in developmental and clinical datasets, and inform ongoing efforts to optimize neuroanatomical analysis of existing and future structural MRI datasets in non-sedated humans. PMID:27004471
NASA Astrophysics Data System (ADS)
Chu, Yong; Chen, Ya-Fang; Su, Min-Ying; Nalcioglu, Orhan
2005-04-01
Image segmentation is an essential process for quantitative analysis. Segmentation of brain tissues in magnetic resonance (MR) images is very important for understanding the structural-functional relationship in various pathological conditions, such as dementia vs. normal brain aging. Different brain regions are responsible for certain functions and may have specific implications for diagnosis. Segmentation may facilitate the analysis of different brain regions to aid in early diagnosis. Region competition has recently been proposed as an effective method for image segmentation by minimizing a generalized Bayes/MDL criterion. However, it is sensitive to initial conditions - the "seeds" - so an optimal choice of "seeds" is necessary for accurate segmentation. In this paper, we present a new skeleton-based region competition algorithm for automated gray and white matter segmentation. Skeletons can be considered good "seed regions" since they provide morphological a priori information, thus guaranteeing a correct initial condition. Intensity gradient information is also added to the global energy function to achieve precise boundary localization. This algorithm was applied to perform gray and white matter segmentation using simulated MRI images from a realistic digital brain phantom. Nine different brain regions were manually outlined for evaluation of the performance in these separate regions. The results were compared to the gold-standard measure to calculate the true positive and true negative percentages. In general, this method worked well with 96% accuracy, although the performance varied in different regions. We conclude that skeleton-based region competition is an effective method for gray and white matter segmentation.
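The seeding idea can be sketched briefly: take a coarse tissue mask and use its morphological skeleton as the initial seed region for the subsequent region competition. scikit-image is assumed and the image is synthetic.

```python
# A minimal sketch of skeleton-based seeding for a region-growing/competition
# step; scikit-image is assumed, and the "MR slice" is synthetic.
import numpy as np
from skimage.morphology import skeletonize
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
slice_img = rng.normal(loc=0.3, scale=0.05, size=(128, 128))
slice_img[40:90, 30:100] = rng.normal(loc=0.7, scale=0.05, size=(50, 70))  # bright "tissue"

rough_mask = slice_img > threshold_otsu(slice_img)   # coarse tissue estimate
seeds = skeletonize(rough_mask)                      # medial, morphology-aware seeds
print("seed pixels:", int(seeds.sum()), "of", int(rough_mask.sum()), "mask pixels")
```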
Inter-ictal spike detection using a database of smart templates.
Lodder, Shaun S; Askamp, Jessica; van Putten, Michel J A M
2013-12-01
Visual analysis of EEG is time consuming and suffers from inter-observer variability. Assisted automated analysis helps by summarizing key aspects for the reviewer and providing consistent feedback. Our objective is to design an accurate and robust system for the detection of inter-ictal epileptiform discharges (IEDs) in scalp EEG. IED Templates are extracted from the raw data of an EEG training set. By construction, the templates are given the ability to learn by searching for other IEDs within the training set using a time-shifted correlation. True and false detections are remembered and classifiers are trained for improving future predictions. During detection, trained templates search for IEDs in the new EEG. Overlapping detections from all templates are grouped and form one IED. Certainty values are added based on the reliability of the templates involved. For evaluation, 2160 templates were used on an evaluation dataset of 15 continuous recordings containing 241 IEDs (0.79/min). Sensitivities up to 0.99 (7.24fp/min) were reached. To reduce false detections, higher certainty thresholds led to a mean sensitivity of 0.90 with 2.36fp/min. By using many templates, this technique is less vulnerable to variations in spike morphology. A certainty value for each detection allows the system to present findings in a more efficient manner and simplifies the review process. Automated spike detection can assist in visual interpretation of the EEG which may lead to faster review times. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
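A minimal sketch of the time-shifted correlation at the heart of the template search above; the spike-like template, the synthetic EEG trace and the 0.8 correlation threshold are illustrative assumptions.

```python
# A minimal sketch of template-based IED detection via normalized correlation;
# template, trace and threshold are invented for illustration.
import numpy as np

def sliding_corr(signal, template):
    """Pearson correlation of the template with every window of the signal."""
    m = len(template)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty(len(signal) - m + 1)
    for i in range(out.size):
        win = signal[i:i + m]
        win = (win - win.mean()) / (win.std() + 1e-12)
        out[i] = np.dot(win, tpl) / m
    return out

fs = 256
t = np.arange(0, 1, 1 / fs)
template = -np.exp(-((t[:32] - 0.06) ** 2) / 2e-4)      # crude spike-like shape
eeg = np.random.randn(5 * fs) * 0.2
eeg[400:432] += template * 3                            # embed one "IED"

corr = sliding_corr(eeg, template)
detections = np.where(corr > 0.8)[0]
print("candidate IED onsets (samples):", detections[:5])
```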
Shayanfar, Noushin; Tobler, Ulrich; von Eckardstein, Arnold; Bestmann, Lukas
2007-01-01
Automated analysis of insoluble urine components can reduce the workload of conventional microscopic examination of urine sediment and is possibly helpful for standardization. We compared the diagnostic performance of two automated urine sediment analyzers and combined dipstick/automated urine analysis with that of the traditional dipstick/microscopy algorithm. A total of 332 specimens were collected and analyzed for insoluble urine components by microscopy and automated analyzers, namely the Iris iQ200 (Iris Diagnostics) and the UF-100 flow cytometer (Sysmex). The coefficients of variation for day-to-day quality control of the iQ200 and UF-100 analyzers were 6.5% and 5.5%, respectively, for red blood cells. We reached accuracy ranging from 68% (bacteria) to 97% (yeast) for the iQ200 and from 42% (bacteria) to 93% (yeast) for the UF-100. The combination of dipstick and automated urine sediment analysis increased the sensitivity of screening to approximately 98%. We conclude that automated urine sediment analysis is sufficiently precise and improves the workflow in a routine laboratory. In addition, it allows sediment analysis of all urine samples and thereby helps to detect pathological samples that would have been missed in the conventional two-step procedure according to the European guidelines. Although it is not a substitute for microscopic sediment examination, it can, when combined with dipstick testing, reduce the number of specimens submitted to microscopy. Visual microscopy is still required for some samples, namely, dysmorphic erythrocytes, yeasts, Trichomonas, oval fat bodies, differentiation of casts and certain crystals.
ERIC Educational Resources Information Center
Hsu, Chien-Ju; Thompson, Cynthia K.
2018-01-01
Purpose: The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals…
Analysis of Trinity Power Metrics for Automated Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalenko, Ashley Christine
This is a presentation from Los Alamos National Laboratory (LANL) about the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for analysis, tools used, the methodology, work performed during the summer, and future work planned.
Åkerfelt, Malin; Bayramoglu, Neslihan; Robinson, Sean; Toriseva, Mervi; Schukov, Hannu-Pekka; Härmä, Ville; Virtanen, Johannes; Sormunen, Raija; Kaakinen, Mika; Kannala, Juho; Eklund, Lauri; Heikkilä, Janne; Nees, Matthias
2015-01-01
Cancer-associated fibroblasts (CAFs) constitute an important part of the tumor microenvironment and promote invasion via paracrine functions and physical impact on the tumor. Although the importance of including CAFs into three-dimensional (3D) cell cultures has been acknowledged, computational support for quantitative live-cell measurements of complex cell cultures has been lacking. Here, we have developed a novel automated pipeline to model tumor-stroma interplay, track motility and quantify morphological changes of 3D co-cultures, in real-time live-cell settings. The platform consists of microtissues from prostate cancer cells, combined with CAFs in extracellular matrix that allows biochemical perturbation. Tracking of fibroblast dynamics revealed that CAFs guided the way for tumor cells to invade and increased the growth and invasiveness of tumor organoids. We utilized the platform to determine the efficacy of inhibitors in prostate cancer and the associated tumor microenvironment as a functional unit. Interestingly, certain inhibitors selectively disrupted tumor-CAF interactions, e.g. focal adhesion kinase (FAK) inhibitors specifically blocked tumor growth and invasion concurrently with fibroblast spreading and motility. This complex phenotype was not detected in other standard in vitro models. These results highlight the advantage of our approach, which recapitulates tumor histology and can significantly improve cancer target validation in vitro. PMID:26375443
Mathieson, Sean R; Livingstone, Vicki; Low, Evonne; Pressler, Ronit; Rennie, Janet M; Boylan, Geraldine B
2016-10-01
Phenobarbital increases electroclinical uncoupling and our preliminary observations suggest it may also affect electrographic seizure morphology. This may alter the performance of a novel seizure detection algorithm (SDA) developed by our group. The objectives of this study were to compare the morphology of seizures before and after phenobarbital administration in neonates and to determine the effect of any changes on automated seizure detection rates. The EEGs of 18 term neonates with seizures both pre- and post-phenobarbital (524 seizures) administration were studied. Ten features of seizures were manually quantified and summary measures for each neonate were statistically compared between pre- and post-phenobarbital seizures. SDA seizure detection rates were also compared. Post-phenobarbital seizures showed significantly lower amplitude (p<0.001) and involved fewer EEG channels at the peak of seizure (p<0.05). No other features or SDA detection rates showed a statistical difference. These findings show that phenobarbital reduces both the amplitude and propagation of seizures which may help to explain electroclinical uncoupling of seizures. The seizure detection rate of the algorithm was unaffected by these changes. The results suggest that users should not need to adjust the SDA sensitivity threshold after phenobarbital administration. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Automation of the longwall mining system
NASA Technical Reports Server (NTRS)
Zimmerman, W.; Aster, R. W.; Harris, J.; High, J.
1982-01-01
Cost effective, safe, and technologically sound applications of automation technology to underground coal mining were identified. The longwall analysis commenced with a general search for government and industry experience of mining automation technology. A brief industry survey was conducted to identify longwall operational, safety, and design problems. The prime automation candidates resulting from the industry experience and survey were: (1) the shearer operation, (2) shield and conveyor pan line advance, (3) a management information system to allow improved mine logistics support, and (4) component fault isolation and diagnostics to reduce untimely maintenance delays. A system network analysis indicated that a 40% improvement in productivity was feasible if system delays associated with all of the above four areas were removed. A technology assessment and conceptual system design of each of the four automation candidate areas showed that state of the art digital computer, servomechanism, and actuator technologies could be applied to automate the longwall system.
Automated Tetrahedral Mesh Generation for CFD Analysis of Aircraft in Conceptual Design
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu; Campbell, Richard L.
2014-01-01
The paper introduces an automation process of generating a tetrahedral mesh for computational fluid dynamics (CFD) analysis of aircraft configurations in early conceptual design. The method was developed for CFD-based sonic boom analysis of supersonic configurations, but can be applied to aerodynamic analysis of aircraft configurations in any flight regime.
2014-07-01
Submoderating factors were examined and reported for human-related (i.e., age, cognitive factors, emotive factors) and automation-related (i.e., features and capabilities) effects. Analyses were also conducted for type of automated aid: cognitive, control, and perceptual automation aids. Perceptual aids assist the operator or user by providing warnings or by supporting pattern recognition.
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
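A minimal sketch of a discrete wavelet transform / multiresolution feature in the spirit of the analysis above, computing detail-band energies of an image with PyWavelets; the image is synthetic rather than an ACR phantom exposure.

```python
# A minimal sketch of a 2D multiresolution decomposition and a simple
# detail-energy feature; PyWavelets is assumed, the image is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(3)
phantom = rng.normal(size=(256, 256))
phantom[100:110, 100:110] += 4.0             # a fake "speck group"

coeffs = pywt.wavedec2(phantom, wavelet="db2", level=3)
approx, details = coeffs[0], coeffs[1:]      # details run from coarsest to finest
for depth, (cH, cV, cD) in enumerate(details, start=1):
    energy = float(np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2))
    print(f"detail band {depth}: energy = {energy:.1f}")
```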
Lerch, Oliver; Temme, Oliver; Daldrup, Thomas
2014-07-01
The analysis of opioids, cocaine, and metabolites from blood serum is a routine task in forensic laboratories. Commonly, the employed methods include many manual or partly automated steps like protein precipitation, dilution, solid phase extraction, evaporation, and derivatization preceding a gas chromatography (GC)/mass spectrometry (MS) or liquid chromatography (LC)/MS analysis. In this study, a comprehensively automated method was developed from a validated, partly automated routine method. This was possible by replicating method parameters on the automated system. Only marginal optimization of parameters was necessary. The automation relying on an x-y-z robot after manual protein precipitation includes the solid phase extraction, evaporation of the eluate, derivatization (silylation with N-methyl-N-trimethylsilyltrifluoroacetamide, MSTFA), and injection into a GC/MS. A quantitative analysis of almost 170 authentic serum samples and more than 50 authentic samples of other matrices like urine, different tissues, and heart blood on cocaine, benzoylecgonine, methadone, morphine, codeine, 6-monoacetylmorphine, dihydrocodeine, and 7-aminoflunitrazepam was conducted with both methods proving that the analytical results are equivalent even near the limits of quantification (low ng/ml range). To our best knowledge, this application is the first one reported in the literature employing this sample preparation system.
Ma, Junlong; Wang, Chengbin; Yue, Jiaxin; Li, Mianyang; Zhang, Hongrui; Ma, Xiaojing; Li, Xincui; Xue, Dandan; Qing, Xiaoyan; Wang, Shengjiang; Xiang, Daijun; Cong, Yulong
2013-01-01
Several automated urine sediment analyzers have been introduced to clinical laboratories. Automated microscopic pattern recognition is a new technique for urine particle analysis. We evaluated the analytical and diagnostic performance of the UriSed automated microscopic analyzer and compared it with manual microscopy for urine sediment analysis. Precision, linearity, carry-over, and method comparison were carried out. A total of 600 urine samples sent for urinalysis were assessed using the UriSed automated microscopic analyzer and manual microscopy. Within-run and between-run precision of the UriSed for red blood cells (RBC) and white blood cells (WBC) were acceptable at all levels (CV < 20%). Within-run and between-run imprecision of the UriSed for casts, squamous epithelial cells (EPI), and bacteria (BAC) was good at the middle and high levels (CV < 20%). The linearity analysis revealed substantial agreement between the measured value and the theoretical value of the UriSed for RBC, WBC, casts, EPI, and BAC (r > 0.95). There was no carry-over. Sensitivities and specificities for RBC, WBC, and squamous epithelial cells were more than 80% in this study. There is substantial agreement between the UriSed automated microscopic analyzer and manual microscopy. The UriSed also provides a rapid turnaround time.
Automated LSA Assessment of Summaries in Distance Education: Some Variables to Be Considered
ERIC Educational Resources Information Center
Jorge-Botana, Guillermo; Luzón, José M.; Gómez-Veiga, Isabel; Martín-Cordero, Jesús I.
2015-01-01
A latent semantic analysis-based automated summary assessment is described; this automated system is applied to a real learning-from-text task in a Distance Education context. We comment on the use of automated content, plagiarism, and text coherence measures, as well as average word weights, and their impact on predicting human judges' summary scores. A…
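A minimal LSA-style sketch of scoring a summary against a reference text (TF-IDF, truncated SVD, cosine similarity) with scikit-learn; the texts, component count and resulting score are purely illustrative.

```python
# A minimal LSA-style sketch: build a reduced semantic space and score a
# summary by its cosine similarity to a reference; all texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Latent semantic analysis builds a reduced vector space from term counts.",
    "Summaries can be scored by their similarity to the source text.",
    "Distance education platforms benefit from automated assessment.",
]
reference = "Automated assessment scores summaries by semantic similarity to the source."
summary = "The system grades a summary by how semantically close it is to the text."

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus + [reference, summary])
doc_vecs = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
ref_vec, sum_vec = doc_vecs[-2], doc_vecs[-1]
score = cosine_similarity([ref_vec], [sum_vec])[0, 0]
print(f"LSA similarity score: {score:.2f}")
```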
Schaefer, Kristin E; Chen, Jessie Y C; Szalma, James L; Hancock, P A
2016-05-01
We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. Trust is increasingly important in the growing need for synergistic human-machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human-robot interaction to include all of automation interaction. We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each of which represented medium effects. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors proved insufficient in number to calculate at this time. Findings provide a quantitative representation of factors influencing the development of trust in automation as well as identify additional areas of needed empirical research. This work has important implications to the enhancement of current and future human-automation interaction, especially in high-risk or extreme performance environments. © 2016, Human Factors and Ergonomics Society.
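For reference, the standardized effect size reported above (Hedges' g) is the pooled-SD-scaled mean difference with a small-sample correction; the sketch below uses invented group statistics, not values from the meta-analysis.

```python
# A worked sketch of Hedges' g with the small-sample correction J.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                      # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction J
    return correction * d

# e.g., hypothetical trust ratings with vs. without a transparent automation aid
print(round(hedges_g(m1=5.6, sd1=1.1, n1=30, m2=5.1, sd2=1.2, n2=30), 2))
```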
Digital pathology: elementary, rapid and reliable automated image analysis.
Bouzin, Caroline; Saini, Monika L; Khaing, Kyi-Kyi; Ambroise, Jérôme; Marbaix, Etienne; Grégoire, Vincent; Bol, Vanesa
2016-05-01
Slide digitalization has brought pathology to a new era, including powerful image analysis possibilities. However, while being a powerful prognostic tool, immunostaining automated analysis on digital images is still not implemented worldwide in routine clinical practice. Digitalized biopsy sections from two independent cohorts of patients, immunostained for membrane or nuclear markers, were quantified with two automated methods. The first was based on stained cell counting through tissue segmentation, while the second relied upon stained area proportion within tissue sections. Different steps of image preparation, such as automated tissue detection, folds exclusion and scanning magnification, were also assessed and validated. Quantification of either stained cells or the stained area was found to be correlated highly for all tested markers. Both methods were also correlated with visual scoring performed by a pathologist. For an equivalent reliability, quantification of the stained area is, however, faster and easier to fine-tune and is therefore more compatible with time constraints for prognosis. This work provides an incentive for the implementation of automated immunostaining analysis with a stained area method in routine laboratory practice. © 2015 John Wiley & Sons Ltd.
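A minimal sketch of the stained-area-proportion approach described above: separate a DAB-like signal by colour deconvolution and report its fraction of the tissue area. scikit-image is assumed, and both the image and the threshold are placeholders rather than the authors' calibrated settings.

```python
# A minimal sketch: colour-deconvolve an RGB section image and compute the
# stained area fraction within a crude tissue mask; values are placeholders.
import numpy as np
from skimage.color import rgb2hed

rgb = np.random.rand(128, 128, 3)                 # stand-in for a scanned section
hed = rgb2hed(rgb)                                # haematoxylin, eosin, DAB channels
dab = hed[:, :, 2]

tissue_mask = rgb.mean(axis=2) < 0.95             # crude background exclusion
stained = dab > 0.02                              # assumed DAB threshold
fraction = (stained & tissue_mask).sum() / max(tissue_mask.sum(), 1)
print(f"stained area fraction: {100 * fraction:.1f}%")
```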
Hormann, Wymke; Hahn, Melanie; Gerlach, Stefan; Hochstrate, Nicola; Affeldt, Kai; Giesen, Joyce; Fechner, Kai; Damoiseaux, Jan G M C
2017-11-27
Antibodies directed against dsDNA are a highly specific diagnostic marker for the presence of systemic lupus erythematosus and of particular importance in its diagnosis. To assess anti-dsDNA antibodies, the Crithidia luciliae-based indirect immunofluorescence test (CLIFT) is one of the assays considered to be the best choice. To overcome the drawback of subjective result interpretation that inheres in indirect immunofluorescence assays in general, automated systems have been introduced into the market in recent years. Among these systems is the EUROPattern Suite, an advanced automated fluorescence microscope equipped with different software packages, capable of automated pattern interpretation and result suggestion for ANA, ANCA and CLIFT analysis. We analyzed the performance of the EUROPattern Suite with its automated fluorescence interpretation for CLIFT in a routine setting, reflecting the everyday life of a diagnostic laboratory. Three hundred and twelve consecutive samples were collected, sent to the Central Diagnostic Laboratory of the Maastricht University Medical Centre with a request for anti-dsDNA analysis over a period of 7 months. Agreement between EUROPattern assay analysis and the visual read was 93.3%. Sensitivity and specificity were 94.1% and 93.2%, respectively. The EUROPattern Suite performed reliably and greatly supported result interpretation. Automated image acquisition is readily performed and automated image classification gives a reliable recommendation for assay evaluation to the operator. The EUROPattern Suite optimizes workflow and contributes to standardization between different operators or laboratories.
Cost-benefit analysis on deployment of automated highway systems
DOT National Transportation Integrated Search
1997-01-01
The optimal ranges of traffic flow and capacity will be determined for selected scenarios, in which different proportions of automated and conventional traffic will operate simultaneously in an automated highway system (AHS). It is found that there w...
Dual Anterograde and Retrograde Viral Tracing of Reciprocal Connectivity.
Haberl, Matthias G; Ginger, Melanie; Frick, Andreas
2017-01-01
Current large-scale approaches in neuroscience aim to unravel the complete connectivity map of specific neuronal circuits, or even the entire brain. This emerging research discipline has been termed connectomics. Recombinant glycoprotein-deleted rabies virus (RABV ∆G) has become an important tool for the investigation of neuronal connectivity in the brains of a variety of species. Neuronal infection with even a single RABV ∆G particle results in high-level transgene expression, revealing the fine-detailed morphology of all neuronal features-including dendritic spines, axonal processes, and boutons-on a brain-wide scale. This labeling is eminently suitable for subsequent post-hoc morphological analysis, such as semiautomated reconstruction in 3D. Here we describe the use of a recently developed anterograde RABV ∆G variant together with a retrograde RABV ∆G for the investigation of projections both to, and from, a particular brain region. In addition to the automated reconstruction of a dendritic tree, we also give as an example the volume measurements of axonal boutons following RABV ∆G-mediated fluorescent marker expression. In conclusion RABV ∆G variants expressing a combination of markers and/or tools for stimulating/monitoring neuronal activity, used together with genetic or behavioral animal models, promise important insights in the structure-function relationship of neural circuits.
Increased cerebellar gray matter volume in head chefs
Sarica, Alessia; Martino, Iolanda; Fabbricatore, Carmelo; Tomaiuolo, Francesco; Rocca, Federico; Caracciolo, Manuela; Quattrone, Aldo
2017-01-01
Objective: Chefs exert expert motor and cognitive performances on a daily basis. Neuroimaging has clearly shown that long-term skill learning (e.g., in athletes, musicians, chess players or sommeliers) induces plastic changes in the brain, thus enabling tasks to be performed faster and more accurately. How a chef's expertise is embodied in a specific neural network has never been investigated. Methods: Eleven Italian head chefs with long-term brigade management expertise and 11 demographically- and psychologically-matched non-experts underwent morphological evaluations. Results: Voxel-based analysis performed with SUIT, as well as automated volumetric measurement assessed with Freesurfer, revealed increased gray matter volume in the cerebellum in chefs compared to non-experts. The most significant changes were detected in the anterior vermis and the posterior cerebellar lobule. The size of the brigade staff and higher performance in the Tower of London test correlated with these specific gray matter increases, respectively. Conclusions: We found that chefs are characterized by an anatomical variability involving the cerebellum. This confirms the role of this region in the development of similar expert brains characterized by learning dexterous skills, such as those of pianists, rock climbers and basketball players. However, the nature of the cellular events underlying the detected morphological differences remains an open question. PMID:28182712
Optical coherence tomography angiography in the management of age-related macular degeneration.
Schneider, Eric W; Fowler, Samuel C
2018-05-01
Optical coherence tomography angiography (OCT-A) provides rapid, flow-based imaging of the retinal and choroidal vasculature in a noninvasive manner. This review contrasts this novel technique with conventional angiography and discusses its current uses and limitations in the management of age-related macular degeneration (AMD). Initial work with OCT-A has focused on its ability to identify choriocapillaris flow alterations in dry AMD and to sensitively detect choroidal neovascular membranes (CNVs) in neovascular AMD. Reduced choriocapillaris flow beyond the borders of geographic atrophy seen on OCT-A suggests a primary vascular cause in geographic atrophy. Longitudinal OCT-A analysis of CNV morphology has demonstrated the transition from an immature to mature CNV phenotype following treatment. Current clinical applications of OCT-A include identification of asymptomatic CNV and monitoring for CNV development in the setting of an acquired vitelliform lesion. OCT-A remains a promising diagnostic tool but one still very much in evolution. Larger studies will be needed to more accurately describe its sensitivity and specificity for CNV detection and to better characterize longitudinal CNV morphologic changes. Anticipated hardware and software updates including swept-source light sources, automated montaging, and manual adjustment of interscan timing should enhance the capabilities of OCT-A in the management of AMD.
Polyphasic approach for differentiating Penicillium nordicum from Penicillium verrucosum.
Berni, E; Degola, F; Cacchioli, C; Restivo, F M; Spotti, E
2011-04-01
The aim of this research was to use a polyphasic approach to differentiate Penicillium verrucosum from Penicillium nordicum, to compare different techniques, and to select the most suitable for industrial use. In particular, (1) a cultural technique with two substrates selective for these species; (2) a molecular diagnostic test recently set up and a RAPD procedure derived from this assay; (3) an RP-HPLC analysis to quantify ochratoxin A (OTA) production and (4) an automated system based on fungal carbon source utilisation (Biolog Microstation™) were used. Thirty strains isolated from meat products and originally identified as P. verrucosum by morphological methods were re-examined by newer cultural tests and by PCR methods. All were found to belong to P. nordicum. Their biochemical and chemical characterisation supported the results obtained by cultural and molecular techniques and showed the varied ability in P. verrucosum and P. nordicum to metabolise carbon-based sources and to produce OTA at different concentrations, respectively.
Beef quality grading using machine vision
NASA Astrophysics Data System (ADS)
Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha
2000-12-01
A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
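The compactness-driven segmentation heuristic described above can be sketched as follows: erode the ribeye mask iteratively and keep the iteration whose largest region is most compact relative to its convex hull (solidity). scikit-image is assumed and the starting mask is synthetic rather than a real ribeye image.

```python
# A minimal sketch of compactness-guided erosion for isolating the l.d. muscle;
# scikit-image is assumed and the mask is a synthetic stand-in.
import numpy as np
from skimage.morphology import binary_erosion, disk
from skimage.measure import label, regionprops

rng = np.random.default_rng(4)
mask = np.zeros((200, 200), bool)
mask[60:160, 50:170] = True                        # "l.d. muscle"
mask[90:110, 160:195] = True                       # attached lean/fat appendage
mask |= rng.random(mask.shape) > 0.995             # speckle noise

best_iter, best_solidity, current = 0, 0.0, mask
for i in range(1, 15):
    current = binary_erosion(current, disk(1))
    regions = regionprops(label(current))
    if not regions:
        break
    largest = max(regions, key=lambda r: r.area)
    if largest.solidity > best_solidity:           # solidity = area / convex hull area
        best_iter, best_solidity = i, largest.solidity
print(f"most compact l.d. candidate after {best_iter} erosions (solidity={best_solidity:.2f})")
```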
Automatic detection of spermatozoa for laser capture microdissection.
Vandewoestyne, Mado; Van Hoofstat, David; Van Nieuwerburgh, Filip; Deforce, Dieter
2009-03-01
In sexual assault crimes, differential extraction of spermatozoa from vaginal swab smears is often ineffective, especially when only a few spermatozoa are present in an overwhelming amount of epithelial cells. Laser capture microdissection (LCM) enables the precise separation of spermatozoa and epithelial cells. However, standard sperm-staining techniques are non-specific and rely on sperm morphology for identification. Moreover, manual screening of the microscope slides is time-consuming and labor-intensive. Here, we describe an automated screening method to detect spermatozoa stained with Sperm HY-LITER. Different ratios of spermatozoa and epithelial cells were used to assess the automatic detection method. In addition, real postcoital samples were also screened. Detected spermatozoa were isolated using LCM and DNA analysis was performed. Robust DNA profiles without allelic dropout could be obtained from as little as 30 spermatozoa recovered from postcoital samples, showing that the staining had no significant influence on DNA recovery.
Distinct single-cell morphological dynamics under beta-lactam antibiotics
Yao, Zhizhong; Kahne, Daniel; Kishony, Roy
2012-01-01
The bacterial cell wall is conserved in prokaryotes, stabilizing cells against osmotic stress. Beta-lactams inhibit cell wall synthesis and induce lysis through a bulge-mediated mechanism; however, little is known about the formation dynamics and stability of these bulges. To capture processes of different timescales, we developed an imaging platform combining automated image analysis with live cell microscopy at high time resolution. Beta-lactam killing of Escherichia coli cells proceeded through four stages: elongation, bulge formation, bulge stagnation and lysis. Both the cell wall and outer membrane (OM) affect the observed dynamics; damaging the cell wall with different beta-lactams and compromising OM integrity cause different modes and rates of lysis. Our results show that the bulge formation dynamics is determined by how the cell wall is perturbed. The OM plays an independent role in stabilizing the bulge once it is formed. The stabilized bulge delays lysis, and allows recovery upon drug removal. PMID:23103254
NASA Astrophysics Data System (ADS)
Daher, H.; Gaceb, D.; Eglin, V.; Bres, S.; Vincent, N.
2012-01-01
We present in this paper a feature selection and weighting method for medieval handwriting images that relies on codebooks of shapes of small strokes of characters (graphemes derived from the decomposition of the manuscripts). These codebooks are important to simplify the automation of the analysis, the transcription of the manuscripts and the recognition of styles or writers. Our approach provides precise feature weighting by genetic algorithms and a high-performance methodology for categorizing grapheme shapes into codebooks by graph coloring; these codebooks are in turn applied to CBIR (Content-Based Image Retrieval) in a mixed handwriting database containing pages from different writers, historical periods and quality levels. We show how coupling these two mechanisms, feature weighting and grapheme classification, can offer better separation of the forms to be categorized by exploiting their grapho-morphological, density and dominant-orientation characteristics.
Automated phenotyping of permanent crops
NASA Astrophysics Data System (ADS)
McPeek, K. Thomas; Steddom, Karl; Zamudio, Joseph; Pant, Paras; Mullenbach, Tyler
2017-05-01
AGERpoint is defining a new technology space for the growers' industry by introducing novel applications for sensor technology and data analysis to growers of permanent crops. Serving data to a state-of-the-art analytics engine from a cutting edge sensor platform, a new paradigm in precision agriculture is being developed that allows growers to understand the unique needs of each tree, bush or vine in their operation. Autonomous aerial and terrestrial vehicles equipped with multiple varieties of remote sensing technologies give AGERpoint the ability to measure key morphological and spectral features of permanent crops. This work demonstrates how such phenotypic measurements combined with machine learning algorithms can be used to determine the variety of crops (e.g., almond and pecan trees). This phenotypic and varietal information represents the first step in enabling growers with the ability to tailor their management practices to individual plants and maximize their economic productivity.
Automation & robotics for future Mars exploration
NASA Astrophysics Data System (ADS)
Schulte, W.; von Richter, A.; Bertrand, R.
2003-04-01
Automation and Robotics (A&R) are currently considered a key technology for Mars exploration. Initiatives in this field aim at developing new A&R systems and technologies for planetary surface exploration. Kayser-Threde led the study AROMA (Automation & Robotics for Human Mars Exploration) under ESA contract in order to define a reference architecture of A&R elements in support of a human Mars exploration program. One of the goals was to define new developments and to maintain the competitiveness of European industry within this field. We present a summary of the A&R study with respect to a particular system: the Autonomous Research Island (ARI). In the Mars exploration scenario, a robotic outpost system initially lands at pre-selected sites in order to search for life forms and water and to analyze the surface, geology and atmosphere. A&R systems, i.e. rovers and autonomous instrument packages, perform a number of missions with scientific and technology development objectives on the surface of Mars as part of preparations for a human exploration mission. In the Robotic Outpost Phase, ARI is conceived as an automated lander which can perform in-situ analysis. It consists of a service module and a micro-rover system for local investigations. Such a system is already under investigation and development in other TRP activities. The micro-rover system provides local mobility for in-situ scientific investigations at a given landing or deployment site. In the long run, ARI will also support human Mars missions. An astronaut crew would travel larger distances in a pressurized rover on Mars. Whenever interesting features on the surface are identified, the crew would interrupt the travel and perform local investigations. In order to save crew time, ARI could be deployed by the astronauts to perform time-consuming investigations such as in-situ geochemical analysis of rocks and soil. Later, the crew could recover the research island for refurbishment and deployment at another site. In the frame of near-term Mars exploration, a dedicated exobiology mission is envisaged. Scientific and technical studies for a facility to detect the evidence of past or present life have been carried out under ESA contract. Mars soil and rock samples are to be analyzed for their morphology and their organic and inorganic composition using a suite of scientific instruments. Robotic devices, e.g. for the acquisition, handling and onboard processing of Mars sample material retrieved from different locations, and surface mobility are important elements in a fully automated mission. The necessary robotic elements have been identified in past studies. Their realization can partly be based on heritage from existing space hardware, but will require dedicated development effort.
Research of the application of the new communication technologies for distribution automation
NASA Astrophysics Data System (ADS)
Zhong, Guoxin; Wang, Hao
2018-03-01
Communication networks are a key factor in distribution automation. In recent years, new communication technologies for distribution automation have developed rapidly in China. This paper introduces the traditional communication technologies used in distribution automation and analyses their shortcomings. It then gives a detailed analysis of new communication technologies for distribution automation, both wired and wireless, and offers suggestions for their application.
Byrne, M D; Jordan, T R; Welle, T
2013-01-01
The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 "false negative" patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Automated data collection for analysis of nursing-specific phenomena is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare.
Affordable Imaging Lab for Noninvasive Analysis of Biomass and Early Vigour in Cereal Crops
2018-01-01
Plant phenotyping by imaging allows automated analysis of plants for various morphological and physiological traits. In this work, we developed a low-cost RGB imaging phenotyping lab (LCP lab) for low-throughput imaging and analysis using affordable imaging equipment and freely available software. The LCP lab, comprising an RGB imaging and analysis pipeline, was set up and demonstrated with early vigour analysis in wheat. Using this lab, a few hundred pots can be photographed in a day, and the pots are tracked with QR codes. The software pipeline for both imaging and analysis is built from freely available software. The LCP lab was evaluated for early vigour analysis of five wheat cultivars. A high coefficient of determination (R2 = 0.94) was obtained between the dry weight and the projected leaf area of 20-day-old wheat plants, and an R2 of 0.9 for the relative growth rate between 10 and 20 days of plant growth. A detailed description of setting up such a lab is provided together with custom scripts built for imaging and analysis. The LCP lab is an affordable alternative for analysis of cereal crops when access to a high-throughput phenotyping facility is unavailable or when the experiments require growing plants in highly controlled climate chambers. The protocols described in this work are useful for building an affordable imaging system for small-scale research projects and for education. PMID:29850536
Clinical Laboratory Automation: A Case Study
Archetti, Claudia; Montanelli, Alessandro; Finazzi, Dario; Caimi, Luigi; Garrafa, Emirena
2017-01-01
Background: This paper presents a case study of an automated clinical laboratory in a large urban academic teaching hospital in the north of Italy, the Spedali Civili in Brescia, where four laboratories were merged into a single laboratory through the introduction of laboratory automation. Materials and Methods: The analysis compares the pre-automation situation and the new setting from a cost perspective, considering direct and indirect costs. It also presents an analysis of the turnaround time (TAT). The study considers equipment, staff and indirect costs. Results: The introduction of automation led to a slight increase in equipment costs, which is highly compensated by a remarkable decrease in staff costs. Consequently, total costs decreased by 12.55%. The analysis of the TAT shows an improvement for non-emergency exams, while emergency exams are still validated within the maximum time imposed by the hospital. Conclusions: The strategy adopted by the management, which was based on re-using the available equipment and staff when merging the pre-existing laboratories, has reached its goal: introducing automation while minimizing costs. Significance for public health: Automation is an emerging trend in modern clinical laboratories, with a positive impact on the service level to patients and on staff safety, as shown by different studies. It allows process standardization which, in turn, decreases the frequency of outliers and errors, and it induces faster processing times, thus improving the service level. In addition, automation decreases staff exposure to accidents, strongly improving staff safety. In this study, we analyse a further potential benefit of automation, namely economic convenience. We study the case of the automated laboratory of one of the biggest hospitals in Italy and compare the costs of the pre- and post-automation situations. Introducing automation led to a cost decrease without affecting the service level to patients. This was a key goal of the hospital which, like public health entities in general, is constantly struggling with budget constraints. PMID:28660178
Tests of Spectral Cloud Classification Using DMSP Fine Mode Satellite Data.
1980-06-02
Fourier spectral analysis was identified as the most promising of the processing techniques of potential value for upgrading automated processing of DMSP fine-mode satellite imagery; the resolution of these measurements on the Earth's surface is 0.3 n mi. Related reports include Pickett, R.M., and Blackman, E.S. (1976), Automated Processing of Satellite Imagery Data at the Air Force Global Weather Central, and Pickett, R.M. (1977), Automated Processing of Satellite Imagery Data at the Air Force Global Weather Central: Demonstrations of Spectral Analysis.
Microbiology of beef carcasses before and after slaughterline automation.
Whelehan, O. P.; Hudson, W. R.; Roberts, T. A.
1986-01-01
The bacterial status of beef carcasses at a commercial abattoir was monitored before and after slaughterline automation. Bacterial counts did not differ significantly overall (P > 0.05) between the original manual line and the automated line for either morning or afternoon slaughter. On the manual line, counts in the morning were lower than those from carcasses slaughtered in the afternoon, but on the automated line there was no difference between morning and afternoon counts. Due to a highly significant line × sample site interaction for both morning and afternoon counts, overall differences among sample sites were not found by analysis of variance. However, principal components analysis revealed a significant shift in bacterial contamination among some sites due to slaughterline changes. The incidence of Enterobacteriaceae increased marginally following automation. PMID:3701039
Application of automation and information systems to forensic genetic specimen processing.
Leclair, Benoît; Scholl, Tom
2005-03-01
During the last 10 years, the introduction of PCR-based DNA typing technologies in forensic applications has been highly successful. This technology has become pervasive throughout forensic laboratories and it continues to grow in prevalence. For many criminal cases, it provides the most probative evidence. Criminal genotype data banking and victim identification initiatives that follow mass-fatality incidents have benefited the most from the introduction of automation for sample processing and data analysis. Attributes of offender specimens including large numbers, high quality and identical collection and processing are ideal for the application of laboratory automation. The magnitude of kinship analysis required by mass-fatality incidents necessitates the application of computing solutions to automate the task. More recently, the development activities of many forensic laboratories are focused on leveraging experience from these two applications to casework sample processing. The trend toward increased prevalence of forensic genetic analysis will continue to drive additional innovations in high-throughput laboratory automation and information systems.
Benefits of utilizing CellProfiler as a characterization tool for U–10Mo nuclear fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; Douglas, J.; Patterson, L.
2015-07-15
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person to person or sample to sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium–molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to 'pass' or 'fail' an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries. Highlights: • A technique is developed to score U–10Mo FIB-SEM image quality using CellProfiler. • The pass/fail metric is based on image illumination, focus, and area scratched. • Automated image analysis is performed in pipeline fashion to characterize images. • Fission gas void, interaction layer, and grain boundary coverage data are extracted. • Preliminary characterization results demonstrate the consistency of the algorithm.
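A minimal sketch of such a pass/fail quality gate is shown below, assuming a grayscale image as a NumPy array; the plane-fit illumination measure, Laplacian-variance focus measure, dark-outlier scratch proxy and all thresholds are assumptions for illustration, not the CellProfiler pipeline itself.

    import numpy as np
    from scipy.ndimage import laplace

    def quality_check(img, grad_tol=0.15, focus_tol=5.0, scratch_tol=0.05):
        """Score an image on illumination gradient, focus and scratch fraction,
        then pass/fail it. Thresholds are illustrative assumptions."""
        img = img.astype(float)
        # Illumination gradient: fit a plane and compare its range to the mean level.
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(img.size)])
        coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
        plane = A @ coeffs
        illum_gradient = (plane.max() - plane.min()) / (img.mean() + 1e-9)
        # Focus: variance of the Laplacian (higher = sharper).
        focus = laplace(img).var()
        # Scratch fraction: crude proxy using strongly dark intensity outliers (an assumption).
        scratch_fraction = np.mean(img < img.mean() - 3 * img.std())
        passed = (illum_gradient < grad_tol) and (focus > focus_tol) and (scratch_fraction < scratch_tol)
        return passed, dict(illumination=illum_gradient, focus=focus, scratch=scratch_fraction)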
NASA Astrophysics Data System (ADS)
McClinton, J. T.; White, S. M.; Sinton, J. M.; Rubin, K. H.; Bowles, J. A.
2010-12-01
Differences in axial lava morphology along the Galapagos Spreading Center (GSC) can indicate variations in magma supply and emplacement dynamics due to the influence of the adjacent Galapagos hot spot. Unfortunately, the ability to discriminate fine-scale lava morphology has historically been limited to observations of the small coverage areas of towed camera surveys and submersible operations. This research presents a neuro-fuzzy approach to automated seafloor classification using spatially coincident, high-resolution bathymetry and backscatter data. The classification method implements a Sugeno-type fuzzy inference system trained by a multi-layered adaptive neural network and is capable of rapidly classifying seafloor morphology based on attributes of surface geometry and texture. The system has been applied to the 92°W segment of the western GSC in order to quantify coverage areas and distributions of pillow, lobate, and sheet lava morphology. An accuracy assessment has been performed on the classification results. The resulting classified maps provide a high-resolution view of GSC axial morphology and indicate the study area terrain is approximately 40% pillow flows, 40% lobate and sheet flows, and 10% fissured or faulted area, with about 10% of the study area unclassifiable. Fine-scale features such as eruptive fissures, tumuli, and individual pillowed lava flow fronts are also visible. Although this system has been applied to lava morphology, its design and implementation are applicable to other undersea mapping applications.
On the Automation of the MarkIII Data Analysis System.
NASA Astrophysics Data System (ADS)
Schwegmann, W.; Schuh, H.
1999-03-01
Faster, semi-automatic data analysis is an important contribution to accelerating the VLBI procedure. A concept for the automation of one of the most widely used VLBI software packages, the MarkIII Data Analysis System, was developed. The program PWXCB, which extracts weather and cable calibration data from the station log files, was then automated by supplementing the existing Fortran77 program code. The new program XLOG and its results will be presented. Most tasks in VLBI data analysis are very complex and their automation requires typical knowledge-based techniques. Thus, a knowledge-based system (KBS) for support and guidance of the analyst is being developed using the AI workbench BABYLON, which is based on methods of artificial intelligence (AI). The advantages of a KBS for the MarkIII Data Analysis System and the steps required to build a KBS will be demonstrated, together with examples of the current status of the project.
Effects of large deep-seated landslides on hillslope morphology, western Southern Alps, New Zealand
NASA Astrophysics Data System (ADS)
Korup, Oliver
2006-03-01
Morphometric analysis and air photo interpretation highlight geomorphic imprints of large landslides (i.e., affecting ≥1 km²) on hillslopes in the western Southern Alps (WSA), New Zealand. Large landslides attain kilometer-scale runout, affect >50% of total basin relief, and in 70% of cases are slope-clearing, and thus relief-limiting. Landslide terrain shows lower mean local relief, relief variability, slope angles, steepness, and concavity than the surrounding terrain. Measuring mean slope angle smooths out local landslide morphology, masking any relationship between large landslides and possible threshold hillslopes. Large failures also occurred on low-gradient slopes, indicating persistent low-frequency/high-magnitude hillslope adjustment independent of fluvial bedrock incision. At the basin and hillslope scale, slope-area plots partly constrain the effects of landslides on geomorphic process regimes. Landslide imprints gradually blend with relief characteristics at the orogen scale (10² km), while being sensitive to the length scales of slope failure, topography, sampling, and digital elevation model resolution. This limits means of automated detection, and underlines the importance of local morphologic contrasts for detecting large landslides in the WSA. Landslide controls on low-order drainage include divide lowering and shifting, formation of headwater basins and hanging valleys, and stream piracy. Volumes typically mobilized, yet still stored in numerous deposits despite high denudation rates, are >10⁷ m³, theoretically equal to 10² years of basin-wide debris production from historic shallow landslides; a lack of absolute ages precludes further estimates. Deposit size and mature forest cover indicate residence times of 10¹-10⁴ years. On these timescales, large landslides require further attention in landscape evolution models of tectonically active orogens.
Mei, Shuang; Wang, Yudan; Wen, Guojun; Hu, Yang
2018-05-03
Increasing deployment of optical fiber networks and the need for reliable high bandwidth make the task of inspecting optical fiber connector end faces a crucial process that must not be neglected. Traditional end face inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. More seriously, the inspection results cannot be quantified for subsequent analysis. Addressing the characteristics of typical defects in the inspection process for optical fiber end faces, we propose a novel method, "difference of min-max ranking filtering" (DO2MR), for detection of region-based defects, e.g., dirt, oil, contamination, pits, and chips, and a special model, a "linear enhancement inspector" (LEI), for the detection of scratches. The DO2MR is a morphology method that determines whether a pixel belongs to a defective region by comparing the differences in gray values of pixels in the neighborhood around that pixel. The LEI is also a morphology method that is designed to search for scratches at different orientations with a special linear detector. These two approaches can be easily integrated into optical inspection equipment for automatic quality verification. As far as we know, this is the first time that complete defect detection methods for optical fiber end faces are available in the literature. Experimental results demonstrate that the proposed DO2MR and LEI models yield good comprehensive performance with high precision and acceptable recall rates, and the image-level detection accuracies reach 96.0% and 89.3%, respectively.
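The region-defect idea can be illustrated with a few lines of NumPy/SciPy; this is a sketch inspired by the DO2MR description (local maximum minus local minimum, followed by a statistical threshold), with the window size and sigma multiplier as illustrative assumptions rather than the authors' parameters.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def do2mr_like(gray, win=5, gamma=3.0):
        """Flag region defects where the local max-min gray-level difference is an
        outlier; a sketch in the spirit of DO2MR, not the authors' exact method."""
        gray = gray.astype(float)
        residual = maximum_filter(gray, size=win) - minimum_filter(gray, size=win)
        thresh = residual.mean() + gamma * residual.std()   # sigma-based threshold (assumption)
        return residual > thresh                             # boolean defect mask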
Banks, Victoria A; Stanton, Neville A
2015-01-01
Automated assistance in driving emergencies aims to improve the safety of our roads by avoiding or mitigating the effects of accidents. However, the behavioural implications of such systems remain unknown. This paper introduces the driver decision-making in emergencies (DDMiE) framework to investigate how the level and type of automation may affect driver decision-making and subsequent responses to critical braking events, using network analysis to interrogate retrospective verbalisations. Four DDMiE models were constructed to represent different levels of automation within the driving task and their effects on driver decision-making. Findings suggest that whilst automation does not alter the decision-making pathway (e.g. the processes between hazard detection and response remain similar), it does appear to significantly weaken the links between information-processing nodes. This reflects an unintended yet emergent property within the task network that could mean that we may not be improving safety in the way we expect. This paper contrasts models of driver decision-making in emergencies at varying levels of automation using the Southampton University Driving Simulator. Network analysis of retrospective verbalisations indicates that increasing the level of automation in driving emergencies weakens the links between information-processing nodes essential for effective decision-making.
Population-scale three-dimensional reconstruction and quantitative profiling of microglia arbors
Rey-Villamizar, Nicolas; Merouane, Amine; Lu, Yanbin; Mukherjee, Amit; Trett, Kristen; Chong, Peter; Harris, Carolyn; Shain, William; Roysam, Badrinath
2015-01-01
Motivation: The arbor morphologies of brain microglia are important indicators of cell activation. This article fills the need for accurate, robust, adaptive and scalable methods for reconstructing 3-D microglial arbors and quantitatively mapping microglia activation states over extended brain tissue regions. Results: Thick rat brain sections (100–300 µm) were multiplex immunolabeled for IBA1 and Hoechst, and imaged by step-and-image confocal microscopy with automated 3-D image mosaicing, producing seamless images of extended brain regions (e.g. 5903 × 9874 × 229 voxels). An over-complete dictionary-based model was learned for the image-specific local structure of microglial processes. The microglial arbors were reconstructed seamlessly using an automated and scalable algorithm that exploits microglia-specific constraints. This method detected 80.1 and 92.8% more centered arbor points, and 53.5 and 55.5% fewer spurious points than existing vesselness and LoG-based methods, respectively, and the traces were 13.1 and 15.5% more accurate based on the DIADEM metric. The arbor morphologies were quantified using Scorcioni’s L-measure. Coifman’s harmonic co-clustering revealed four morphologically distinct classes that concord with known microglia activation patterns. This enabled us to map spatial distributions of microglial activation and cell abundances. Availability and implementation: Experimental protocols, sample datasets, scalable open-source multi-threaded software implementation (C++, MATLAB) in the electronic supplement, and website (www.farsight-toolkit.org). http://www.farsight-toolkit.org/wiki/Population-scale_Three-dimensional_Reconstruction_and_Quanti-tative_Profiling_of_Microglia_Arbors Contact: broysam@central.uh.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25701570
NASA Astrophysics Data System (ADS)
Donchyts, G.; Jagers, B.; Van De Giesen, N.; Baart, F.; van Dam, A.
2015-12-01
Free data sets on river bathymetry at the global scale are not yet available. While one of the most widely used free elevation datasets, SRTM, provides data on the location and elevation of rivers, its quality is usually very limited. This is mainly because the water mask was derived from older satellite imagery, such as Landsat 5, and because radar instruments perform poorly near water, especially in the presence of riparian vegetation. Additional corrections are required before it can be used for applications such as higher-resolution surface water flow simulations. On the other hand, medium-resolution satellite imagery from the Landsat mission can be used to estimate water mask changes during the last 40 years. A water mask can be derived from Landsat imagery on a per-image basis, in some cases resulting in up to one thousand water masks. For rivers where significant water mask changes can be observed, this information can be used to improve the quality of existing digital elevation models in the range between minimum and maximum observed water levels. Furthermore, we can use this information to further estimate river bathymetry using morphological models. We will evaluate how Landsat imagery can be used to estimate river bathymetry and will point to cases of significant inconsistencies between SRTM and Landsat-based water masks. We will also explore other challenges on the way to automated estimation of river bathymetry using fusion of numerical morphological models and remote sensing data. These include automatic generation of the model mesh, estimation of river morphodynamic properties and issues related to the spectral methods used to analyse optical satellite imagery.
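As one plausible building block for such a workflow, the sketch below derives a per-image NDWI water mask from co-registered Landsat green and NIR band stacks and aggregates them into a water-occurrence frequency; the fixed NDWI threshold and the array layout are simplifying assumptions, not the authors' procedure.

    import numpy as np

    def water_occurrence(green_stack, nir_stack, ndwi_threshold=0.0):
        """Aggregate per-acquisition NDWI water masks into a 0..1 occurrence map.
        Stacks are assumed to be co-registered arrays of shape (time, rows, cols)."""
        green = np.asarray(green_stack, dtype=float)
        nir = np.asarray(nir_stack, dtype=float)
        ndwi = (green - nir) / (green + nir + 1e-9)
        masks = ndwi > ndwi_threshold            # one boolean water mask per acquisition
        return masks.mean(axis=0)                # fraction of scenes classified as water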
NASA Astrophysics Data System (ADS)
Jenerowicz, Małgorzata; Kemper, Thomas
2016-10-01
Every year thousands of people are displaced by conflicts or natural disasters and often gather in large camps. Knowing how many people have gathered is crucial for an efficient relief operation. However, it is often difficult to collect exact information on the total size of the population. This paper presents an improved morphological methodology for estimating dwelling structures located in several Internally Displaced Persons (IDP) camps, based on Very High Resolution (VHR) multispectral satellite imagery with pixel sizes of 1 meter or less, including GeoEye-1, WorldView-2, QuickBird-2, Ikonos-2, Pléiades-A and Pléiades-B. The main focus of this paper is the enhancement of the approach through the selection of the feature extraction algorithm and the improvement and automation of pre-processing and results verification. For the extraction of informal and temporary dwellings, high data quality has to be ensured. The pre-processing has therefore been extended to include input data hierarchy level assignment as well as data fusion method selection and evaluation. The feature extraction algorithm follows the procedure presented in Jenerowicz, M., Kemper, T., 2011. Optical data are analysed in a cyclic approach comprising image segmentation and geometrical, textural and spectral class modeling aimed at camp area identification. The successive steps of morphological processing have been combined into a single stand-alone application for automatic dwelling detection and enumeration. Actively implemented, these approaches can provide reliable and consistent results, independent of the imaging satellite type and the location of the study sites, providing decision support in emergency response for the humanitarian community such as the United Nations, the European Union and non-governmental relief organizations.
Lobo, Daniel; Levin, Michael
2015-01-01
Transformative applications in biomedicine require the discovery of complex regulatory networks that explain the development and regeneration of anatomical structures, and reveal what external signals will trigger desired changes of large-scale pattern. Despite recent advances in bioinformatics, extracting mechanistic pathway models from experimental morphological data is a key open challenge that has resisted automation. The fundamental difficulty of manually predicting emergent behavior of even simple networks has limited the models invented by human scientists to pathway diagrams that show necessary subunit interactions but do not reveal the dynamics that are sufficient for complex, self-regulating pattern to emerge. To finally bridge the gap between high-resolution genetic data and the ability to understand and control patterning, it is critical to develop computational tools to efficiently extract regulatory pathways from the resultant experimental shape phenotypes. For example, planarian regeneration has been studied for over a century, but despite increasing insight into the pathways that control its stem cells, no constructive, mechanistic model has yet been found by human scientists that explains more than one or two key features of its remarkable ability to regenerate its correct anatomical pattern after drastic perturbations. We present a method to infer the molecular products, topology, and spatial and temporal non-linear dynamics of regulatory networks recapitulating in silico the rich dataset of morphological phenotypes resulting from genetic, surgical, and pharmacological experiments. We demonstrated our approach by inferring complete regulatory networks explaining the outcomes of the main functional regeneration experiments in the planarian literature. By analyzing all the datasets together, our system inferred the first comprehensive systems-biology dynamical model explaining patterning in planarian regeneration. This method provides an automated, highly generalizable framework for identifying the underlying control mechanisms responsible for the dynamic regulation of growth and form. PMID:26042810
Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study
NASA Astrophysics Data System (ADS)
Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.
2017-12-01
Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view: 183 light microscope images containing 255 diatom particles were examined; 216 diatom valves and valve fragments were processed, with 170 properly analyzed and focused upon by the software. Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting an approximately five-fold efficiency advantage in particle analysis time for the software. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has the potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
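A minimal stand-in for the particle segmentation step, assuming scikit-image and diatoms darker than the background, is sketched below; VisualSpreadsheet's own thresholding is not public, so Otsu thresholding and the minimum-area filter here are assumptions.

    from skimage import filters, measure, morphology

    def count_particles(gray, min_area=50):
        """Threshold a light-microscope field, label connected components and
        return the particle count plus per-particle region properties."""
        thresh = filters.threshold_otsu(gray)
        mask = gray < thresh                                  # diatoms darker than background (assumption)
        mask = morphology.remove_small_objects(mask, min_size=min_area)
        labels = measure.label(mask)
        regions = measure.regionprops(labels)
        return len(regions), regions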
Adaptive automation of human-machine system information-processing functions.
Kaber, David B; Wright, Melanie C; Prinzel, Lawrence J; Clamann, Michael P
2005-01-01
The goal of this research was to describe the ability of human operators to interact with adaptive automation (AA) applied to various stages of complex systems information processing, defined in a model of human-automation interaction. Forty participants operated a simulation of an air traffic control task. Automated assistance was adaptively applied to information acquisition, information analysis, decision making, and action implementation aspects of the task based on operator workload states, which were measured using a secondary task. The differential effects of the forms of automation were determined and compared with a manual control condition. Results of two 20-min trials of AA or manual control revealed a significant effect of the type of automation on performance, particularly during manual control periods as part of the adaptive conditions. Humans appear to better adapt to AA applied to sensory and psychomotor information-processing functions (action implementation) than to AA applied to cognitive functions (information analysis and decision making), and AA is superior to completely manual control. Potential applications of this research include the design of automation to support air traffic controller information processing.
Humans: still vital after all these years of automation.
Parasuraman, Raja; Wickens, Christopher D
2008-06-01
The authors discuss empirical studies of human-automation interaction and their implications for automation design. Automation is prevalent in safety-critical systems and increasingly in everyday life. Many studies of human performance in automated systems have been conducted over the past 30 years. Developments in three areas are examined: levels and stages of automation, reliance on and compliance with automation, and adaptive automation. Automation applied to information analysis or decision-making functions leads to differential system performance benefits and costs that must be considered in choosing appropriate levels and stages of automation. Human user dependence on automated alerts and advisories reflects two components of operator trust, reliance and compliance, which are in turn determined by the threshold designers use to balance automation misses and false alarms. Finally, adaptive automation can provide additional benefits in balancing workload and maintaining the user's situation awareness, although more research is required to identify when adaptation should be user controlled or system driven. The past three decades of empirical research on humans and automation has provided a strong science base that can be used to guide the design of automated systems. This research can be applied to most current and future automated systems.
An Automated Energy Detection Algorithm Based on Consecutive Mean Excision
2018-01-01
This report presents an automated energy detection algorithm, based on consecutive mean excision, for determining detection thresholds for signals present in the RF spectrum. Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical… The report covers statistical measures including the median, the rank order filter (ROF) and the crest factor (CF), a statistical summary, the algorithm itself, and conclusions; its references include a report on an energy detection algorithm based on morphological filter processing with a semi-disk structure (Adelphi, MD: Army Research Laboratory (US); 2018 Jan).
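Consecutive mean excision itself is a well-known thresholding scheme, sketched below in NumPy; the excision factor and iteration cap are illustrative assumptions and are not taken from this report.

    import numpy as np

    def consecutive_mean_excision(power, factor=1.5, max_iter=50):
        """Estimate a detection threshold by repeatedly excising samples that exceed
        a multiple of the mean of the remaining (noise-only) samples."""
        noise = np.sort(np.asarray(power, dtype=float))
        for _ in range(max_iter):
            thresh = factor * noise.mean()
            kept = noise[noise <= thresh]
            if kept.size == noise.size:      # converged: nothing more to excise
                break
            noise = kept
        return thresh

    # Samples above the returned threshold would be flagged as occupied spectrum.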
Zonneveld, Rens; Molema, Grietje; Plötz, Frans B
2016-01-01
Alterations in neutrophil morphology (size, shape, and composition), mechanics (deformability), and motility (chemotaxis and migration) have been observed during sepsis. We combine summarizing features of neutrophil morphology, mechanics, and motility that change during sepsis with an investigation into their clinical utility as markers for sepsis through measurement with novel technologies. We performed an initial literature search in MEDLINE using search terms "neutrophil," "morphology," "mechanics," "dynamics," "motility," "mobility," "spreading," "polarization," "migration," and "chemotaxis." We then combined the results with "sepsis" and "septic shock." We scanned bibliographies of included articles to identify additional articles. Final selection was done after the authors reviewed recovered articles. We included articles based on their relevance for our review topic. When compared with resting conditions, sepsis causes an increase in circulating numbers of larger, more rigid neutrophils that show diminished granularity, migration, and chemotaxis. Combined measurement of these variables could provide a more complete view on neutrophil phenotype manifestation. For that purpose, sophisticated automated hematology analyzers, microscopy, and bedside microfluidic devices provide clinically feasible, high-throughput, and cost-limiting means. We propose that integration of features of neutrophil morphology, mechanics, and motility with these new analytical methods can be useful as markers for diagnosis, prognosis, and monitoring of sepsis and may even contribute to basic understanding of its pathophysiology.
Development Status: Automation Advanced Development Space Station Freedom Electric Power System
NASA Technical Reports Server (NTRS)
Dolce, James L.; Kish, James A.; Mellor, Pamela A.
1990-01-01
Electric power system automation for Space Station Freedom is intended to operate in a loop. Data from the power system is used for diagnosis and security analysis to generate Operations Management System (OMS) requests, which are sent to an arbiter, which sends a plan to a commander generator connected to the electric power system. This viewgraph presentation profiles automation software for diagnosis, scheduling, and constraint interfaces, and simulation to support automation development. The automation development process is diagrammed, and the process of creating Ada and ART versions of the automation software is described.
Glacier Surface Lowering and Stagnation in the Manaslu Region of Nepal
NASA Astrophysics Data System (ADS)
Robson, B. A.; Nuth, C.; Nielsen, P. R.; Hendrickx, M.; Dahl, S. O.
2015-12-01
Frequent and up-to-date glacier outlines are needed for many glaciological applications: not only glacier area change analysis, but also masks for volume or velocity analyses, the estimation of water resources, and model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates the use of contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping of debris-covered ice in the Manaslu Himalaya, Nepal. SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used to create image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and the NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically from a histogram of each image subset. This means that, in theory, any Landsat scene can be input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
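A compact sketch of the automated clean-ice step is given below, assuming calibrated green, NIR and SWIR bands as NumPy arrays; Otsu's method stands in for the paper's histogram-based threshold selection, which may differ in detail.

    import numpy as np
    from skimage.filters import threshold_otsu

    def clean_ice_mask(green, nir, swir):
        """Map clean ice from Landsat-style bands using NDSI and the NIR/SWIR ratio,
        with per-scene thresholds taken automatically from the ratio histograms."""
        ndsi = (green - swir) / (green + swir + 1e-9)
        ratio = nir / (swir + 1e-9)
        ndsi_t = threshold_otsu(ndsi)       # scene-specific thresholds (Otsu as an assumption)
        ratio_t = threshold_otsu(ratio)
        return (ndsi > ndsi_t) & (ratio > ratio_t)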
Deal, Samantha; Wambaugh, John; Judson, Richard; Mosher, Shad; Radio, Nick; Houck, Keith; Padilla, Stephanie
2016-09-01
One of the rate-limiting procedures in a developmental zebrafish screen is the morphological assessment of each larva. Most researchers opt for a time-consuming, structured visual assessment by trained human observer(s). The present studies were designed to develop a more objective, accurate and rapid method for screening zebrafish for dysmorphology. Instead of the very detailed human assessment, we have developed the computational malformation index, which combines the use of high-content imaging with a very brief human visual assessment. Each larva was quickly assessed by a human observer (basic visual assessment), killed, fixed and assessed for dysmorphology with the Zebratox V4 BioApplication using the Cellomics® ArrayScan® V(TI) high-content image analysis platform. The basic visual assessment adds in-life parameters, and the high-content analysis assesses each individual larva for various features (total area, width, spine length, head-tail length, length-width ratio, perimeter-area ratio). In developing the computational malformation index, a training set of hundreds of embryos treated with hundreds of chemicals was visually assessed using the basic or detailed method. In the second phase, we assessed both the stability of these high-content measurements and their performance using a test set of zebrafish treated with a dose range of two reference chemicals (trans-retinoic acid or cadmium). We found the measures were stable for at least 1 week, and comparison of these automated measures to detailed visual inspection of the larvae showed excellent congruence. Our computational malformation index provides an objective manner for rapid phenotypic brightfield assessment of individual larvae in a developmental zebrafish assay. Copyright © 2016 John Wiley & Sons, Ltd.
High-Throughput Platform for Synthesis of Melamine-Formaldehyde Microcapsules.
Çakir, Seda; Bauters, Erwin; Rivero, Guadalupe; Parasote, Tom; Paul, Johan; Du Prez, Filip E
2017-07-10
The synthesis of microcapsules via in situ polymerization is a labor-intensive and time-consuming process, where many composition and process factors affect the microcapsule formation and its morphology. Herein, we report a novel combinatorial technique for the preparation of melamine-formaldehyde microcapsules, using a custom-made and automated high-throughput platform (HTP). After performing validation experiments for ensuring the accuracy and reproducibility of the novel platform, a design of experiment study was performed. The influence of different encapsulation parameters was investigated, such as the effect of the surfactant, surfactant type, surfactant concentration and core/shell ratio. As a result, this HTP-platform is suitable to be used for the synthesis of different types of microcapsules in an automated and controlled way, allowing the screening of different reaction parameters in a shorter time compared to the manual synthetic techniques.
Effects of imperfect automation on decision making in a simulated command and control task.
Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja
2007-02-01
Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental workload, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.
Toward designing for trust in database automation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duez, P. P.; Jamieson, G. A.
Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process- and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process. The connection between an AH for an automated tool and a list of information elements at the three levels of attributional abstraction is then direct, providing a method for satisfying information requirements for appropriate trust in automation. In this paper, we will present our method for developing specific information requirements for an automated tool, based on a formal analysis of that tool and the models presented by Lee and See. We will show an example of the application of the AH to automation, in the domain of relational database automation, and the resulting set of specific information elements for appropriate trust in the automated tool. Finally, we will comment on the applicability of this approach to the domain of nuclear plant instrumentation. (authors)
Automated processing of endoscopic surgical instruments.
Roth, K; Sieber, J P; Schrimm, H; Heeg, P; Buess, G
1994-10-01
This paper deals with the requirements for automated processing of endoscopic surgical instruments. After a brief analysis of the current problems, solutions are discussed. Test procedures have been developed to validate the automated processing so that the cleaning results are guaranteed and reproducible. A device for testing and cleaning, called TC-MIC, was also designed together with Netzsch Newamatic and PCI to automate processing and reduce manual work.
NASA Technical Reports Server (NTRS)
Bates, William V., Jr.
1989-01-01
The automation and robotics requirements for the Space Station Initial Operational Concept (IOC) are discussed. The amount of tasks to be performed by an eight-person crew, the need for an automated or directed fault analysis capability, and ground support requirements are considered. Issues important in determining the role of automation for the IOC are listed.
Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco
2016-02-09
Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher-throughput devices for small-scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliably measuring large sets of samples within short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye-based assay was established using a liquid handling robot to provide reproducible high-throughput quantification of lipids with minimized hands-on time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96-well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids, improving precision from ±8% to ±2% on average. Implementation into an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on time to a third compared to manual operation. Moreover, it was shown that automation enhances accuracy and precision compared to manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes of the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure for biomass concentration so that errors from morphological changes can be excluded. The newly established assay proved to be applicable for absolute quantification of algal lipids, avoiding limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability as well as experimental throughput, simultaneously reducing the hands-on time needed to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher-throughput phototrophic cultivation and thereby contributes to boosting the time efficiency for setting up algae lipid production processes.
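The gravimetric calibration and biovolume normalisation steps reduce to simple arithmetic; the sketch below, assuming paired fluorescence readings and extractive lipid measurements, fits a linear calibration with NumPy and applies it per unit biovolume. Function names and units are hypothetical.

    import numpy as np

    def calibrate_lipid_assay(fluorescence, lipid_mg_per_l):
        """Fit a linear gravimetric calibration (fluorescence -> lipid concentration)
        from paired measurements against an extractive reference method."""
        slope, intercept = np.polyfit(fluorescence, lipid_mg_per_l, 1)
        return lambda f: slope * np.asarray(f) + intercept

    def lipid_per_biovolume(fluorescence, biovolume_ul_per_ml, predict):
        """Normalise predicted lipid concentration by biovolume rather than OD or
        cell number, following the rationale given in the abstract."""
        return predict(fluorescence) / np.asarray(biovolume_ul_per_ml)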
Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Dugardeyn, Jasper; Van Der Straeten, Dominique; Xu, Guowang; Sandra, Pat
2011-05-27
In this paper, automated sample preparation, retention time locked gas chromatography-mass spectrometry (GC-MS) and data analysis methods for the metabolomics study were evaluated. A miniaturized and automated derivatisation method using sequential oximation and silylation was applied to a polar extract of 4 types (2 types×2 ages) of Arabidopsis thaliana, a popular model organism often used in plant sciences and genetics. Automation of the derivatisation process offers excellent repeatability, and the time between sample preparation and analysis was short and constant, reducing artifact formation. Retention time locked (RTL) gas chromatography-mass spectrometry was used, resulting in reproducible retention times and GC-MS profiles. Two approaches were used for data analysis. XCMS followed by principal component analysis (approach 1) and AMDIS deconvolution combined with a commercially available program (Mass Profiler Professional) followed by principal component analysis (approach 2) were compared. Several features that were up- or down-regulated in the different types were detected. Copyright © 2011 Elsevier B.V. All rights reserved.
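The final data-analysis step common to both approaches, PCA on an autoscaled peak table, can be sketched with scikit-learn as below; this mirrors only the dimensionality-reduction step, not the XCMS or AMDIS peak picking itself.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def pca_scores(feature_matrix, n_components=2):
        """Project a samples-by-features GC-MS peak table onto its first principal
        components after autoscaling each feature."""
        X = StandardScaler().fit_transform(np.asarray(feature_matrix, dtype=float))
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(X)
        return scores, pca.explained_variance_ratio_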
DOT National Transportation Integrated Search
1995-03-01
This report documents work performed by the University of California at Davis in collaboration with the California Department of Transportation (Caltrans) in relation to a Precursor System Analysis dealing with a study of automated construction, ...
Ko, Dae-Hyun; Ji, Misuk; Kim, Sollip; Cho, Eun-Jung; Lee, Woochang; Yun, Yeo-Min; Chun, Sail; Min, Won-Ki
2016-01-01
The results of urine sediment analysis have been reported semiquantitatively. However, as recent guidelines recommend quantitative reporting of urine sediment, and with the development of automated urine sediment analyzers, there is an increasing need for quantitative analysis of urine sediment. Here, we developed a protocol for urine sediment analysis and quantified the results. Based on questionnaires, various reports, guidelines, and experimental results, we developed a protocol for urine sediment analysis. The results of this new protocol were compared with those obtained with a standardized chamber and an automated sediment analyzer. Reference intervals were also estimated using the new protocol. We developed a protocol with centrifugation at 400 g for 5 min, with an average concentration factor of 30. The correlations between the quantitative results of urine sediment analysis, the standardized chamber, and the automated sediment analyzer were generally good. The conversion factor derived from the new protocol showed a better fit with the results of manual counting than the default conversion factor in the automated sediment analyzer. We developed a protocol for manual urine sediment analysis to quantitatively report the results. This protocol may provide a means for standardization of urine sediment analysis.
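The quantitative reporting hinges on a simple conversion using the concentration factor; a sketch, with a hypothetical high-power-field volume, is given below.

    def cells_per_microliter(mean_count_per_hpf, hpf_volume_ul, concentration_factor=30):
        """Convert a mean sediment count per high-power field to cells/uL of
        uncentrifuged urine, using the protocol's 30x concentration factor;
        the field volume is instrument-specific and supplied by the user."""
        return mean_count_per_hpf / hpf_volume_ul / concentration_factor

    # Example (hypothetical numbers): 12 RBC per HPF with 0.031 uL viewed per field
    # -> 12 / 0.031 / 30 ≈ 12.9 RBC per uL of native urine.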
Automated clinical system for chromosome analysis
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Friedan, H. J.; Johnson, E. T.; Rennie, P. A.; Wall, R. J. (Inventor)
1978-01-01
An automatic chromosome analysis system is provided wherein a suitably prepared slide with chromosome spreads thereon is placed on the stage of an automated microscope. The automated microscope stage is computer operated to move the slide to enable detection of chromosome spreads on the slide. The X and Y location of each chromosome spread that is detected is stored. The computer measures the chromosomes in a spread, classifies them by group or by type and also prepares a digital karyotype image. The computer system can also prepare a patient report summarizing the result of the analysis and listing suspected abnormalities.
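The scan-and-record step can be pictured with a brief sketch; the stage, camera and detector objects below are hypothetical stand-ins for the patented hardware.

    # Conceptual sketch of stage scanning, spread detection and storing X/Y positions.
    def scan_slide(stage, camera, detect_spread, x_steps, y_steps):
        spread_locations = []
        for ix in range(x_steps):
            for iy in range(y_steps):
                stage.move_to(ix, iy)              # move the slide under the objective
                image = camera.grab()              # acquire one field of view
                if detect_spread(image):           # e.g. a blob-count or density heuristic
                    spread_locations.append((ix, iy))
        return spread_locations                    # revisited later for measurement and karyotyping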
Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly
2013-01-01
High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions, due to background and sample variations as well as a low signal-to-noise ratio. The lack of automated image analysis tools that generalize across varying image acquisition conditions represents one of the main challenges in biomedical image analysis. Specifically, segmentation of axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it achieves 90% accuracy with similarly high specificity and sensitivity. Moreover, the algorithm maintains high performance under a wide range of image acquisition conditions, indicating that it is largely condition-invariant. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. This textural analysis-based machine-learning approach thus offers a high-performance, condition-invariant tool for automated neurite segmentation. PMID:23261652
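A minimal sketch of texture-based pixel classification in the same spirit is shown below, using local mean and variance as stand-in texture descriptors and a random forest instead of the authors' specific algorithm; all data are synthetic.

    # Simplified illustration of texture-feature pixel classification (not the published algorithm).
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.ensemble import RandomForestClassifier

    def texture_features(image, size=9):
        mean = uniform_filter(image.astype(float), size)
        sq_mean = uniform_filter(image.astype(float) ** 2, size)
        variance = np.clip(sq_mean - mean ** 2, 0, None)
        return np.stack([mean.ravel(), variance.ravel()], axis=1)

    # Hypothetical training data: a brightfield image and a manually labeled neurite mask
    image = np.random.rand(128, 128)
    labels = (np.random.rand(128, 128) > 0.9).astype(int)

    clf = RandomForestClassifier(n_estimators=50).fit(texture_features(image), labels.ravel())
    segmentation = clf.predict(texture_features(image)).reshape(image.shape)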
NASA Astrophysics Data System (ADS)
Steel, B. A.; Kucera, M.; Darling, K. F.
2003-04-01
The origination of new species is contingent on the build-up of mutations within isolated sub-populations, and results from interruptions to gene flow caused by tectonic, ecological or climatic barriers. Whilst the advent of new species is traditionally recognised in the fossil record by the appearance of clearly divergent phenotypes, recent studies have demonstrated that "Cryptic" speciation (cladogenesis without obvious concomitant morphological change) is common amongst planktic protists. Equally, they have raised the prospect that high-resolution microscopy can identify subtle but consistently expressed morphological features that may eventually allow the evolution and dispersal of these "Cryptic" types to be traced in the fossil record. We have conducted a pilot study using the planktic foraminifer Globigerinella siphonifera, previously shown to be a complex of several genotypes, two of which can be discriminated on the basis of test pore characteristics. Pore size, shape and density have been quantified using scanning electron microscopy combined with semi-automated image analysis of several hundred specimens from two ODP sites (926A (Ceara Rise) and 846 (Eastern Equatorial Pacific)) and a diaspora of Holocene samples. The fossil dataset does not show the same sharp bimodality seen in modern material, and pore morphology may be affected by abiotic factors such as dissolution. Nevertheless, we have tentatively identified a pore size expansion event at ~3.5 Ma, possibly coincident with the uplift of the Panamanian Isthmus. This date differs considerably from previous estimates based on "molecular clock" analysis of ribosomal RNA, but confirms that such techniques at least return divergence times that are within the scope of palaeontological credibility. Whether reproductive isolation arose directly from tectonic isolation or sympatrically from associated changes to the biological, chemical and physical structure of the oceans is unclear. However, our data imply that "Cryptic" foraminiferal genotypes have been a feature of the oceanic biosphere for millions of years, and that their evolution and appearance are spurred by the same factors affecting traditional species.
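The semi-automated pore measurements could be approximated as below, assuming a binarized SEM image; the circularity definition and the density measure are generic image-analysis choices, not the study's exact protocol.

    # Rough sketch of pore measurement on a binarized SEM image (illustrative only).
    import numpy as np
    from skimage import measure

    def pore_morphology(binary_pores, pixel_size_um):
        labeled = measure.label(binary_pores)
        props = measure.regionprops(labeled)
        areas = np.array([p.area for p in props]) * pixel_size_um ** 2
        # Circularity = 4*pi*A / P^2 (1.0 for a perfect circle)
        circ = np.array([4 * np.pi * p.area / p.perimeter ** 2 for p in props if p.perimeter > 0])
        density = len(props) / (binary_pores.size * pixel_size_um ** 2)  # pores per um^2
        return areas, circ, density

    demo = np.zeros((32, 32), dtype=bool); demo[5:10, 5:10] = True
    print(pore_morphology(demo, pixel_size_um=2.0))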
Celik, Turgay; Lee, Hwee Kuan; Petznick, Andrea; Tong, Louis
2013-01-01
Background: Infrared (IR) meibography is an imaging technique to capture the Meibomian glands in the eyelids. These ocular surface structures are responsible for producing the lipid layer of the tear film, which helps to reduce tear evaporation. In a normal healthy eye, the glands have similar morphological features in terms of spatial width, in-plane elongation, and length. In contrast, eyes with Meibomian gland dysfunction show visible structural irregularities that help in the diagnosis and prognosis of the disease. However, there is currently no universally accepted algorithm for detecting these image features in a clinically useful way. We aim to develop a method of automated gland segmentation which allows images to be classified. Methods: A set of 131 meibography images was acquired from patients at the Singapore National Eye Center. We used a method of automated gland segmentation based on Gabor wavelets. Features of the imaged glands, including orientation, width, length and curvature, were extracted and the IR images enhanced. The images were classified as ‘healthy’, ‘intermediate’ or ‘unhealthy’ using a support vector machine (SVM) classifier. Half the images were used for training the SVM and the other half for validation. Independently of this procedure, the meibography images were classified by an expert clinician into the same three grades. Results: The algorithm correctly detected 94% and 98% of mid-line pixels of gland and inter-gland regions, respectively, on healthy images. On intermediate images, correct detection rates of 92% and 97% of mid-line pixels of gland and inter-gland regions were achieved, respectively. The true positive rate for detecting healthy images was 86%, and for intermediate images, 74%; the corresponding false positive rates were 15% and 31%, respectively. Using the SVM, the proposed method achieved 88% accuracy in classifying images into the three classes. Classification of images into healthy and unhealthy classes achieved 100% accuracy, but 7/38 intermediate images were incorrectly classified. Conclusions: This technique of image analysis in meibography can help clinicians to interpret the degree of gland destruction in patients with dry eye and Meibomian gland dysfunction.
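A compact sketch of the Gabor-plus-SVM idea follows; the filter frequency, summary features and synthetic training data are illustrative assumptions, not the published parameters.

    # Simplified sketch combining Gabor responses with an SVM grader (illustrative only).
    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def gland_features(ir_image):
        feats = []
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(ir_image, frequency=0.1, theta=theta)
            feats.extend([real.mean(), real.std()])   # coarse summary of gland-like texture
        return feats

    # Synthetic stand-ins for training images and clinician grades (0 = healthy, 1 = intermediate, 2 = unhealthy)
    rng = np.random.default_rng(1)
    training_images = [rng.random((64, 64)) for _ in range(6)]
    training_grades = [0, 0, 1, 1, 2, 2]
    new_image = rng.random((64, 64))

    X_train = np.array([gland_features(img) for img in training_images])
    clf = SVC(kernel="rbf").fit(X_train, training_grades)
    print(clf.predict([gland_features(new_image)]))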
Kaser, Daniel J; Farland, Leslie V; Missmer, Stacey A; Racowsky, Catherine
2017-08-01
How does automated time-lapse annotation (Eeva™) compare to manual annotation of the same video images performed by embryologists certified in measuring durations of the 2-cell (P2; time to the 3-cell minus time to the 2-cell, or t3-t2) and 3-cell (P3; time to the 4-cell minus time to the 3-cell, or t4-t3) stages? Manual annotation was superior to the automated annotation provided by Eeva™ version 2.2, because manual annotation assigned a rating to a higher proportion of embryos and yielded a greater sensitivity for blastocyst prediction than automated annotation. While use of the Eeva™ test has been shown to improve an embryologist's ability to predict blastocyst formation compared to Day 3 morphology alone, the accuracy of the automated image analysis employed by the Eeva™ system has never been compared to manual annotation of the same time-lapse markers by a trained embryologist. We conducted a prospective cohort study of embryos (n = 1477) cultured in the Eeva™ system (n = 8 microscopes) at our institution from August 2014 to February 2016. Embryos were assigned a blastocyst prediction rating of High (H), Medium (M), Low (L), or Not Rated (NR) by Eeva™ version 2.2 according to P2 and P3. An embryologist from a team of 10 then manually annotated each embryo, and if the automated and manual ratings differed, a second embryologist independently annotated the embryo. If both embryologists disagreed with the automated Eeva™ rating, then the rating was classified as discordant. If the second embryologist agreed with the automated Eeva™ score, the rating was not considered discordant. Spearman's correlation (ρ), weighted kappa statistics and the intra-class correlation (ICC) coefficients with 95% confidence intervals (CI) between Eeva™ and manual annotation were calculated, as were the proportions of discordant embryos, and the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of each method for blastocyst prediction. The distribution of H, M and L ratings differed by annotation method (P < 0.0001). The correlation between Eeva™ and manual annotation was higher for P2 (ρ = 0.75; ICC = 0.82; 95% CI 0.82-0.83) than for P3 (ρ = 0.39; ICC = 0.20; 95% CI 0.16-0.26). Eeva™ was more likely than an embryologist to rate an embryo as NR (11.1% vs. 3.0%, P < 0.0001). Discordance occurred in 30.0% (443/1477) of all embryos and was not associated with factors such as Day 3 cell number, fragmentation, symmetry or presence of abnormal cleavage. Rather, discordance was associated with direct cleavage (P2 ≤ 5 h) and short P3 (≤0.25 h), and also factors intrinsic to the Eeva™ system, such as the automated rating (proportion of discordant embryos by rating: H: 9.3%; M: 18.1%; L: 41.3%; NR: 31.4%; P < 0.0001), microwell location (peripheral: 31.2%; central: 23.8%; P = 0.02) and Eeva™ microscope (n = 8; range 22.9-42.6%; P < 0.0001). Manual annotation upgraded 82.6% of all discordant embryos from a lower to a higher rating, and improved the sensitivity for predicting blastocyst formation. One team of embryologists performed the manual annotations; however, the study staff was trained and certified by the company sponsor. Only two time-lapse markers were evaluated, so the results are not generalizable to other parameters; likewise, the results are not generalizable to future versions of Eeva™ or other automated image analysis systems.
Based on the proportion of discordance and the improved performance of manual annotation, clinics using the Eeva™ system should consider manual annotation of P2 and P3 to confirm the automated ratings generated by Eeva™. These data were acquired in a study funded by Progyny, Inc. There are no competing interests. N/A. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
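The two time-lapse markers and the discordance rule can be expressed in a few lines; the example timings are invented, and Eeva's internal rating cutoffs are not reproduced here.

    # Sketch of the two time-lapse markers and the discordance rule described above.
    def p2_p3(t2, t3, t4):
        return t3 - t2, t4 - t3  # durations of the 2-cell and 3-cell stages, in hours

    def is_discordant(auto_rating, manual_rating_1, manual_rating_2):
        # Discordant only if both embryologists disagree with the automated rating
        if manual_rating_1 == auto_rating:
            return False
        return manual_rating_2 != auto_rating

    print(p2_p3(t2=24.1, t3=36.0, t4=36.9))  # e.g. P2 = 11.9 h, P3 = 0.9 h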
NASA Technical Reports Server (NTRS)
Kirlik, Alex
1993-01-01
Task-offload aids (e.g., an autopilot or an 'intelligent' assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation for system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation provides effective operator support in a multitask environment.
A survey of MRI-based medical image analysis for brain tumor studies
NASA Astrophysics Data System (ADS)
Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio
2013-07-01
MRI-based medical image analysis for brain tumor studies is gaining attention due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods to the analysis of brain tumor images date back almost two decades, current methods are becoming more mature and are approaching routine clinical application. This review aims to provide a comprehensive overview, first giving a brief introduction to brain tumors and to imaging of brain tumors. We then review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images, with a focus on gliomas. The objective of segmentation is to outline the tumor, including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied to standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, with special attention to recent developments in radiological tumor assessment guidelines.
Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.
Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick
2017-11-03
In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers, because there is evidence that it correlates better with strong antibody responses than statistical analysis of intraspot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence are performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.
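One simple heuristic for scoring extra-well fluorescence, in the spirit of (but not identical to) the methods tested, compares the intensity of an annulus just outside the spot with the local background:

    # Generic ring-scoring heuristic (not the classifier developed in the study).
    import numpy as np

    def ring_score(image, cx, cy, spot_r, ring_w):
        yy, xx = np.indices(image.shape)
        r = np.hypot(xx - cx, yy - cy)
        annulus = (r >= spot_r) & (r < spot_r + ring_w)            # just outside the spot
        background = (r >= spot_r + 3 * ring_w) & (r < spot_r + 4 * ring_w)
        return image[annulus].mean() / (image[background].mean() + 1e-9)

    demo = np.random.rand(64, 64)
    print(ring_score(demo, cx=32, cy=32, spot_r=6, ring_w=3))
    # A score well above 1 suggests a ring; grades 1-5 could be mapped from score bins.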
Automated quantitative cytological analysis using portable microfluidic microscopy.
Jagannadh, Veerendra Kalyan; Murthy, Rashmi Sreeramachandra; Srinivasan, Rajesh; Gorthi, Sai Siva
2016-06-01
In this article, a portable microfluidic microscopy based approach for automated cytological investigations is presented. Inexpensive optical and electronic components have been used to construct a simple microfluidic microscopy system. In contrast to the conventional slide-based methods, the presented method employs microfluidics to enable automated sample handling and image acquisition. The approach involves the use of simple in-suspension staining and automated image acquisition to enable quantitative cytological analysis of samples. The applicability of the presented approach to research in cellular biology is shown by performing an automated cell viability assessment on a given population of yeast cells. Further, the relevance of the presented approach to clinical diagnosis and prognosis has been demonstrated by performing detection and differential assessment of malaria infection in a given sample. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
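The automated viability read-out amounts to thresholding a per-cell dye signal and reporting the viable fraction; the sketch below assumes a membrane-impermeant dead-cell stain and an arbitrary cutoff, neither taken from the article.

    # Minimal sketch of a viability read-out from per-cell dye intensities.
    import numpy as np

    def viability(per_cell_dye_intensity, dead_threshold=0.5):
        dye = np.asarray(per_cell_dye_intensity)
        dead = dye > dead_threshold          # assumed: impermeant dye enters only dead cells
        return 100.0 * (1 - dead.mean())

    print(viability([0.1, 0.2, 0.7, 0.15, 0.9]))  # -> 60.0 % viable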
Automated data acquisition technology development: Automated modeling and control development
NASA Technical Reports Server (NTRS)
Romine, Peter L.
1995-01-01
This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on Texas Micro rack-mounted PCs. This research was initiated because the Metal Processing Branch of NASA Marshall Space Flight Center identified a need for a mobile data acquisition and data analysis system customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, the WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can be used to develop a RAIL function to control welding startup and shutdown without torch crashing.
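The voltage-versus-current analysis can be illustrated with a simple linear fit at fixed arc length; the numbers below are synthetic, not VPPA calibration data.

    # Illustrative fit of an arc voltage vs. welding current relationship (synthetic data).
    import numpy as np

    current_A = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
    voltage_V = np.array([21.5, 23.0, 24.6, 26.1, 27.8])

    slope, intercept = np.polyfit(current_A, voltage_V, 1)
    print(f"V ~= {slope:.3f} * I + {intercept:.2f}")  # relationship of this form could inform startup/shutdown control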
Pliasunova, S A; Balugian, R Sh; Khmel'nitskiĭ, K E; Medovyĭ, V S; Parpara, A A; Piatnitskiĭ, A M; Sokolinskiĭ, B Z; Dem'ianov, V L; Nikolaenko, D S
2006-10-01
The paper presents the results of medical tests of a group of computer-aided procedures for microscopic analysis by means of a MECOS-Ts2 complex (ZAO "MECOS", Russia), conducted at the Republican Children's Clinical Hospital, the Research Institute of Emergency Pediatric Surgery and Traumatology, and Moscow City Clinical Hospital No. 23. Computer-aided procedures for calculating the differential count and for analyzing the morphology of red blood cells were tested on blood smears from a total of 443 patients and donors; computer-aided reticulocyte counting was tested on 318 smears. The tests were carried out under the US standard NCCLS-H20A. Manual microscopy (443 smears) and flow blood analysis on a Coulter GEN*S (125 smears) were used as reference methods. The quality of sample collection and the labor required were additionally assessed. The certified MECOS-Ts2 subsystems were additionally used as reference tools. The tests indicated the advantage of computer-aided MECOS-Ts2 microscopy over manual microscopy.
Geometrical characterization of perlite-metal syntactic foam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borovinšek, Matej, E-mail: matej.borovinsek@um.si
This paper introduces an improved method for the detailed geometrical characterization of perlite-metal syntactic foam. This novel metallic foam is created by infiltrating a packed bed of expanded perlite particles with liquid aluminium alloy. The geometry of the solidified metal is thus defined by the perlite particle shape, size and morphology. The method is based on segmented micro-computed tomography data and allows for automated determination of the distributions of pore size, sphericity, orientation and location. The pore (i.e. particle) size distribution and pore orientation are determined by a multi-criteria k-nearest neighbour algorithm for pore identification. The results indicate a weak density gradient parallel to the casting direction and a slight preference of particle orientation perpendicular to the casting direction. Highlights: • A new method for the identification of pores in porous materials was developed. • It was applied to perlite-metal syntactic foam samples. • A porosity decrease in the axial direction of the samples was determined. • Pore shape analysis showed a high percentage of spherical pores. • Orientation analysis showed that more pores are oriented in the radial direction.
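The per-pore sphericity referred to above is commonly computed from pore volume and surface area; the sketch below shows that standard formula with a unit sphere as a sanity check (the study's micro-CT values are not reproduced).

    # Sphericity of a pore from its volume and surface area (1.0 for a perfect sphere).
    import numpy as np

    def sphericity(volume, surface_area):
        return (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / surface_area

    r = 1.0
    print(sphericity(4 / 3 * np.pi * r ** 3, 4 * np.pi * r ** 2))  # -> 1.0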
A novel image-based quantitative method for the characterization of NETosis
Zhao, Wenpu; Fogg, Darin K.; Kaplan, Mariana J.
2015-01-01
NETosis is a newly recognized mechanism of programmed neutrophil death. It is characterized by a stepwise progression of chromatin decondensation, membrane rupture, and release of bactericidal DNA-based structures called neutrophil extracellular traps (NETs). Conventional ‘suicidal’ NETosis has been described in pathogenic models of systemic autoimmune disorders. Recent in vivo studies suggest that a process of ‘vital’ NETosis also exists, in which chromatin is condensed and membrane integrity is preserved. Techniques to assess ‘suicidal’ or ‘vital’ NET formation in a specific, quantitative, rapid and semiautomated way have been lacking, hindering the characterization of this process. Here we have developed a new method to simultaneously assess both ‘suicidal’ and ‘vital’ NETosis, using high-speed multi-spectral imaging coupled to morphometric image analysis, to quantify spontaneous NET formation observed ex vivo or stimulus-induced NET formation triggered in vitro. Use of imaging flow cytometry allows automated, quantitative and rapid analysis of subcellular morphology and texture, and introduces the potential for further investigation using NETosis as a biomarker in pre-clinical and clinical studies. PMID:26003624
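A toy morphometric gate in the same spirit might combine nuclear area and chromatin texture; the cutoffs below are invented for illustration and are not the published classification rules.

    # Toy morphometric gate for decondensed, NETotic nuclei (illustrative thresholds only).
    import numpy as np

    def classify_netosis(nuclear_area_um2, dna_texture_contrast,
                         area_cutoff=120.0, contrast_cutoff=0.3):
        area = np.asarray(nuclear_area_um2)
        contrast = np.asarray(dna_texture_contrast)
        # Decondensed chromatin -> larger apparent nuclear area, lower texture contrast
        return (area > area_cutoff) & (contrast < contrast_cutoff)

    print(classify_netosis([80, 150, 200], [0.5, 0.2, 0.1]))  # -> [False  True  True]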
Automated analysis of cell migration and nuclear envelope rupture in confined environments.
Elacqua, Joshua J; McGregor, Alexandra L; Lammerding, Jan
2018-01-01
Recent in vitro and in vivo studies have highlighted the importance of the cell nucleus in governing migration through confined environments. Microfluidic devices that mimic the narrow interstitial spaces of tissues have emerged as important tools to study cellular dynamics during confined migration, including the consequences of nuclear deformation and nuclear envelope rupture. However, while image acquisition can be automated on motorized microscopes, the analysis of the corresponding time-lapse sequences for nuclear transit through the pores and events such as nuclear envelope rupture currently requires manual analysis. In addition to being highly time-consuming, such manual analysis is susceptible to person-to-person variability. Studies that compare large numbers of cell types and conditions therefore require automated image analysis to achieve sufficiently high throughput. Here, we present an automated image analysis program to register microfluidic constrictions and perform image segmentation to detect individual cell nuclei. The MATLAB program tracks nuclear migration over time and records constriction-transit events, transit times, transit success rates, and nuclear envelope rupture. Such automation reduces the time required to analyze migration experiments from weeks to hours, and removes the variability that arises from different human analysts. Comparison with manual analysis confirmed that both constriction transit and nuclear envelope rupture were detected correctly and reliably, and the automated analysis results closely matched a manual analysis gold standard. Applying the program to specific biological examples, we demonstrate its ability to detect differences in nuclear transit time between cells with different levels of the nuclear envelope proteins lamin A/C, which govern nuclear deformability, and to detect an increase in nuclear envelope rupture duration in cells in which CHMP7, a protein involved in nuclear envelope repair, had been depleted. The program thus presents a versatile tool for the study of confined migration and its effect on the cell nucleus.
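A conceptual version of the constriction-transit measurement, written in Python rather than the MATLAB used by the authors, could look like this; the frame interval and constriction coordinates are hypothetical.

    # Sketch of extracting a constriction-transit time from a tracked nucleus centroid.
    import numpy as np

    def transit_time(centroid_x, constriction_start, constriction_end, minutes_per_frame):
        x = np.asarray(centroid_x)
        inside = (x >= constriction_start) & (x <= constriction_end)
        if not inside.any():
            return None                      # nucleus never entered the constriction
        return inside.sum() * minutes_per_frame

    track = [2, 4, 6, 9, 11, 13, 16, 20]     # centroid x-position per frame (um), hypothetical
    print(transit_time(track, 8, 14, minutes_per_frame=2))  # -> 6 minutes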
SHARP: Automated monitoring of spacecraft health and status
NASA Technical Reports Server (NTRS)
Atkinson, David J.; James, Mark L.; Martin, R. Gaius
1991-01-01
Briefly discussed here is the spacecraft and ground systems monitoring process at the Jet Propulsion Laboratory (JPL). Some of the difficulties associated with the existing technology used in mission operations are highlighted. A new automated system based on artificial intelligence technology is described which seeks to overcome many of these limitations. The system, called the Spacecraft Health Automated Reasoning Prototype (SHARP), is designed to automate health and status analysis for multi-mission spacecraft and ground data systems operations. The system has proved effective for detecting and analyzing potential spacecraft and ground systems problems by performing real-time analysis of spacecraft and ground data systems engineering telemetry. Telecommunications link analysis of the Voyager 2 spacecraft was the initial focus for evaluation of the system in real-time operations during the Voyager encounter with Neptune in August 1989.