Science.gov

Sample records for supervised automated algorithm

  1. Validation of Supervised Automated Algorithm for Fast Quantitative Evaluation of Organ Motion on Magnetic Resonance Imaging

    SciTech Connect

    Prakash, Varuna; Stainsby, Jeffrey A.; Satkunasingham, Janakan; Craig, Tim; Catton, Charles; Chan, Philip; Dawson, Laura; Hensel, Jennifer; Jaffray, David; Milosevic, Michael; Nichol, Alan; Sussman, Marshall S.; Lockwood, Gina; Menard, Cynthia

    2008-07-15

    Purpose: To validate a correlation coefficient template-matching algorithm applied to the supervised automated quantification of abdominal-pelvic organ motion captured on time-resolved magnetic resonance imaging. Methods and Materials: Magnetic resonance images of 21 patients across four anatomic sites were analyzed. Representative anatomic points of interest were chosen as surrogates for organ motion. The point-of-interest displacements across each image frame relative to baseline were quantified manually and through the use of a template-matching software tool, termed 'Motiontrack.' Automated and manually acquired displacement measures, as well as the standard deviation of intrafraction motion, were compared for each image frame and for each patient. Results: Discrepancies of ≥2 mm between the automated and manual displacements were uncommon, ranging in frequency from 0% (liver) to 9.7% (prostate). The standard deviations of intrafraction motion measured with each method correlated highly (r = 0.99). Considerable interpatient variability in organ motion was demonstrated by a wide range of standard deviations in the liver (1.4-7.5 mm), uterus (1.1-8.4 mm), and prostate gland (0.8-2.7 mm). The automated algorithm performed successfully in all but one patient and substantially improved efficiency compared with manual quantification techniques (5 min vs. 60-90 min). Conclusion: Supervised automated quantification of organ motion captured on magnetic resonance imaging using a correlation coefficient template-matching algorithm was efficient and accurate, and it may play an important role in off-line adaptive approaches to intrafraction motion management.
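
    The abstract does not include implementation details; below is a minimal sketch of correlation-coefficient template matching for tracking one point of interest between two frames, written in NumPy (the patch size, search radius, and array names are illustrative assumptions, not taken from the study).

```python
import numpy as np

def ncc(a, b):
    """Pearson correlation coefficient between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_poi(baseline, frame, poi, half=8, search=12):
    """Displacement (rows, cols) of a point of interest between two frames."""
    r, c = poi
    template = baseline[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    best, shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            if min(r + dr - half, c + dc - half) < 0:
                continue  # search window fell off the image edge
            patch = frame[r + dr - half:r + dr + half + 1,
                          c + dc - half:c + dc + half + 1].astype(float)
            if patch.shape != template.shape:
                continue  # window clipped at the far edge
            score = ncc(patch, template)
            if score > best:
                best, shift = score, (dr, dc)
    return shift  # pixel displacement; multiply by pixel spacing for mm
```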

  2. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    PubMed

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results through both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
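
    The in-house quality features are not published in this abstract; the following is a minimal sketch of the general workflow in scikit-learn, with synthetic stand-ins for the per-volume quality features and pass/fail labels.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-volume image-quality features and pass/fail labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# RBF-kernel SVM with feature scaling; accuracy estimated by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2%}")
```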

  3. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    PubMed Central

    Pizarro, Ricardo A.; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A.; Goldman, Aaron L.; Xiao, Ena; Luo, Qian; Berman, Karen F.; Callicott, Joseph H.; Weinberger, Daniel R.; Mattay, Venkata S.

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results through both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI. PMID:28066227

  4. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods - k-fold Cross Validation, Error Bounds, and Incremental Halting Test - to estimate the accuracy of a supervised learning algorithm. For each of the three methods we point out the problem it addresses and some of the important assumptions it is based on, and we illustrate it through an example. Finally, we discuss the relative advantages and disadvantages of each method.
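
    Of the three estimators, k-fold cross-validation is the most widely used today; a brief illustration with scikit-learn follows (the dataset and classifier are placeholders, not those of the report).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# k-fold cross-validation: train on k-1 folds, test on the held-out fold,
# and report the mean held-out accuracy as the estimate.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(f"estimated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```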

  5. Automated Classification and Correlation of Drill Cores using High-Resolution Hyperspectral Images and Supervised Pattern Classification Algorithms. Applications to Paleoseismology

    NASA Astrophysics Data System (ADS)

    Ragona, D. E.; Minster, B.; Rockwell, T.; Jasso, H.

    2006-12-01

    The standard methodology to describe, classify and correlate geologic materials in the field or lab relies on physical inspection of samples, sometimes with the assistance of conventional analytical techniques (e.g. XRD, microscopy, particle size analysis). This is commonly both time-consuming and inherently subjective. Many geological materials share identical visible properties (e.g. fine grained materials, alteration minerals) and therefore cannot be mapped using the human eye alone. Recent investigations have shown that ground-based hyperspectral imaging provides an effective method to study and digitally store stratigraphic and structural data from cores or field exposures. Neural networks and Naive Bayesian classifiers supply a variety of well-established techniques for pattern recognition, especially for data examples with high-dimensionality input-outputs. In this poster, we present a new methodology for automatic mapping of sedimentary stratigraphy in the lab (drill cores, samples) or the field (outcrops, exposures) using short wave infrared (SWIR) hyperspectral images and these two supervised classification algorithms. High-spatial/spectral resolution data from large sediment samples (drill cores) from a paleoseismic excavation site were collected using a portable hyperspectral scanner with 245 continuous channels measured across the 960 to 2404 nm spectral range. The data were corrected for geometric and radiometric distortions and pre-processed to obtain reflectance at each pixel of the images. We built an example set using hundreds of reflectance spectra collected from the sediment core images. The examples were grouped into eight classes corresponding to materials found in the samples. We constructed two additional example sets by computing the 2-norm normalization and the derivative of the smoothed original reflectance examples. Each example set was divided into four subsets: training, training test, verification and validation. A multi

  6. Random forest automated supervised classification of Hipparcos periodic variable stars

    NASA Astrophysics Data System (ADS)

    Dubath, P.; Rimoldini, L.; Süveges, M.; Blomme, J.; López, M.; Sarro, L. M.; De Ridder, J.; Cuypers, J.; Guy, L.; Lecoeur, I.; Nienartowicz, K.; Jan, A.; Beck, M.; Mowlavi, N.; De Cat, P.; Lebzelter, T.; Eyer, L.

    2011-07-01

    We present an evaluation of the performance of an automated classification of the Hipparcos periodic variable stars into 26 types. The sub-sample with the most reliable variability types available in the literature is used to train supervised algorithms to characterize the type dependencies on a number of attributes. The most useful attributes evaluated with the random forest methodology include, in decreasing order of importance, the period, the amplitude, the V-I colour index, the absolute magnitude, the residual around the folded light-curve model, the magnitude distribution skewness and the amplitude of the second harmonic of the Fourier series model relative to that of the fundamental frequency. Random forests and a multi-stage scheme involving Bayesian network and Gaussian mixture methods lead to statistically equivalent results. In standard 10-fold cross-validation (CV) experiments, the rate of correct classification is between 90 and 100 per cent, depending on the variability type. The main mis-classification cases, up to a rate of about 10 per cent, arise due to confusion between SPB and ACV blue variables and between eclipsing binaries, ellipsoidal variables and other variability types. Our training set and the predicted types for the other Hipparcos periodic stars are available online.
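
    A compact sketch of the workflow described here, random forest classification with ranked attribute importances and 10-fold cross-validation, is given below; the star attributes and type labels are replaced by synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-star attributes (period, amplitude, colour, ...)
# and variability-type labels.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("10-fold CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())

# Attribute importance ranking, as used to select the most useful attributes.
rf.fit(X, y)
print("attributes ranked by importance:", np.argsort(rf.feature_importances_)[::-1])
```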

  7. AMASS: Algorithm for MSI Analysis by Semi-supervised Segmentation

    PubMed Central

    Bruand, Jocelyne; Alexandrov, Theodore; Sistla, Srinivas; Wisztorski, Maxence; Meriaux, Céline; Becker, Michael; Salzet, Michel; Fournier, Isabelle; Macagno, Eduardo; Bafna, Vineet

    2011-01-01

    Mass Spectrometric Imaging (MSI) is a molecular imaging technique that allows the generation of 2D ion density maps for a large complement of the active molecules present in cells and sectioned tissues. Automatic segmentation of such maps according to patterns of co-expression of individual molecules can be used for discovery of novel molecular signatures (molecules that are specifically expressed in particular spatial regions). However, current segmentation techniques are biased towards the discovery of higher abundance molecules and large segments; they allow limited opportunity for user interaction and validation is usually performed by similarity to known anatomical features. We describe here a novel method, AMASS (Algorithm for MSI Analysis by Semi-supervised Segmentation). AMASS relies on the discriminating power of a molecular signal instead of its intensity as a key feature, uses an internal consistency measure for validation, and allows significant user interaction and supervision as options. An automated segmentation of entire leech embryo data images resulted in segmentation domains congruent with many known organs, including heart, CNS ganglia, nephridia, nephridiopores, and lateral and ventral regions, each with a distinct molecular signature. Likewise, segmentation of a rat brain MSI slice data set yielded known brain features, and provided interesting examples of co-expression between distinct brain regions. AMASS represents a new approach for the discovery of peptide masses with distinct spatial features of expression. PMID:21800894

  8. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms form a complete statistical procedure for quantifying cell abnormalities from digitized images. The procedure could be the basis for automated detection and diagnosis of cancer. Its objective is to assign each cell an atypia status index (ASI), which quantifies the level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  9. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  10. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time-consuming and error-prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  11. Automated training for algorithms that learn from genomic data.

    PubMed

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, these updates are generally not incorporated into published machine learning algorithms, which can therefore become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.

  12. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms

    PubMed Central

    Zwartjes, Ardjan; Havinga, Paul J. M.; Smit, Gerard J. M.; Hurink, Johann L.

    2016-01-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning on the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution. PMID:27706071

  13. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.

    PubMed

    Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L

    2016-10-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning on the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.

  14. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced-size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and
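
    The central idea, holding out whole known subtypes so that test proteins are only distantly related to the training set, can be sketched with scikit-learn's group-aware splitting; the features, labels, and subtype assignments below are synthetic assumptions, not the benchmark data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins for protein feature vectors, class labels, and the
# known subtype (e.g. family) each protein belongs to.
X, y = make_classification(n_samples=600, n_features=30, random_state=0)
subtype = np.random.RandomState(0).randint(0, 12, size=600)

# Group-aware folds: every test fold holds out whole subtypes never seen in
# training, giving a harsher estimate than random k-fold cross-validation.
scores = cross_val_score(KNeighborsClassifier(), X, y,
                         cv=GroupKFold(n_splits=4), groups=subtype)
print("by-subtype cross-validated accuracy:", scores.mean())
```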

  15. Supervised and unsupervised discretization methods for evolutionary algorithms

    SciTech Connect

    Cantu-Paz, E

    2001-01-24

    This paper introduces simple model-building evolutionary algorithms (EAs) that operate on continuous domains. The algorithms are based on supervised and unsupervised discretization methods that have been used as preprocessing steps in machine learning. The basic idea is to discretize the continuous variables and use the discretization as a simple model of the solutions under consideration. The model is then used to generate new solutions directly, instead of using the usual operators based on sexual recombination and mutation. The algorithms presented here have fewer parameters than traditional and other model-building EAs. We expect that the proposed algorithms that use multivariate models will scale up better with the dimensionality of the problem than existing EAs.
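
    A minimal sketch of the general idea follows: build a per-variable histogram model from discretized values of the better solutions and sample new solutions from it. This uses plain equal-width binning on a toy objective as an illustration of discretization-based model building, not the paper's supervised or unsupervised discretization methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # toy objective to minimise
    return (x ** 2).sum(axis=1)

dim, pop, bins, elites = 5, 200, 20, 60
lo, hi = -5.0, 5.0
edges = np.linspace(lo, hi, bins + 1)
population = rng.uniform(lo, hi, size=(pop, dim))

for generation in range(50):
    fitness = sphere(population)
    best = population[np.argsort(fitness)[:elites]]        # select good solutions
    # Discretize each variable of the selected solutions, build a per-variable
    # histogram model, and sample the next population from that model.
    new = np.empty_like(population)
    for d in range(dim):
        idx = np.clip(np.digitize(best[:, d], edges) - 1, 0, bins - 1)
        probs = np.bincount(idx, minlength=bins) + 1e-9
        probs /= probs.sum()
        chosen = rng.choice(bins, size=pop, p=probs)
        new[:, d] = rng.uniform(edges[chosen], edges[chosen + 1])
    population = new

print("best objective found:", sphere(population).min())
```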

  16. A novel supervised trajectory segmentation algorithm identifies distinct types of human adenovirus motion in host cells.

    PubMed

    Helmuth, Jo A; Burckhardt, Christoph J; Koumoutsakos, Petros; Greber, Urs F; Sbalzarini, Ivo F

    2007-09-01

    Biological trajectories can be characterized by transient patterns that may provide insight into the interactions of the moving object with its immediate environment. The accurate and automated identification of trajectory motifs is important for the understanding of the underlying mechanisms. In this work, we develop a novel trajectory segmentation algorithm based on supervised support vector classification. The algorithm is validated on synthetic data and applied to the identification of trajectory fingerprints of fluorescently tagged human adenovirus particles in live cells. In virus trajectories on the cell surface, periods of confined motion, slow drift, and fast drift are efficiently detected. Additionally, directed motion is found for viruses in the cytoplasm. The algorithm enables the linking of microscopic observations to molecular phenomena that are critical in many biological processes, including infectious pathogen entry and signal transduction.

  17. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    PubMed

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A k-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that each image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's r), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions were used to quantify agreement.
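
    A schematic version of the pixel-classification step is shown below; random numbers stand in for the intensity, co-occurrence, and Gaussian-filter-bank features, and the neighbour count is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-ins: per-pixel feature vectors from training images with expert labels
# (1 = geographic atrophy, 0 = background), and features for a new image.
train_features = rng.normal(size=(5000, 12))
train_labels = rng.integers(0, 2, size=5000)
test_features = rng.normal(size=(128 * 128, 12))

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(train_features, train_labels)

# Probability map: likelihood that each pixel belongs to a GA patch.
prob_map = knn.predict_proba(test_features)[:, 1].reshape(128, 128)
segmentation = prob_map > 0.5   # threshold chosen for illustration
```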

  18. Segmentation of retinal blood vessels using a novel clustering algorithm (RACAL) with a partial supervision strategy.

    PubMed

    Salem, Sameh A; Salem, Nancy M; Nandi, Asoke K

    2007-03-01

    In this paper, segmentation of blood vessels from colour retinal images using a novel clustering algorithm with a partial supervision strategy is proposed. The proposed clustering algorithm, which is a RAdius based Clustering ALgorithm (RACAL), uses a distance based principle to map the distributions of the data by utilising the premise that clusters are determined by a distance parameter, without having to specify the number of clusters. Additionally, the proposed clustering algorithm is enhanced with a partial supervision strategy and it is demonstrated that it is able to segment blood vessels of small diameters and low contrasts. Results are compared with those from the KNN classifier and show that the proposed RACAL performs better than the KNN in the case of abnormal images, as it succeeds in segmenting small and low contrast blood vessels, while it achieves comparable results for normal images. For the automation process, RACAL can be used as a classifier, and results show that it performs better than the KNN classifier for both normal and abnormal images.

  19. Algorithms for Automated DNA Assembly

    DTIC Science & Technology

    2010-01-01

    Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets.

  20. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    PubMed

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
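
    The conclusion, that neighbourhood-aware features matter more than the classifier, can be illustrated with a toy feature construction and a logistic-regression voxel classifier; the volumes, mask, and smoothing scale below are synthetic assumptions, not the study's data.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for co-registered T1-w, T2-w and FLAIR volumes plus a manual mask.
t1, t2, flair = (rng.normal(size=(32, 32, 32)) for _ in range(3))
manual_mask = rng.integers(0, 2, size=(32, 32, 32))

def voxel_features(*volumes, size=3):
    """Stack raw intensities with local-mean features from each modality."""
    feats = []
    for v in volumes:
        feats.append(v.ravel())
        feats.append(uniform_filter(v, size=size).ravel())  # neighbourhood info
    return np.column_stack(feats)

X = voxel_features(t1, t2, flair)
y = manual_mask.ravel()

clf = LogisticRegression(max_iter=1000).fit(X, y)
lesion_probability = clf.predict_proba(X)[:, 1].reshape(t1.shape)
```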

  1. Automated extraction of the cortical sulci based on a supervised learning approach.

    PubMed

    Tu, Zhuowen; Zheng, Songfeng; Yuille, Alan L; Reiss, Allan L; Dutton, Rebecca A; Lee, Agatha D; Galaburda, Albert M; Dinov, Ivo; Thompson, Paul M; Toga, Arthur W

    2007-04-01

    It is important to detect and extract the major cortical sulci from brain images, but manually annotating these sulci is a time-consuming task and requires the labeler to follow complex protocols. This paper proposes a learning-based algorithm for automated extraction of the major cortical sulci from magnetic resonance imaging (MRI) volumes and cortical surfaces. Unlike alternative methods for detecting the major cortical sulci, which use a small number of predefined rules based on properties of the cortical surface such as the mean curvature, our approach learns a discriminative model using the probabilistic boosting tree algorithm (PBT). PBT is a supervised learning approach which selects and combines hundreds of features at different scales, such as curvatures, gradients and shape index. Our method can be applied to either MRI volumes or cortical surfaces. It first outputs a probability map which indicates how likely each voxel lies on a major sulcal curve. Next, it applies dynamic programming to extract the best curve based on the probability map and a shape prior. The algorithm has almost no parameters to tune for extracting different major sulci. It is very fast (it runs in under 1 min per sulcus including the time to compute the discriminative models) due to efficient implementation of the features (e.g., using the integral volume to rapidly compute the responses of 3-D Haar filters). Because the algorithm can be applied to MRI volumes directly, there is no need to perform preprocessing such as tissue segmentation or mapping to a canonical space. The learning aspect of our approach makes the system very flexible and general. For illustration, we use volumes of the right hemisphere with several major cortical sulci manually labeled. The algorithm is tested on two groups of data, including some brains from patients with Williams Syndrome, and the results are very encouraging.

  2. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  3. Algorithms to Automate LCLS Undulator Tuning

    SciTech Connect

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ Tm. The second field integral must be below 50 × 10⁻⁶ Tm². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail. This note describes the operation of a computer program which was written to automate LCLS undulator tuning. The algorithms used to compute the shim sizes and locations are discussed.

  4. A numeric comparison of variable selection algorithms for supervised learning

    NASA Astrophysics Data System (ADS)

    Palombo, G.; Narsky, I.

    2009-12-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community ( http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ("Add N Remove R") implemented in SPR.
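
    StatPatternRecognition is a C++ package; a comparable wrapper-style selection can be sketched in Python with scikit-learn, using plain forward selection rather than the generalized "Add N Remove R" variant and a placeholder dataset.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# Wrapper method: greedily add the variable whose inclusion most improves
# the cross-validated score of the chosen classifier.
sfs = SequentialFeatureSelector(clf, n_features_to_select=8,
                                direction="forward", cv=3)
X_subset = sfs.fit_transform(X, y)

print("selected variables:", sfs.get_support(indices=True))
print("accuracy on the subset:", cross_val_score(clf, X_subset, y, cv=3).mean())
```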

  5. Experiments on Supervised Learning Algorithms for Text Categorization

    NASA Technical Reports Server (NTRS)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and data acquisitions (ACQ), WebKB and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
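
    A present-day sketch of the SVM branch of this pipeline, on the 20 Newsgroups data mentioned in the abstract, follows (TF-IDF features plus a linear SVM; the PLS variant is omitted).

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

# TF-IDF bag-of-words features followed by a linear SVM (one-vs-rest over the
# 20 classes), a common baseline for text categorization.
model = make_pipeline(TfidfVectorizer(sublinear_tf=True, stop_words="english"),
                      LinearSVC())
model.fit(train.data, train.target)
print("test accuracy:", accuracy_score(test.target, model.predict(test.data)))
```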

  6. An evaluation of unsupervised and supervised learning algorithms for clustering landscape types in the United States

    USGS Publications Warehouse

    Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.

    2016-01-01

    Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients, lacking abrupt changes between adjacent classes, and by a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Dataset (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables, to evaluate and possibly refine the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index as well as quantization and topographic error metrics. Cross validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods can refine the existing classification.
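
    The unsupervised part of such an evaluation can be sketched as follows, with synthetic stand-ins for the seven input variables (the Dunn index is not included in scikit-learn and is omitted here).

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

# Synthetic stand-in for the seven per-cell input variables.
X, _ = make_blobs(n_samples=3000, n_features=7, centers=6, random_state=0)

for k in range(4, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "Davies-Bouldin:", round(davies_bouldin_score(X, labels), 3),
          "Silhouette:", round(silhouette_score(X, labels), 3))
```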

  7. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    PubMed Central

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  8. Automated labelling of cancer textures in colorectal histopathology slides using quasi-supervised learning.

    PubMed

    Onder, Devrim; Sarioglu, Sulen; Karacali, Bilge

    2013-04-01

    Quasi-supervised learning is a statistical learning algorithm that contrasts two datasets by computing an estimate of the posterior probability of each sample in either dataset. This method has not been applied to histopathological images before. The purpose of this study is to evaluate the performance of the method in identifying colorectal tissues with or without adenocarcinoma. Light microscopic digital images from histopathological sections were obtained from 30 colorectal radical surgery materials including adenocarcinoma and non-neoplastic regions. The texture features were extracted by using local histograms and co-occurrence matrices. The quasi-supervised learning algorithm operates on two datasets, one containing samples of normal tissues labelled only indirectly, and the other containing an unlabeled collection of samples of both normal and cancer tissues. As such, the algorithm eliminates the need for manually labelled samples of normal and cancer tissues for conventional supervised learning and significantly reduces the expert intervention. Several texture feature vector datasets corresponding to different extraction parameters were tested within the proposed framework. The Independent Component Analysis dimensionality reduction approach was also identified as the one improving the labelling performance evaluated in this series. In this series, the proposed method was applied to the dataset of 22,080 vectors with dimensionality reduced from 132 to 119. Regions containing cancer tissue could be identified accurately, with false and true positive rates of up to 19% and 88%, respectively, without using manually labelled ground-truth datasets in a quasi-supervised strategy. The resulting labelling performances were compared to that of a conventional powerful supervised classifier using manually labelled ground-truth data. The supervised classifier results were calculated as 3.5% and 95% for the same case. The results in this series in comparison with the benchmark

  9. Validation of automated supervised segmentation of multibeam backscatter data from the Chatham Rise, New Zealand

    NASA Astrophysics Data System (ADS)

    Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens

    2017-01-01

    Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.

  10. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    SciTech Connect

    Carter, Joshua A.; Agol, Eric

    2013-03-10

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ("smearing") as would be incurred with traditional algorithms; however, this is at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  11. Improved Algorithm for Automated Glucose Clamps.

    PubMed

    Kuhlenkötter, Mareike; Heise, Tim; Benesch, Carsten

    2017-02-01

    In glucose clamp experiments, blood glucose concentrations (BGs) are kept as close as possible to a predefined target level using variable glucose infusion rates (GIRs). In automated clamps, GIRs are calculated by algorithms implemented in the device (e.g., the Biostator). Low BG- and GIR-variability is needed for high clamp quality. We therefore tried to reduce oscillations in both BG and GIR with an improved algorithm implemented in ClampArt, a modern clamp device. The Biostator algorithm was first improved by numerical simulations of glucose clamps (in silico). With the results of the simulations, we started in vitro experiments using the ClampArt device and a container with water and glucose as "test subject." After a small pilot in vivo study, a larger clinical study was performed to compare the original with the optimized algorithm. With the improved algorithm, in silico, in vitro, and in vivo experiments showed reduced oscillations in both BG and GIR. In the clinical study, the coefficient of variation (CV) of BG values was lowered from 6.0% (4.6%-7.8%) [median (interquartile range)] to 4.2% (3.6%-5.0%), P < 0.0001 and the CV of GIR from 60.7% (49.6%-82.0%) to 43.5% (32.8%-57.2%), P < 0.0001. Other clamp quality parameters did not change substantially, median deviation from target slightly increased from 0.6% (0.2%-1.0%) to 1.1% (0.7%-1.5%), P = 0.0005, whereas utility did not change [97.0% (93.4%-100.0%) vs. 97.0% (94.0%-98.8%), P = 0.57]. With the improved algorithm, all experiments confirmed a reduction in BG- and GIR-oscillations without a major impact on other glucose clamp parameters. The optimized algorithm has been implemented in ClampArt for all future glucose clamp studies.

  12. A new supervised learning algorithm for spiking neurons.

    PubMed

    Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming

    2013-06-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from the other times during the running process of the neuron through adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than the existing learning methods, so it is more powerful for solving complex and real-time problems.

  13. Semi-supervised clustering algorithm for haplotype assembly problem based on MEC model.

    PubMed

    Xu, Xin-Shun; Li, Ying-Xin

    2012-01-01

    Haplotype assembly is the problem of inferring a pair of haplotypes from localized polymorphism data. In this paper, a semi-supervised clustering algorithm, SSK (semi-supervised K-means), is proposed for this problem; to our knowledge, it is the first semi-supervised clustering method applied to it. In SSK, some positive information is first extracted. This information is then used to help k-means cluster all SNP fragments into two sets from which two haplotypes can be reconstructed. The performance of SSK is tested on both real data and simulated data. The results show that it outperforms several state-of-the-art algorithms under the minimum error correction (MEC) model.
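
    The flavour of a seeded, semi-supervised k-means can be sketched as below; this is a generic seeded variant on toy vectors, not the authors' SSK operating on SNP fragments under the MEC model.

```python
import numpy as np

def seeded_kmeans(X, seed_idx, seed_labels, k=2, iters=100):
    """K-means whose centroids are initialised by a small set of labelled
    'seed' points, which stay fixed to their labels -- the semi-supervision."""
    centroids = np.array([X[seed_idx[seed_labels == c]].mean(axis=0)
                          for c in range(k)])
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        labels[seed_idx] = seed_labels          # keep supervised points fixed
        new = np.array([X[labels == c].mean(axis=0) for c in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# Toy usage: two noisy clusters with three labelled seeds per cluster.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
seed_idx = np.array([0, 1, 2, 50, 51, 52])
seed_labels = np.array([0, 0, 0, 1, 1, 1])
print(seeded_kmeans(X, seed_idx, seed_labels)[:5])
```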

  14. ALFA: an automated line fitting algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2016-03-01

    I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.
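
    A toy version of the approach is sketched below: the parameters of a synthetic spectrum built from a line catalogue are optimized against an observed spectrum, with SciPy's differential evolution standing in for ALFA's genetic algorithm; the wavelengths, widths, and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy observed spectrum: two Gaussian emission lines at catalogue wavelengths.
wav = np.linspace(6500.0, 6650.0, 600)
catalogue = [6548.0, 6583.0]
rng = np.random.default_rng(0)

def synthetic(params):
    """Synthetic spectrum from per-line amplitudes, a shared width and shift."""
    a1, a2, sigma, shift = params
    return (a1 * np.exp(-0.5 * ((wav - (catalogue[0] + shift)) / sigma) ** 2)
            + a2 * np.exp(-0.5 * ((wav - (catalogue[1] + shift)) / sigma) ** 2))

observed = synthetic([2.0, 5.0, 1.5, 0.3]) + rng.normal(0, 0.1, wav.size)

# Evolutionary optimization of the synthetic-spectrum parameters against the
# observed spectrum.
result = differential_evolution(
    lambda p: np.sum((observed - synthetic(p)) ** 2),
    bounds=[(0, 10), (0, 10), (0.5, 5), (-2, 2)], seed=0)
print("fitted amplitudes, width, shift:", result.x.round(2))
```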

  15. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT, area and the University of Pavia scene.
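
    The final, supervised step can be sketched with a Minimum Distance to the Mean classifier; random spectra stand in for the image pixels and for the mean spectra of the largest TAD-derived components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: pixel spectra of a scene, and the mean spectra of the largest
# TAD-derived background components used as the training classes.
pixels = rng.normal(size=(10000, 50))
class_means = rng.normal(size=(4, 50))

# Minimum Distance to the Mean: each pixel gets the label of the nearest mean.
dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
labels = dists.argmin(axis=1)
print("pixels per class:", np.bincount(labels))
```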

  16. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    NASA Astrophysics Data System (ADS)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use the UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the LSSVM model greatly improves and, with decreasing numbers of training samples, the generalization performance is better. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

  18. Automated Antenna Design with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time and labor intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to
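
    The evolutionary loop sketched below (a minimal real-coded genetic algorithm in Python; the encoding, fitness and parameter values are illustrative, not those used for the ST5 or TDRS-C antennas) shows the select-recombine-mutate cycle the abstract describes:

        import numpy as np

        rng = np.random.default_rng(0)

        def evolve(fitness, n_genes, pop_size=50, generations=100,
                   mutation_rate=0.05):
            """Minimal real-coded genetic algorithm (maximizes `fitness`)."""
            pop = rng.uniform(0.0, 1.0, size=(pop_size, n_genes))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                # Tournament selection of parents
                a, b = rng.integers(pop_size, size=(2, pop_size))
                parents = pop[np.where(scores[a] > scores[b], a, b)]
                # Single-point crossover between consecutive parent pairs
                cut = rng.integers(1, n_genes, size=pop_size)
                children = parents.copy()
                for i in range(0, pop_size - 1, 2):
                    c = cut[i]
                    children[i, c:] = parents[i + 1, c:]
                    children[i + 1, c:] = parents[i, c:]
                # Random mutation
                mask = rng.random(children.shape) < mutation_rate
                children[mask] = rng.uniform(0.0, 1.0, size=mask.sum())
                pop = children
            scores = np.array([fitness(ind) for ind in pop])
            return pop[np.argmax(scores)]

        # Toy usage: maximize closeness to a target genome
        best = evolve(lambda g: -np.sum((g - 0.3) ** 2), n_genes=10)

    In a real antenna run, the fitness call would invoke an electromagnetic simulator on the candidate geometry encoded by each genome.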

  19. Automated morphological classification of galaxies based on projection gradient nonnegative matrix factorization algorithm

    NASA Astrophysics Data System (ADS)

    Selim, I. M.; Abd El Aziz, Mohamed

    2017-04-01

    The development of automated morphological classification schemes can successfully distinguish between morphological types of galaxies and can be used for studies of the formation and subsequent evolution of galaxies in our universe. In this paper, we present a new automated, machine supervised learning astronomical classification scheme based on the Nonnegative Matrix Factorization algorithm. This scheme makes distinctions between all types roughly corresponding to Hubble types, such as elliptical, lenticular, spiral, and irregular galaxies. The proposed algorithm is applied to two examples with different numbers of images (the small dataset contains 110 images and the large dataset contains 700 images). The experimental results show that galaxy images from the EFIGI catalog can be classified automatically with an accuracy of ˜93% for the small dataset and ˜92% for the large dataset. These results are in good agreement with the visual classifications.
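
    A compact stand-in for the pipeline described above is shown below; it uses scikit-learn's NMF implementation (a coordinate-descent solver rather than the projection gradient variant in the paper) to derive nonnegative basis coefficients from flattened galaxy images and feeds them to a simple supervised classifier. The image and label arrays here are randomly generated placeholders:

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        # images: (n_galaxies, n_pixels) non-negative flattened postage stamps
        # labels: one Hubble-type class per galaxy (placeholder values)
        rng = np.random.default_rng(0)
        images = rng.random((110, 64 * 64))
        labels = rng.integers(0, 4, size=110)

        model = make_pipeline(
            NMF(n_components=8, init="nndsvda", max_iter=500),  # basis coefficients
            KNeighborsClassifier(n_neighbors=5),                # supervised step
        )
        print(cross_val_score(model, images, labels, cv=5).mean())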

  20. Automated morphological classification of galaxies based on projection gradient nonnegative matrix factorization algorithm

    NASA Astrophysics Data System (ADS)

    Selim, I. M.; Abd El Aziz, Mohamed

    2017-02-01

    The development of automated morphological classification schemes can successfully distinguish between morphological types of galaxies and can be used for studies of the formation and subsequent evolution of galaxies in our universe. In this paper, we present a new automated, machine supervised learning astronomical classification scheme based on the Nonnegative Matrix Factorization algorithm. This scheme makes distinctions between all types roughly corresponding to Hubble types, such as elliptical, lenticular, spiral, and irregular galaxies. The proposed algorithm is applied to two examples with different numbers of images (the small dataset contains 110 images and the large dataset contains 700 images). The experimental results show that galaxy images from the EFIGI catalog can be classified automatically with an accuracy of ˜93% for the small dataset and ˜92% for the large dataset. These results are in good agreement with the visual classifications.

  1. Automated segment matching algorithm-theory, test, and evaluation

    NASA Technical Reports Server (NTRS)

    Kalcic, M. T. (Principal Investigator)

    1982-01-01

    Results of an effort to automate the U.S. Department of Agriculture's process of segment shifting to within one-half pixel accuracy are presented. Given an initial registration, the digitized segment is shifted until a more precise fit to the LANDSAT data is found. The algorithm automates the shifting process and performs certain tests for matching and accepting the computed shift numbers. Results indicate the algorithm can obtain results within one-half pixel accuracy.
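
    The core matching step can be pictured with the sketch below (NumPy; not the original LANDSAT code, and the search radius is illustrative), which slides a digitized segment over a search window and keeps the integer shift with the highest correlation; reaching the half-pixel accuracy quoted above would additionally require interpolating around the correlation peak:

        import numpy as np

        def best_shift(segment, landsat_window, max_shift=5):
            """Return the integer (row, col) shift of `segment` that best
            matches a LANDSAT window of size (h + 2*max_shift, w + 2*max_shift),
            using the correlation coefficient as the match score."""
            h, w = segment.shape
            best, best_rc = -np.inf, (0, 0)
            for dr in range(-max_shift, max_shift + 1):
                for dc in range(-max_shift, max_shift + 1):
                    win = landsat_window[max_shift + dr:max_shift + dr + h,
                                         max_shift + dc:max_shift + dc + w]
                    r = np.corrcoef(segment.ravel(), win.ravel())[0, 1]
                    if r > best:
                        best, best_rc = r, (dr, dc)
            return best_rc, best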

  2. Adaptive Automation for Human Supervision of Multiple Uninhabited Vehicles: Effects on Change Detection, Situation Awareness, and Mental Workload

    DTIC Science & Technology

    2009-01-01

    Citation: 'Adaptive Automation for Human Supervision of Multiple Uninhabited Vehicles: Effects on Change Detection, Situation Awareness, and Mental Workload', Military Psychology, 21:2, 270-297. http://www.informaworld.com/smpp/title~content=t775653681

  3. Automated labeling of cancer textures in larynx histopathology slides using quasi-supervised learning.

    PubMed

    Onder, Devrim; Sarioglu, Sulen; Karacali, Bilge

    2014-12-01

    To evaluate the performance of a quasi-supervised statistical learning algorithm, operating on datasets containing normal and neoplastic tissues, in identifying larynx squamous cell carcinomas, and to develop cancer-versus-normal texture separability measures and compare them between colorectal and larynx tissues. Light microscopic digital images from histopathological sections were obtained from laryngectomy materials including squamous cell carcinoma and nonneoplastic regions. The texture features were calculated by using co-occurrence matrices and local histograms. The texture features were input to the quasi-supervised learning algorithm. Larynx regions containing squamous cell carcinomas were accurately identified, with false and true positive rates of up to 21% and 87%, respectively. Larynx squamous cell carcinoma versus normal tissue texture separability measures were higher than those for colorectal adenocarcinoma versus normal textures in the colorectal database. Furthermore, the resultant labeling performances for all larynx datasets were higher than or equal to those of the colorectal datasets. The results on the larynx datasets, in comparison with the former colorectal study, suggest that quasi-supervised texture classification can be a helpful method in histopathological image classification and analysis.

  4. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-06-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of 0.8 cm2 and weighs only 180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved a
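
    The model comparison described above can be reproduced in outline with scikit-learn, as in the sketch below; the feature matrix here is synthetic, standing in for the per-particle image features extracted on the servers, and the models are generic stand-ins for the classifiers the authors compared:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in: each row would describe one fluorescent spot
        # segmented from the phone-microscope image of the filter membrane.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

        models = {
            "SVM": SVC(kernel="rbf", C=1.0),
            "kNN": KNeighborsClassifier(n_neighbors=5),
            "bagged trees": BaggingClassifier(DecisionTreeClassifier(),
                                              n_estimators=100, random_state=0),
        }
        for name, model in models.items():
            score = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name:12s} accuracy = {score:.3f}")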

  5. Automated discrete element method calibration using genetic and optimization algorithms

    NASA Astrophysics Data System (ADS)

    Do, Huy Q.; Aragón, Alejandro M.; Schott, Dingena L.

    2017-06-01

    This research aims at developing a universal methodology for automated calibration of microscopic properties of modelled granular materials. The proposed calibrator can be applied for different experimental set-ups. Two optimization approaches, (1) a genetic algorithm and (2) DIRECT optimization, are used to identify discrete element method input model parameters, e.g., coefficients of sliding and rolling friction. The algorithms are used to minimize the objective function characterized by the discrepancy between the experimental macroscopic properties and the associated numerical results. Two test cases highlight the robustness, stability, and reliability of the two algorithms used for automated discrete element method calibration with different set-ups.
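
    The calibration loop amounts to minimizing a discrepancy objective over the friction coefficients. The sketch below uses SciPy's differential evolution (an evolutionary optimizer standing in for the paper's genetic algorithm and DIRECT methods); run_dem_simulation and the target values are placeholders for the real DEM code and measurements:

        import numpy as np
        from scipy.optimize import differential_evolution

        # Measured macroscopic targets, e.g. angle of repose and bulk density
        # (placeholder values).
        TARGETS = np.array([32.0, 1450.0])

        def run_dem_simulation(sliding_friction, rolling_friction):
            """Placeholder for a call into the DEM code; returns the simulated
            macroscopic properties for one parameter set."""
            return np.array([25.0 + 20.0 * sliding_friction + 5.0 * rolling_friction,
                             1500.0 - 120.0 * sliding_friction])

        def objective(params):
            simulated = run_dem_simulation(*params)
            # Relative discrepancy between simulation and experiment
            return np.sum(((simulated - TARGETS) / TARGETS) ** 2)

        result = differential_evolution(objective,
                                        bounds=[(0.0, 1.0), (0.0, 0.5)],
                                        seed=0, tol=1e-8)
        print(result.x, result.fun)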

  6. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  7. Derivation of a Novel Efficient Supervised Learning Algorithm from Cortical-Subcortical Loops

    PubMed Central

    Chandrashekar, Ashok; Granger, Richard

    2012-01-01

    Although brain circuits presumably carry out powerful perceptual algorithms, few instances of derived biological methods have been found to compete favorably against algorithms that have been engineered for specific applications. We forward a novel analysis of a subset of functions of cortical–subcortical loops, which constitute more than 80% of the human brain, thus likely underlying a broad range of cognitive functions. We describe a family of operations performed by the derived method, including a non-standard method for supervised classification, which may underlie some forms of cortically dependent associative learning. The novel supervised classifier is compared against widely used algorithms for classification, including support vector machines (SVM) and k-nearest neighbor methods, achieving corresponding classification rates – at a fraction of the time and space costs. This represents an instance of a biologically derived algorithm comparing favorably against widely used machine learning methods on well-studied tasks. PMID:22291632

  8. Ordering and finding the best of K > 2 supervised learning algorithms.

    PubMed

    Yildiz, Olcay Taner; Alpaydin, Ethem

    2006-03-01

    Given a data set and a number of supervised learning algorithms, we would like to find the algorithm with the smallest expected error. Existing pairwise tests allow a comparison of two algorithms only; range tests and ANOVA check whether multiple algorithms have the same expected error and cannot be used for finding the smallest. We propose a methodology, the MultiTest algorithm, whereby we order supervised learning algorithms taking into account 1) the result of pairwise statistical tests on expected error (what the data tells us), and 2) our prior preferences, e.g., due to complexity. We define the problem in graph-theoretic terms and propose an algorithm to find the "best" learning algorithm in terms of these two criteria, or in the more general case, order learning algorithms in terms of their "goodness." Simulation results using five classification algorithms on 30 data sets indicate the utility of the method. Our proposed method can be generalized to regression and other loss functions by using a suitable pairwise test.

  9. Automated grading of lumbar disc degeneration via supervised distance metric learning

    NASA Astrophysics Data System (ADS)

    He, Xiaoxu; Landis, Mark; Leung, Stephanie; Warrington, James; Shmuilovich, Olga; Li, Shuo

    2017-03-01

    Lumbar disc degeneration (LDD) is a common age-associated condition related to low back pain, and its consequences are responsible for over 90% of spine surgical procedures. In clinical practice, grading of LDD by inspecting MRI is a necessary step in making a suitable treatment plan. This step relies purely on physicians' manual inspection, which makes it tedious and inefficient. An automated method for grading of LDD is therefore highly desirable. However, the technical implementation faces a big challenge from class ambiguity, which is typical in medical image classification problems with a large number of classes. This challenge derives from the complexity and diversity of medical images, which lead to serious class overlapping and make it difficult to discriminate different classes. To solve this problem, we propose an automated grading approach based on supervised distance metric learning, which classifies the input discs into four class labels (0: normal, 1: slight, 2: marked, 3: severe). By learning distance metrics from labeled instances, an optimal distance metric is modeled with two attractive properties: (1) it keeps images from the same class close, and (2) it keeps images from different classes far apart. The experiments, performed on 93 subjects, demonstrated the superiority of our method with accuracy 0.9226, sensitivity 0.9655, specificity 0.9083, and F-score 0.8615. With our approach, physicians are freed from this tedious task and patients can be provided with effective treatment.

  10. Evolutionary Algorithm Based Automated Reverse Engineering and Defect Discovery

    DTIC Science & Technology

    2007-09-21

    A data mining based procedure for automated reverse engineering and defect discovery has been developed. The data mining algorithm for reverse engineering uses a genetic program (GP) as a data mining function. A GP is an evolutionary algorithm that automatically evolves populations of computer ... are used to create a fitness function for the GP, allowing GP-based data mining. This procedure incorporates not only the experts' rules into the

  11. Semi-supervised prediction of gene regulatory networks using machine learning algorithms.

    PubMed

    Patel, Nihir; Wang, Jason T L

    2015-10-01

    Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
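
    The iterative procedure for harvesting reliable negative training data can be sketched as below (Python/scikit-learn, using a random forest; this is a generic rendering of the idea, not the authors' code, and the feature arrays are hypothetical):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def iterative_negative_selection(X_pos, X_unlab, n_iter=5, take=100):
            """Grow a reliable negative set from unlabelled gene pairs.

            X_pos   -- feature vectors of known regulator-target pairs
            X_unlab -- feature vectors of unlabelled pairs
            """
            rng = np.random.default_rng(0)
            # Bootstrap: start with a random subset of the unlabelled pairs
            neg_idx = rng.choice(len(X_unlab), size=take, replace=False)
            for _ in range(n_iter):
                X = np.vstack([X_pos, X_unlab[neg_idx]])
                y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(neg_idx))])
                clf = RandomForestClassifier(n_estimators=200,
                                             random_state=0).fit(X, y)
                # Keep the pairs the model is most confident are NOT interactions
                p_pos = clf.predict_proba(X_unlab)[:, 1]
                neg_idx = np.argsort(p_pos)[: (len(neg_idx) + take)]
            return clf, neg_idx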

  12. Automated programming for bioinformatics algorithm deployment.

    PubMed

    Alterovitz, Gil; Jiwaji, Adnaan; Ramoni, Marco F

    2008-02-01

    Many bioinformatics solutions suffer from the lack of usable interface/platform from which results can be analyzed and visualized. Overcoming this hurdle would allow for more widespread dissemination of bioinformatics algorithms within the biological and medical communities. The algorithms should be accessible without extensive technical support or programming knowledge. Here, we propose a dynamic wizard platform that provides users with a Graphical User Interface (GUI) for most Java bioinformatics library toolkits. The application interface is generated in real-time based on the original source code. This platform lets developers focus on designing algorithms and biologists/physicians on testing hypotheses and analyzing results. The open source code can be downloaded from: http://bcl.med.harvard.edu/proteomics/proj/APBA/.

  13. Agent-Based Automated Algorithm Generator

    DTIC Science & Technology

    2010-01-12

    The library of D/P algorithms will be hosted in server-side agents, consisting of four types of major agents: Fault Detection and Isolation Agent (FDIA), Prognostic Agent (PA), Fusion Agent (FA), and Maintenance Mining Agent (MMA). FDI agents perform diagnostics ... manner and loosely coupled).

  14. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  15. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    SciTech Connect

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib; Chaum, Edward; Karnowski, Thomas Paul; Meriaudeau, Fabrice

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled images and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to the analysis of fundus images.
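
    The "interest regions or blobs" stage maps naturally onto a Laplacian-of-Gaussian blob detector with built-in scale selection, as in the scikit-image sketch below (illustrative only; the file name and thresholds are placeholders, and the paper's own descriptors and semi-supervised classifier would follow this step):

        from skimage import io
        from skimage.feature import blob_log

        # Microaneurysms appear as small dark blobs in the green channel.
        rgb = io.imread("fundus.png")              # hypothetical file name
        green = rgb[..., 1].astype(float) / 255.0
        inverted = 1.0 - green                     # make the blobs bright

        # Each returned row is (row, col, sigma); sigma encodes the local
        # scale selected for that blob (radius ~ sigma * sqrt(2)).
        blobs = blob_log(inverted, min_sigma=1, max_sigma=5, num_sigma=9,
                         threshold=0.05)
        print(len(blobs), "candidate regions")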

  16. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    PubMed

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier which can detect true MAs. The developed system is built using only a few manually labeled images and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to the analysis of fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Comparative Study of Algorithms for Automated Generalization of Linear Objects

    NASA Astrophysics Data System (ADS)

    Azimjon, S.; Gupta, P. K.; Sukhmani, R. S. G. S.

    2014-11-01

    Automated generalization, rooted in conventional cartography, has become an increasing concern in both the geographic information system (GIS) and mapping fields. All geographic phenomena and processes are bound to scale, as it is impossible for a human being to observe the Earth and the processes in it without decreasing its scale. To get optimal results, cartographers and map-making agencies develop sets of rules and constraints; however, these rules are still under consideration and have been the topic of much research up to the present day. Reducing map generation time and adding objectivity is possible by developing automated map generalization algorithms (McMaster and Shea, 1988). Modification of the scale is traditionally a manual process that requires the knowledge of an expert cartographer and depends on the experience of the user, which makes the process very subjective, as every user may generate a different map from the same requirements. However, automating generalization based on cartographic rules and constraints can give consistent results. Developing an automated system for map generation is also a demand of this rapidly changing world. The research that we have conducted considers only the generalization of roads, as they are one of the indispensable parts of a map. Dehradun city, Uttarakhand state of India, was selected as the study area. The study carried out a comparison of the generalization software sets, operations and algorithms currently available, and also considers the advantages and drawbacks of the existing software used worldwide. The research concludes with the development of a road network generalization tool and with the final generalized road map of the study area, which explores the use of the open source Python programming language and attempts to compare different road network generalization algorithms. Thus, the paper discusses alternative solutions for automated generalization of linear objects using GIS technologies. Research made on automated generalization of road network

  18. A Novel Classification Algorithm Based on Incremental Semi-Supervised Support Vector Machine

    PubMed Central

    Gao, Fei; Mei, Jingyuan; Sun, Jinping; Wang, Jun; Yang, Erfu; Hussain, Amir

    2015-01-01

    For current computational intelligence techniques, a major challenge is how to learn new concepts in changing environment. Traditional learning schemes could not adequately address this problem due to a lack of dynamic data selection mechanism. In this paper, inspired by human learning process, a novel classification algorithm based on incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of prediction confidence of samples and data distribution in a changing environment, a “soft-start” approach, a data selection mechanism and a data cleaning mechanism are designed, which complete the construction of our incremental semi-supervised learning system. Noticeably, with the ingenious design procedure of our proposed algorithm, the computation complexity is reduced effectively. In addition, for the possible appearance of some new labeled samples in the learning process, a detailed analysis is also carried out. The results show that our algorithm does not rely on the model of sample distribution, has an extremely low rate of introducing wrong semi-labeled samples and can effectively make use of the unlabeled samples to enrich the knowledge system of classifier and improve the accuracy rate. Moreover, our method also has outstanding generalization performance and the ability to overcome the concept drift in a changing environment. PMID:26275294
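
    A bare-bones version of the confidence-based data selection idea (for the binary case, using scikit-learn's SVC; this is a generic self-training sketch, not the authors' full system with its soft-start and data cleaning mechanisms) could look like this:

        import numpy as np
        from sklearn.svm import SVC

        def incremental_semi_supervised_svm(X_lab, y_lab, X_stream,
                                            confidence=1.0, rounds=3):
            """Add only the unlabelled samples the current SVM is confident
            about, label them with the model's prediction, and retrain."""
            X_train, y_train = X_lab.copy(), y_lab.copy()
            for _ in range(rounds):
                clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
                margin = np.abs(clf.decision_function(X_stream))
                keep = margin > confidence      # high-confidence predictions only
                if not keep.any():
                    break
                X_train = np.vstack([X_train, X_stream[keep]])
                y_train = np.concatenate([y_train, clf.predict(X_stream[keep])])
                X_stream = X_stream[~keep]      # remaining unlabelled pool
            return clf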

  19. An immune-inspired semi-supervised algorithm for breast cancer diagnosis.

    PubMed

    Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong

    2016-10-01

    Breast cancer is the most frequently diagnosed life-threatening cancer worldwide and the leading cause of cancer death among women. Early, accurate diagnosis can be a big plus in treating breast cancer. Researchers have approached this problem using various data mining and machine learning techniques such as support vector machines, artificial neural networks, etc. Computer immunology is also an intelligent method, inspired by the biological immune system, which has been successfully applied in pattern recognition, combinatorial optimization, machine learning, etc. However, most of these diagnosis methods are supervised, and it is very expensive to obtain labeled data in biology and medicine. In this paper, we seamlessly integrate state-of-the-art research in life science with artificial intelligence, and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, which are acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on those two datasets. Our experimental results demonstrate the effectiveness and efficiency of our proposed algorithm, which shows that our algorithm is a promising automatic diagnosis method for breast cancer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. [The study of medical supplies automation replenishment algorithm in hospital on medical supplies supplying chain].

    PubMed

    Sheng, Xi

    2012-07-01

    This work studies an automated replenishment algorithm for hospitals in the medical supplies supply chain. The mathematical model and algorithm for automated replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm and a partition algorithm. The automated replenishment algorithm is shown to calculate the medical supplies distribution amounts automatically and to optimize the medical supplies distribution scheme. It can be concluded that the model and algorithm based on inventory theory, if applied in the field of medical supplies circulation, can provide theoretical and technological support for realizing automated replenishment of medical supplies for hospitals in the medical supplies supply chain.

  1. The Marshall Automated Wind Algorithm for Geostationary Satellite Wind Applications

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Atkinson, Robert J.

    1998-01-01

    The Marshall Automated Wind (MAW) algorithm was developed over a decade ago in support of specialized studies of mesoscale meteorology. In recent years, the algorithm has been generalized to address global climate issues and other specific objectives related to NASA missions. The MAW algorithm uses a tracking scheme which minimizes image brightness temperature differences in a sequence of satellite images to determine feature displacement (winds). With the appropriate methodology accurate satellite derived winds can be obtained from visible, infrared, and water vapor imagery. Typical errors are less than 4 m/s but depend on the quality and control constraints used in post-processing. Key to this success is the judicious use of template size and search area used for tracking, image resolution and time sampling, and selection of appropriate statistical constraints which may vary with image type and desired application. The conference paper and subsequent poster will provide details of the technique and examples of its application.
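
    The tracking step can be illustrated with normalized cross-correlation from scikit-image, as below (the MAW algorithm itself minimizes brightness temperature differences, a closely related matching criterion); the template size, search radius, pixel spacing and time step are illustrative values:

        import numpy as np
        from skimage.feature import match_template

        def feature_wind(img_t0, img_t1, row, col, box=15, search=30,
                         km_per_pixel=4.0, dt_seconds=900.0):
            """Track one brightness-temperature feature between two images.

            A (2*box+1)-pixel template centred on (row, col) in img_t0 is
            matched against a larger search window in img_t1; the peak
            correlation gives the displacement, hence the wind vector.
            """
            tpl = img_t0[row - box:row + box + 1, col - box:col + box + 1]
            win = img_t1[row - search:row + search + 1,
                         col - search:col + search + 1]
            cc = match_template(win, tpl)                  # normalized correlation
            pr, pc = np.unravel_index(np.argmax(cc), cc.shape)
            drow = pr - (search - box)                     # displacement in pixels
            dcol = pc - (search - box)
            speed = np.hypot(drow, dcol) * km_per_pixel * 1000.0 / dt_seconds
            return (drow, dcol), speed                     # speed in m/s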

  2. A Recommendation Algorithm for Automating Corollary Order Generation

    PubMed Central

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875

  3. A recommendation algorithm for automating corollary order generation.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; McCoy, J M

    2009-11-14

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards.

  4. Algorithm of the automated choice of points of the acupuncture for EHF-therapy

    NASA Astrophysics Data System (ADS)

    Lyapina, E. P.; Chesnokov, I. A.; Anisimov, Ya. E.; Bushuev, N. A.; Murashov, E. P.; Eliseev, Yu. Yu.; Syuzanna, H.

    2007-05-01

    An algorithm for the automated choice of acupuncture points for EHF-therapy is presented. The prescription formed by the algorithm for the automated choice of points for acupunctural action is advisory in character. Clinical investigations showed that application of the developed algorithm in EHF-therapy allows the energetic state of the meridians to be normalized and many problems of organism functioning to be solved effectively.

  5. Enhancing Time-Series Detection Algorithms for Automated Biosurveillance

    PubMed Central

    Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A.

    2009-01-01

    BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14–28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data. PMID:19331728
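
    The listed modifications translate directly into a simple detector, sketched below with pandas on synthetic daily data (the SD floor, baseline length and visit adjustment follow the description above, while the exact BioSense implementation differs):

        import numpy as np
        import pandas as pd

        def alert_zscores(counts, visits, baseline_days=28, min_sd=1.0):
            """Z-score detector with a sliding baseline of up to 28 days,
            an SD floor, and an adjustment for total clinic visits used as
            a surrogate denominator."""
            # Scale each day's count to a common visit volume
            adjusted = counts * visits.mean() / visits.clip(lower=1)
            base = adjusted.shift(1)               # exclude the current day
            mean = base.rolling(baseline_days, min_periods=14).mean()
            sd = base.rolling(baseline_days, min_periods=14).std().clip(lower=min_sd)
            return (adjusted - mean) / sd

        # Synthetic daily syndrome counts and visit totals (placeholder values)
        idx = pd.date_range("2009-01-01", periods=120, freq="D")
        rng = np.random.default_rng(0)
        visits = pd.Series(rng.integers(800, 1200, size=120), index=idx, dtype=float)
        counts = pd.Series(rng.poisson(20, size=120), index=idx, dtype=float)
        z = alert_zscores(counts, visits)
        print((z > 3).sum(), "alert days at z > 3")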

  6. Enhancing time-series detection algorithms for automated biosurveillance.

    PubMed

    Tokars, Jerome I; Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A

    2009-04-01

    BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14-28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data.

  7. Comparison of Automated Flare Location Algorithm Results to Solar Truth

    NASA Astrophysics Data System (ADS)

    Plunkett, S. P.; Newmark, J. S.; Kunkel, V.; Patsourakos, S.; McMullin, D. R.; Hill, S. M.

    2008-12-01

    Accurate and timely detection of solar flares and determination of their heliocentric coordinates are key requirements for space weather forecasting. We report the results of a study to compare the results of multiple algorithms for automated determination of flare locations to "solar truth". The XFL algorithm determines flare locations in near real-time using GOES-12 SXI image data, and is triggered by GOES-12 XRS flare detections. We also consider H-alpha flare locations reported in the FLA data set, and the Latest Events (LEV) locations produced by LMSAL, based on GOES-12 SXI or SOHO EIT observations. We compare the results of each of these algorithms to solar truth heliocentric flare locations determined from analysis of GOES-12 SXI images of several hundred flares of C class and higher, during periods of high, moderate, and low solar activity between 2003 and 2006. We also compare the relative effectiveness of each of these algorithms for determining flare locations in near real-time, considering both timeliness and accuracy of the reported flare locations.

  8. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies

    PubMed Central

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M.; Aliguliyev, Ramiz M.

    2016-01-01

    Background: Summarization is a process to select important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be important as a tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, a computer-assisted assessment can be used to help teachers conduct this task more effectively. Design/Results: This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing. PMID:26735139

  9. Automatic supervised classification of multi-temporal images using the expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Chi, Junhwa; Kim, Hyun-cheol

    2017-04-01

    The impact of nonstationary phenomena is a challenging problem when analyzing multi-temporal remote sensing data. Spectral signatures are subject to change over time due to natural (e.g. seasonal phenology or environmental conditions) and disruptive impacts. For example, the same class can show quite different spectral signatures in two temporal remote sensing images. The phenomenon of evolving spectral features is referred to as spectral drift in the remote sensing community, or data shift in the machine learning community. Under the effect of spectral drift, we need to address the problem that the distributions of the training and testing sets are different, which is more difficult than single-image classification. That is, a supervised model may not be capable of explaining the testing set. In this study, we utilize the expectation-maximization algorithm to classify multi-temporal sea ice images acquired by optical remote sensing sensors. The proposed technique allows the classifier's parameters, obtained by supervised learning on a specific image, to be updated in an automatic way on the basis of the distribution of a new image to be classified.
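
    One way to realize the described parameter update is to seed a Gaussian mixture with the supervised class statistics from the first image and let EM re-fit it on the new, unlabelled image, as in the scikit-learn sketch below (a generic rendering of the idea, not the authors' exact procedure):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def em_update_classify(X_train, y_train, X_new):
            """Initialise a Gaussian mixture from the supervised class
            statistics of the first image, then let EM re-estimate the
            parameters on the unlabelled pixels of the new (drifted) image."""
            classes = np.unique(y_train)
            means = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
            weights = np.array([(y_train == c).mean() for c in classes])
            gmm = GaussianMixture(n_components=len(classes),
                                  means_init=means, weights_init=weights,
                                  covariance_type="full", max_iter=100,
                                  random_state=0)
            gmm.fit(X_new)                 # EM on the new, unlabelled image
            # Component k is assumed to still correspond to class classes[k]
            return classes[gmm.predict(X_new)]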

  10. Optimization of supervised self-organizing maps with genetic algorithms for classification of urinary calculi

    NASA Astrophysics Data System (ADS)

    Kuzmanovski, Igor; Trpkovska, Mira; Šoptrajanov, Bojan

    2005-06-01

    Supervised self-organizing maps were used for classification of 160 infrared spectra of urinary calculi composed of calcium oxalates (whewellite and weddellite), pure or in binary or ternary mixtures with carbonate apatite, struvite or uric acid. The study focused on such calculi since more than 80% of the samples analyzed contained some or all of the above-mentioned constituents. The classification was done on the basis of the infrared spectra in the 1450-450 cm-1 region. Two procedures were used to find the most suitable size of the self-organizing map and to optimize it, of which the one using genetic algorithms gave better results. Using this procedure, several sets of solutions with zero misclassifications were obtained. Thus, self-organizing maps may be considered a promising tool for qualitative analysis of urinary calculi.

  11. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    PubMed

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-08-30

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits, by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect(®) sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to

  12. Geostationary Fire Detection with the Wildfire Automated Biomass Burning Algorithm

    NASA Astrophysics Data System (ADS)

    Hoffman, J.; Schmidt, C. C.; Brunner, J. C.; Prins, E. M.

    2010-12-01

    The Wild Fire Automated Biomass Burning Algorithm (WF_ABBA), developed at the Cooperative Institute for Meteorological Satellite Studies (CIMSS), has a long legacy of operational wildfire detection and characterization. In recent years, applications of geostationary fire detection and characterization data have been expanding. Fires are detected with a contextual algorithm and when the fires meet certain conditions the instantaneous fire size, temperature, and radiative power are calculated and provided in user products. The WF_ABBA has been applied to data from Geostationary Operational Environmental Satellite (GOES)-8 through 15, Meteosat-8/-9, and Multifunction Transport Satellite (MTSAT)-1R/-2. WF_ABBA is also being developed for the upcoming platforms like GOES-R Advanced Baseline Imager (ABI) and other geostationary satellites. Development of the WF_ABBA for GOES-R ABI has focused on adapting the legacy algorithm to the new satellite system, enhancing its capabilities to take advantage of the improvements available from ABI, and addressing user needs. By its nature as a subpixel feature, observation of fire is extraordinarily sensitive to the characteristics of the sensor and this has been a fundamental part of the GOES-R WF_ABBA development work.

  13. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    NASA Astrophysics Data System (ADS)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
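
    For reference, a minimal global-best PSO of the kind described can be written in a few lines of NumPy, as below; the toy fitness function merely measures distance to a known set of constants and stands in for the spectroscopic fitness function discussed in the talk:

        import numpy as np

        def pso_minimize(fitness, bounds, n_particles=30, iters=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm optimizer (global-best topology)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(lo)
            x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
            v = np.zeros_like(x)                               # velocities
            pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()               # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([fitness(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()

        # Toy usage: recover three "rotational constants" (GHz) from a
        # purely illustrative distance-to-target fitness.
        target = np.array([3.0, 1.5, 1.0])
        best, err = pso_minimize(lambda p: np.sum((p - target) ** 2),
                                 bounds=[(0.5, 5.0)] * 3)
        print(best, err)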

  14. Appropriate training area selection for supervised texture classification by using the genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Maeda, Masaru; Arai, Kohei

    2003-03-01

    A new method for selection of appropriate training areas for supervised texture classification is proposed. In the method, genetic algorithms (GA) are employed to determine the appropriate location and size of each texture category's training area. The proposed method consists of the following procedures: 1) the number of classification categories and their kinds are determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated; the fitness is the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation in the GA, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. Some experiments are conducted to evaluate the capability of the proposed method to search for appropriate training areas, using images from Brodatz's photo album and their rotated versions. The experimental results show that the proposed method can select appropriate training areas much faster than the conventional trial-and-error method. The proposed method has also been applied to supervised texture classification of airborne multispectral scanner images. The experimental results show that the proposed method can provide appropriate training areas for reasonable classification results.

  15. How to measure metallicity from five-band photometry with supervised machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Acquaviva, Viviana

    2016-02-01

    We demonstrate that it is possible to measure metallicity from the SDSS five-band photometry to better than 0.1 dex using supervised machine learning algorithms. Using spectroscopic estimates of metallicity as ground truth, we build, optimize and train several estimators to predict metallicity. We use the observed photometry, as well as derived quantities such as stellar mass and photometric redshift, as features, and we build two sample data sets at median redshifts of 0.103 and 0.218 and median r-band magnitude of 17.5 and 18.3, respectively. We find that ensemble methods, such as random forests of trees and extremely randomized trees and support vector machines all perform comparably well and can measure metallicity with a Root Mean Square Error (RMSE) of 0.081 and 0.090 for the two data sets when all objects are included. The fraction of outliers (objects for which |Ztrue - Zpred| > 0.2 dex) is 2.2 and 3.9 per cent, respectively and the RMSE decreases to 0.068 and 0.069 if those objects are excluded. Because of the ability of these algorithms to capture complex relationships between data and target, our technique performs better than previously proposed methods that sought to fit metallicity using an analytic fitting formula, and has 3× more constraining power than SED fitting-based methods. Additionally, this method is extremely forgiving of contamination in the training set, and can be used with very satisfactory results for sample sizes of a few hundred objects. We distribute all the routines to reproduce our results and apply them to other data sets.

  16. Automated classification of female facial beauty by image analysis and supervised learning

    NASA Astrophysics Data System (ADS)

    Gunes, Hatice; Piccardi, Massimo; Jan, Tony

    2004-01-01

    The fact that perception of facial beauty may be a universal concept has long been debated amongst psychologists and anthropologists. In this paper, we performed experiments to evaluate the extent of beauty universality by asking a number of diverse human referees to grade a same collection of female facial images. Results obtained show that the different individuals gave similar votes, thus well supporting the concept of beauty universality. We then trained an automated classifier using the human votes as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry and plastic surgery.

  17. Generation of a supervised classification algorithm for time-series variable stars with an application to the LINEAR dataset

    NASA Astrophysics Data System (ADS)

    Johnston, K. B.; Oluseyi, H. M.

    2017-04-01

    With the advent of digital astronomy, new benefits and new problems have been presented to the modern day astronomer. While data can be captured in a more efficient and accurate manner using digital means, the efficiency of data retrieval has led to an overload of scientific data for processing and storage. This paper will focus on the construction and application of a supervised pattern classification algorithm for the identification of variable stars. Given the reduction of a survey of stars into a standard feature space, the problem of using prior patterns to identify new observed patterns can be reduced to time-tested classification methodologies and algorithms. Such supervised methods, so called because the user trains the algorithms prior to application using patterns with known classes or labels, provide a means to probabilistically determine the estimated class type of new observations. This paper will demonstrate the construction and application of a supervised classification algorithm on variable star data. The classifier is applied to a set of 192,744 LINEAR data points. Of the original samples, 34,451 unique stars were classified with high confidence (high level of probability of being the true class).

  18. Visualizing Global Wildfire Automated Biomass Burning Algorithm Data

    NASA Astrophysics Data System (ADS)

    Schmidt, C. C.; Hoffman, J.; Prins, E. M.

    2013-12-01

    The Wildfire Automated Biomass Burning Algorithm (WFABBA) produces fire detection and characterization from a global constellation of geostationary satellites on a realtime basis. Presentation of this data in a timely and meaningful way has been a challenge, but as hardware and software have advanced and web tools have evolved, new options have rapidly arisen. The WFABBA team at the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the Space Science Engineering Center (SSEC) have begun implementation of a web-based framework that allows a user to visualize current and archived fire data from NOAA's Geostationary Operational Environmental Satellite (GOES), EUMETSAT's Meteosat Second Generation (MSG), JMA's Multifunction Transport Satellite (MTSAT), and KMA's COMS series of satellites. User group needs vary from simple examination of the most recent data to multi-hour composites to animations, as well as saving datasets for further review. In order to maximize the usefulness of the data, a user-friendly and scaleable interface has been under development that will, when complete, allow access to approximately 18 years of WFABBA data, as well as the data produced in real-time. Implemented, planned, and potential additional features will be examined.

  19. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP will produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also desired by global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classification. Here we develop the global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2), and then aggregate the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone there are 174 training sites. The size of the sites ranges from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the number of training samples is over six million. Therefore, we developed this automated statistics-based algorithm to screen the training samples at two levels: the site and scene levels. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level. A similar screening process, but with a looser threshold, is applied at the scene level to account for the possible variance due to site differences. We do not perform the screening process across scenes because the scenes might vary due to
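
    The two-level screening idea (grouping samples by impervious fraction, then removing univariate and multivariate outliers within each group) can be sketched as follows in Python; the thresholds and array names are illustrative, not the GLS-IMP production values:

        import numpy as np
        from scipy.spatial.distance import mahalanobis

        def screen_samples(features, impervious_pct, z_max=3.0, md_max=3.5):
            """Keep-mask for training samples, screened within each 10% bin."""
            keep = np.ones(len(features), dtype=bool)
            bins = (impervious_pct // 10).astype(int)
            for b in np.unique(bins):
                idx = np.where(bins == b)[0]
                grp = features[idx]
                if len(grp) < 10:        # too few samples to screen reliably
                    continue
                # Univariate screening, band by band (z-score test)
                z = np.abs((grp - grp.mean(0)) / (grp.std(0) + 1e-9))
                uni_ok = (z < z_max).all(axis=1)
                # Multivariate screening via the Mahalanobis distance
                cov_inv = np.linalg.pinv(np.cov(grp, rowvar=False))
                mu = grp.mean(0)
                md = np.array([mahalanobis(s, mu, cov_inv) for s in grp])
                keep[idx] = uni_ok & (md < md_max)
            return keep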

  20. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    The last decade has seen widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA are 10 pixels in diameter (about 2 km in MOC-WA images) [2] or can go down to 16 pixels, or 200 m, in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is making it possible to unveil craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), followed by the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained is clearly dependent on the dimension of the craters
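
    A hedged sketch of the supervised crater/non-crater classification step, using scikit-learn's AdaBoost on crude hand-made texture statistics and synthetic patches in place of the paper's Haar-like features and HiRISE candidate regions:

```python
# Minimal illustrative sketch: candidate patches (crescent highlight/shadow
# regions) are described by simple texture statistics and labelled
# crater / non-crater with AdaBoost. Features and data are stand-ins.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def patch_features(patch):
    # crude texture descriptors of a candidate region
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), np.abs(gx).mean(), np.abs(gy).mean()]

# synthetic stand-in data: bright-left / dark-right "craters" vs flat terrain
craters = [np.hstack([rng.normal(0.8, 0.1, (16, 8)), rng.normal(0.2, 0.1, (16, 8))])
           for _ in range(200)]
terrain = [rng.normal(0.5, 0.1, (16, 16)) for _ in range(200)]
X = np.array([patch_features(p) for p in craters + terrain])
y = np.array([1] * 200 + [0] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```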

  1. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
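
    The master-slave (processor farm) idea can be illustrated with a toy Python skeleton; the one-max fitness function stands in for the expensive circuit simulation, and all names and parameter values are illustrative rather than taken from the paper:

```python
# Illustrative master-slave GA skeleton: the master performs selection and
# variation while fitness evaluations (the expensive part) are farmed out
# to worker processes. The bit-string fitness here is a placeholder.
import random
from multiprocessing import Pool

GENOME_LEN, POP_SIZE, GENERATIONS = 64, 40, 30

def fitness(genome):                      # stand-in for a circuit simulation
    return sum(genome)                    # "one-max": count the 1 bits

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

if __name__ == "__main__":
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    with Pool() as workers:               # "slave" processes in the processor farm
        for gen in range(GENERATIONS):
            scores = workers.map(fitness, pop)            # parallel evaluation
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:POP_SIZE // 2]              # truncation selection
            pop = parents + [mutate(crossover(random.choice(parents),
                                              random.choice(parents)))
                             for _ in range(POP_SIZE - len(parents))]
        print("best fitness:", max(map(fitness, pop)))
```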

  2. Evaluation of an automated single-channel sleep staging algorithm

    PubMed Central

    Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Kaplan, Richard F

    2015-01-01

    Background We previously published the performance evaluation of an automated electroencephalography (EEG)-based single-channel sleep–wake detection algorithm called Z-ALG used by the Zmachine® sleep monitoring system. The objective of this paper is to evaluate the performance of a new algorithm called Z-PLUS, which further differentiates sleep as detected by Z-ALG into Light Sleep, Deep Sleep, and Rapid Eye Movement (REM) Sleep, against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods Single night, in-lab PSG recordings from 99 subjects (52F/47M, 18–60 years, median age 32.7 years), including both normal sleepers and those reporting a variety of sleep complaints consistent with chronic insomnia, sleep apnea, and restless leg syndrome, as well as those taking selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor antidepressant medications, previously evaluated using Z-ALG were re-examined using Z-PLUS. EEG data collected from electrodes placed at the differential-mastoids (A1–A2) were processed by Z-ALG to determine wake and sleep, then those epochs detected as sleep were further processed by Z-PLUS to differentiate into Light Sleep, Deep Sleep, and REM. EEG data were visually scored by multiple certified polysomnographic technologists according to the Rechtschaffen and Kales criterion, and then combined using a majority-voting rule to create a PSG Consensus score file for each of the 99 subjects. Z-PLUS output was compared to the PSG Consensus score files for both epoch-by-epoch (eg, sensitivity, specificity, and kappa) and sleep stage-related statistics (eg, Latency to Deep Sleep, Latency to REM, Total Deep Sleep, and Total REM). Results Sensitivities of Z-PLUS compared to the PSG Consensus were 0.84 for Light Sleep, 0.74 for Deep Sleep, and 0.72 for REM. Similarly, positive predictive values were 0.85 for Light Sleep, 0.78 for Deep Sleep, and 0.73 for REM. Overall, kappa agreement of 0
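
    The epoch-by-epoch comparison can be sketched as follows, assuming one stage label per 30-second epoch; the stage names and simulated labels below are placeholders, not Z-PLUS output:

```python
# Sketch of epoch-by-epoch agreement: per-stage sensitivity, positive
# predictive value, and Cohen's kappa between automated and consensus labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

STAGES = ["Wake", "Light", "Deep", "REM"]

def epoch_metrics(consensus, automated):
    cm = confusion_matrix(consensus, automated, labels=STAGES)
    for i, stage in enumerate(STAGES):
        tp = cm[i, i]
        sens = tp / cm[i, :].sum() if cm[i, :].sum() else float("nan")
        ppv = tp / cm[:, i].sum() if cm[:, i].sum() else float("nan")
        print(f"{stage:5s}  sensitivity={sens:.2f}  PPV={ppv:.2f}")
    print("kappa =", round(cohen_kappa_score(consensus, automated), 3))

# illustrative simulated labels standing in for real scorer/algorithm output
rng = np.random.default_rng(1)
truth = rng.choice(STAGES, size=900, p=[0.2, 0.5, 0.15, 0.15])
noisy = np.where(rng.random(900) < 0.8, truth, rng.choice(STAGES, size=900))
epoch_metrics(truth, noisy)
```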

  3. An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.

    2003-01-01

    An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
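
    A minimal sketch of an in/out-of-cloud decision with a hysteresis constraint, under the assumption that a single cloud-physics indicator and fixed thresholds suffice; the parameter values are illustrative, not those used in the validated algorithm:

```python
# A state change is only accepted after the indicator has stayed on the other
# side of its threshold for `hold` consecutive samples, so transient cloud
# puffs or data dropouts do not register as cloud edges.
def detect_cloud_edges(droplet_conc, threshold=10.0, hold=5):
    """droplet_conc: per-sample cloud-physics indicator (e.g. droplets/cm^3)."""
    edges, state, run = [], droplet_conc[0] >= threshold, 0
    for i, value in enumerate(droplet_conc):
        candidate = value >= threshold
        run = run + 1 if candidate != state else 0
        if run >= hold:                      # hysteresis satisfied: commit the change
            state, run = candidate, 0
            edges.append((i - hold + 1, "entry" if state else "exit"))
    return edges

# usage: edges = detect_cloud_edges(samples); each edge is (index, "entry"/"exit")
```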

  4. Automated cell analysis tool for a genome-wide RNAi screen with support vector machine based supervised learning

    NASA Astrophysics Data System (ADS)

    Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen

    2011-03-01

    RNAi-based high-throughput microscopy screens have become an important tool in biological sciences in order to decrypt mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens since the amount of image data sets can often be in the hundred thousands. Reliable automated tools are thus required to analyse the fluorescence microscopy image data sets usually containing two or more reaction channels. The herein presented image analysis tool is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issue of the image processing task is an automatic cell segmentation which has to be robust and accurate for all different phenotypes and a successive phenotype classification. The cell segmentation is done in two steps by segmenting the cell nuclei first and then using a classifier-enhanced region growing on basis of the cell nuclei to segment the cells. The classification of the cells is realized by a support vector machine which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant allowing different staining quality and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied for an RNAi-screen containing three hundred thousand image data sets and the SVM extended version is designed for additional screens.

  5. High Throughput Light Absorber Discovery, Part 1: An Algorithm for Automated Tauc Analysis.

    PubMed

    Suram, Santosh K; Newhouse, Paul F; Gregoire, John M

    2016-11-14

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. The applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by automated algorithm for 60 optical spectra.
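
    A simplified Tauc-extrapolation sketch, assuming an absorbance-like spectrum on an energy grid and a deliberately crude linear-region heuristic (steepest window); it is not the published algorithm, which mimics expert judgment more closely. Here r = 1/2 corresponds to direct-allowed transitions:

```python
# Plot (alpha * h*nu)^(1/r) versus photon energy, fit the steepest
# quasi-linear window, and take its energy-axis intercept as the band gap.
import numpy as np

def tauc_band_gap(energy_eV, absorbance, r=0.5, window=15):
    y = (absorbance * energy_eV) ** (1.0 / r)           # Tauc ordinate
    best = None
    for i in range(len(energy_eV) - window):
        e, t = energy_eV[i:i + window], y[i:i + window]
        slope, intercept = np.polyfit(e, t, 1)
        if slope <= 0:
            continue
        gap = -intercept / slope                         # x-intercept of the fit
        if best is None or slope > best[0]:              # keep the steepest segment
            best = (slope, gap)
    return None if best is None else best[1]

# synthetic spectrum with a ~2.1 eV direct gap as a quick sanity check
rng = np.random.default_rng(0)
E = np.linspace(1.5, 3.5, 200)
alpha = np.sqrt(np.clip(E - 2.1, 0, None)) / E + 0.01 * rng.random(200)
print("estimated direct gap (eV):", round(tauc_band_gap(E, alpha), 2))
```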

  6. High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis

    DOE PAGES

    Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.

    2016-09-23

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by automated algorithm for 60 optical spectra.

  7. Effects of automation and task load on task switching during human supervision of multiple semi-autonomous robots in a dynamic environment.

    PubMed

    Squire, P N; Parasuraman, R

    2010-08-01

    The present study assessed the impact of task load and level of automation (LOA) on task switching in participants supervising a team of four or eight semi-autonomous robots in a simulated 'capture the flag' game. Participants were faster when repeating the same task than when they chose to switch between different task actions. They also took longer to switch between different tasks when supervising the robots at a high compared to a low LOA. Task load, as manipulated by the number of robots to be supervised, did not influence switch costs. The results suggest that the design of future unmanned vehicle (UV) systems should take into account not simply how many UVs an operator can supervise, but also the impact of LOA and task operations on task switching during supervision of multiple UVs. The findings of this study are relevant for the ergonomics practice of UV systems. This research extends the cognitive theory of task switching to inform the design of UV systems, and the results show that switching between UVs is an important factor to consider.

  8. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    SciTech Connect

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  9. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  11. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification
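
    A generic bag-of-words plus binary-relevance sketch on synthetic series, standing in for the medical records and the supervised dimensionality reduction studied in the paper; all sizes, names, and model choices are illustrative:

```python
# Quantize short subsequences against a k-means codebook, turn each record
# into a codeword histogram, and train one classifier per label (binary
# relevance) on the histograms. Data here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def windows(series, width=10, step=5):
    return np.array([series[i:i + width] for i in range(0, len(series) - width, step)])

# synthetic records: 200 univariate time series with 3 binary labels each
records = [rng.normal(size=300).cumsum() for _ in range(200)]
labels = rng.integers(0, 2, size=(200, 3))

codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack([windows(r) for r in records]))

def bow_histogram(series):
    words = codebook.predict(windows(series))
    return np.bincount(words, minlength=codebook.n_clusters) / len(words)

X = np.array([bow_histogram(r) for r in records])
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, labels)
print("exact-match (subset) accuracy on training data:", clf.score(X, labels))
```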

  12. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection.

    PubMed

    Saleh, Marwan D; Eswaran, C; Mueen, Ahmed

    2011-08-01

    This paper focuses on the detection of retinal blood vessels, which plays a vital role in reducing proliferative diabetic retinopathy and preventing the loss of visual capability. The proposed algorithm, which takes advantage of powerful preprocessing techniques such as contrast enhancement and thresholding, offers an automated segmentation procedure for retinal blood vessels. To evaluate the performance of the new algorithm, experiments are conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm performs better than the other known algorithms in terms of accuracy. Furthermore, the proposed algorithm, being simple and easy to implement, is well suited for fast processing applications.
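
    A rough sketch of a contrast-enhancement-plus-thresholding vessel segmentation in the same spirit (CLAHE-style equalization, background removal, automatic Otsu threshold); the DRIVE path and tuning values are placeholders and this is not the authors' exact algorithm:

```python
# Green channel -> adaptive histogram equalization -> background subtraction
# via median filtering -> Otsu threshold on the resulting vesselness map.
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure, filters

def segment_vessels(rgb_image):
    green = rgb_image[:, :, 1].astype(float) / 255.0     # vessels contrast best in green
    enhanced = exposure.equalize_adapthist(green, clip_limit=0.03)
    background = median_filter(enhanced, size=25)         # estimate slowly varying background
    vessels = np.clip(background - enhanced, 0, None)     # vessels are darker than background
    return vessels > filters.threshold_otsu(vessels)

# usage (path is hypothetical):
# mask = segment_vessels(skimage.io.imread("DRIVE/training/images/21_training.tif"))
```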

  13. Automated maneuver planning using a fuzzy logic algorithm

    NASA Technical Reports Server (NTRS)

    Conway, D.; Sperling, R.; Folta, D.; Richon, K.; Defazio, R.

    1994-01-01

    Spacecraft orbital control requires intensive interaction between the analyst and the system used to model the spacecraft trajectory. For orbits with tight mission constraints and a large number of maneuvers, this interaction is difficult or expensive to accomplish in a timely manner. Some automation of maneuver planning can reduce these difficulties for maneuver-intensive missions. One approach to this automation is to use fuzzy logic in the control mechanism. Such a prototype system, currently under development, is discussed. The Tropical Rainfall Measurement Mission (TRMM) is one of several missions that could benefit from automated maneuver planning. TRMM is scheduled for launch in August 1997. The spacecraft is to be maintained in a 350-km circular orbit throughout the 3-year lifetime of the mission, with only very small variations in this orbit allowed. Since solar maximum will occur as early as 1999, solar activity during the TRMM mission will be increasing. The increasing solar activity will result in orbital maneuvers being performed as often as every other day. The results of automated maneuver planning for the TRMM mission will be presented to demonstrate the prototype of the fuzzy logic tool.

  14. Progress on the development of automated data analysis algorithms and software for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.

    2014-02-01

    Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.

  15. Comparison of automated treponemal and nontreponemal test algorithms as first-line syphilis screening assays.

    PubMed

    Huh, Hee Jin; Chung, Jae Woo; Park, Seong Yeon; Chae, Seok Lae

    2016-01-01

    Automated Mediace Treponema pallidum latex agglutination (TPLA) and Mediace rapid plasma reagin (RPR) assays are used by many laboratories for syphilis diagnosis. This study compared the results of the traditional syphilis screening algorithm and a reverse algorithm using automated Mediace RPR or Mediace TPLA as first-line screening assays in subjects undergoing a health checkup. Samples from 24,681 persons were included in this study. We routinely performed Mediace RPR and Mediace TPLA simultaneously. Results were analyzed according to both the traditional algorithm and reverse algorithm. Samples with discordant results on the reverse algorithm (e.g., positive Mediace TPLA, negative Mediace RPR) were tested with Treponema pallidum particle agglutination (TPPA). Among the 24,681 samples, 30 (0.1%) were found positive by traditional screening, and 190 (0.8%) by reverse screening. The identified syphilis rate and overall false-positive rate according to the traditional algorithm were lower than those according to the reverse algorithm (0.07% and 0.05% vs. 0.64% and 0.13%, respectively). A total of 173 discordant samples were tested with TPPA by using the reverse algorithm, of which 140 (80.9%) were TPPA positive. Despite the increased false-positive results in populations with a low prevalence of syphilis, the reverse algorithm detected 140 samples with treponemal antibody that went undetected by the traditional algorithm. The reverse algorithm using Mediace TPLA as a screening test is more sensitive for the detection of syphilis.
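
    The two screening strategies reduce to a simple decision flow, sketched below with boolean test results; the wording of the returned categories is illustrative, and the qualitative results are assumed to be already reduced to booleans upstream:

```python
# Decision-flow sketch of traditional (nontreponemal-first) versus reverse
# (treponemal-first) syphilis screening as compared in the study.
def traditional_algorithm(rpr_positive, tpla_positive):
    """Nontreponemal (RPR) first; reactive samples reflex to a treponemal test."""
    if not rpr_positive:
        return "negative"
    return "syphilis (past or present)" if tpla_positive else "biological false positive"

def reverse_algorithm(tpla_positive, rpr_positive, tppa_positive=None):
    """Treponemal (TPLA) first; discordant samples reflex to TPPA."""
    if not tpla_positive:
        return "negative"
    if rpr_positive:
        return "syphilis (past or present)"
    if tppa_positive is None:
        return "discordant: confirm with TPPA"
    return "treponemal antibody confirmed" if tppa_positive else "likely false-positive TPLA"

# e.g. reverse_algorithm(tpla_positive=True, rpr_positive=False, tppa_positive=True)
```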

  16. Development of an algorithm for automated enhancement of digital prototypes in machine engineering

    NASA Astrophysics Data System (ADS)

    Sokolova, E. A.; Dzhioev, G. A.

    2017-02-01

    The paper deals with the problem of processing digital prototypes in machine engineering using modern approaches to computer vision, methods of taxonomy (a branch of decision theory), and the automation of manual retouching techniques. Different taxonomic methods have been considered, among which the reference method has been chosen as the most appropriate for the automated search of defective areas of the prototype. As a result, an algorithm for the automated enhancement of digital prototypes has been developed using modern information technologies.

  17. Automated algorithm for CBCT-based dose calculations of prostate radiotherapy with bilateral hip prostheses.

    PubMed

    Almatani, Turki; Hugtenburg, Richard P; Lewis, Ryan D; Barley, Susan E; Edwards, Mark A

    2016-10-01

    Cone beam CT (CBCT) images contain more scatter than a conventional CT image and therefore provide inaccurate Hounsfield units (HUs). Consequently, CBCT images cannot be used directly for radiotherapy dose calculation. The aim of this study is to enable dose calculations to be performed with CBCT images taken during radiotherapy and to evaluate the necessity of replanning. A patient with prostate cancer with bilateral metallic prosthetic hip replacements was imaged using both CT and CBCT. The multilevel threshold (MLT) algorithm was used to categorize pixel values in the CBCT images into segments of homogeneous HU. The variation in HU with position in the CBCT images was taken into consideration. This segmentation method relies on the operator dividing the CBCT data into a set of volumes where the variation in the relationship between pixel values and HUs is small. An automated MLT algorithm was developed to reduce the operator time associated with the process. An intensity-modulated radiation therapy plan was generated from CT images of the patient. The plan was then copied to the segmented CBCT (sCBCT) data sets with identical settings, and the doses were recalculated and compared. Gamma evaluation showed that the percentages of points in the rectum with γ < 1 (3%/3 mm) were 98.7% and 97.7% for the sCBCT using the MLT and the automated MLT algorithms, respectively. Compared with the planning CT (pCT) plan, the MLT algorithm showed a -0.46% dose difference with 8 h of operator time, while the automated MLT algorithm showed -1.3%; both are considered clinically acceptable when using the collapsed cone algorithm. The segmentation of CBCT images using the method in this study can be used for dose calculation. For a patient with prostate cancer with bilateral hip prostheses and the associated issues with CT imaging, the MLT algorithms achieved a dose calculation accuracy that is clinically acceptable. The automated MLT algorithm reduced the

  18. Predicting pupylation sites in prokaryotic proteins using semi-supervised self-training support vector machine algorithm.

    PubMed

    Ju, Zhe; Gu, Hong

    2016-08-15

    As one important post-translational modification of prokaryotic proteins, pupylation plays a key role in regulating various biological processes. The accurate identification of pupylation sites is crucial for understanding the underlying mechanisms of pupylation. Although several computational methods have been developed for the identification of pupylation sites, the prediction accuracy of them is still unsatisfactory. Here, a novel bioinformatics tool named IMP-PUP is proposed to improve the prediction of pupylation sites. IMP-PUP is constructed on the composition of k-spaced amino acid pairs and trained with a modified semi-supervised self-training support vector machine (SVM) algorithm. The proposed algorithm iteratively trains a series of support vector machine classifiers on both annotated and non-annotated pupylated proteins. Computational results show that IMP-PUP achieves the area under receiver operating characteristic curves of 0.91, 0.73, and 0.75 on our training set, Tung's testing set, and our testing set, respectively, which are better than those of the different error costs SVM algorithm and the original self-training SVM algorithm. Independent tests also show that IMP-PUP significantly outperforms three other existing pupylation site predictors: GPS-PUP, iPUP, and pbPUP. Therefore, IMP-PUP can be a useful tool for accurate prediction of pupylation sites. A MATLAB software package for IMP-PUP is available at https://juzhe1120.github.io/. Copyright © 2016 Elsevier Inc. All rights reserved.
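
    A generic self-training SVM loop in the spirit of the approach (not the IMP-PUP implementation or its k-spaced amino-acid-pair features); the confidence threshold and toy data are assumptions:

```python
# Train on labelled data, promote the most confident predictions on the
# unlabelled pool to pseudo-labels, and repeat for a few rounds.
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, rounds=5, confidence=0.9):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf = SVC(kernel="rbf", probability=True).fit(X, y)
        proba = clf.predict_proba(pool)
        sure = proba.max(axis=1) >= confidence            # confident pseudo-labels only
        if not sure.any():
            break
        X = np.vstack([X, pool[sure]])
        y = np.concatenate([y, clf.classes_[proba[sure].argmax(axis=1)]])
        pool = pool[~sure]
    return SVC(kernel="rbf", probability=True).fit(X, y)

# toy usage with synthetic two-class data
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(-2, 1, (20, 4)), rng.normal(2, 1, (20, 4))])
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = np.vstack([rng.normal(-2, 1, (200, 4)), rng.normal(2, 1, (200, 4))])
model = self_training_svm(X_lab, y_lab, X_unlab)
print("classes learned:", model.classes_)
```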

  19. Automated isotope identification algorithm using artificial neural networks

    DOE PAGES

    Kamuda, Mark; Stinnett, Jacob; Sullivan, Clair

    2017-04-12

    There is a need to develop an algorithm that can determine the relative activities of radio-isotopes in a large dataset of low-resolution gamma-ray spectra that contains a mixture of many radio-isotopes. Low-resolution gamma-ray spectra that contain mixtures of radio-isotopes often exhibit feature overlap, requiring algorithms that can analyze these features when overlap occurs. While machine learning and pattern recognition algorithms have shown promise for the problem of radio-isotope identification, their ability to identify and quantify mixtures of radio-isotopes has not been studied. Because machine learning algorithms use abstract features of the spectrum, such as the shape of overlapping peaks and the Compton continuum, they are a natural choice for analyzing radio-isotope mixtures. An artificial neural network (ANN) has been trained to calculate the relative activities of 32 radio-isotopes in a spectrum. Furthermore, the ANN is trained with simulated gamma-ray spectra, allowing easy expansion of the library of target radio-isotopes. In this paper we present our initial algorithms based on an ANN and evaluate them against a series of measured and simulated spectra.
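
    A toy analogue of the approach: an off-the-shelf MLP regressor trained on crudely simulated spectra (a few Gaussian photopeaks on an exponential continuum) to recover relative activities; the peak positions, network size, and spectrum model are all illustrative, not the published network:

```python
# Train a small multi-output regressor to map a normalized low-resolution
# spectrum to the relative activities of a handful of stand-in isotopes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
CHANNELS = np.arange(256)
PEAKS = [40, 90, 150, 210]                    # stand-in photopeak channels

def simulate_spectrum(activities):
    continuum = 5.0 * np.exp(-CHANNELS / 120.0)            # crude Compton continuum
    peaks = sum(a * np.exp(-0.5 * ((CHANNELS - p) / 4.0) ** 2)
                for a, p in zip(activities, PEAKS))
    counts = rng.poisson(20 * (continuum + peaks))
    return counts / counts.sum()                            # normalize the spectrum

acts = rng.dirichlet(np.ones(len(PEAKS)), size=2000)        # relative activities
X = np.array([simulate_spectrum(a) for a in acts])
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X[:1800], acts[:1800])
print("mean abs. error on held-out mixtures:",
      np.abs(net.predict(X[1800:]) - acts[1800:]).mean().round(3))
```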

  20. Normalized Cut Algorithm for Automated Assignment of Protein Domains

    NASA Technical Reports Server (NTRS)

    Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present a novel computational method for the automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful in image-partitioning applications. This graph-theory based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly connected components. A computer implementation of our method, tested on the standard comparison set of proteins from the literature, shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run time with respect to the size of the protein, and reduced complexity compared to other graph-theory based algorithms, make it an attractive tool for structural biologists.
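
    The core idea can be sketched with a residue-contact graph and an off-the-shelf normalized-cut style spectral clustering as a stand-in for the authors' implementation; the synthetic coordinates and the 8 Å contact cutoff are assumptions:

```python
# Build a residue-contact graph from C-alpha coordinates and split it into
# densely intra-connected parts with spectral (normalized-cut style) clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# two synthetic compact "domains" of 60 residues each
coords = np.vstack([rng.normal(0, 3, (60, 3)),
                    rng.normal([10, 0, 0], 3, (60, 3))])

dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
contacts = (dists < 8.0).astype(float)        # 8 A contact map used as graph affinity
np.fill_diagonal(contacts, 0.0)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            assign_labels="discretize",
                            random_state=0).fit_predict(contacts)
print("residues assigned to domain 0:", int((labels == 0).sum()))
```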

  1. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  2. An automated algorithm for the generation of dynamically reconstructed trajectories

    NASA Astrophysics Data System (ADS)

    Komalapriya, C.; Romano, M. C.; Thiel, M.; Marwan, N.; Kurths, J.; Kiss, I. Z.; Hudson, J. L.

    2010-03-01

    The lack of long enough data sets is a major problem in the study of many real world systems. As it has been recently shown [C. Komalapriya, M. Thiel, M. C. Romano, N. Marwan, U. Schwarz, and J. Kurths, Phys. Rev. E 78, 066217 (2008)], this problem can be overcome in the case of ergodic systems if an ensemble of short trajectories is available, from which dynamically reconstructed trajectories can be generated. However, this method has some disadvantages which hinder its applicability, such as the need for estimation of optimal parameters. Here, we propose a substantially improved algorithm that overcomes the problems encountered by the former one, allowing its automatic application. Furthermore, we show that the new algorithm not only reproduces the short term but also the long term dynamics of the system under study, in contrast to the former algorithm. To exemplify the potential of the new algorithm, we apply it to experimental data from electrochemical oscillators and also to analyze the well-known problem of transient chaotic trajectories.

  3. An algorithm for automated layout of process description maps drawn in SBGN.

    PubMed

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  4. An algorithm for automated layout of process description maps drawn in SBGN

    PubMed Central

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Motivation: Evolving technology has increased the focus on genomics. The combination of today’s advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. Results: We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26363029

  5. Non-Algorithmic Issues in Automated Computational Mechanics

    DTIC Science & Technology

    1991-04-30

    ... is the h-p adaptive method. The advantage of this approach is that while conventional FEs can provide only algebraic rates of convergence, ... be generally superior to the others. Thus, proper selection of the algorithm for a specific problem can yield better results, often with less ... usually analyzed by the finite element method (or any other numerical method). This stage of the design process usually amounts to massive algebraic

  6. Automation Middleware and Algorithms for Robotic Underwater Sensor Networks

    DTIC Science & Technology

    2010-09-30

    ... international ROV competition with the capability of underwater manipulation, as shown in Figure 4b. RESULTS: 1. We noticed similar behaviors for all three ... underwater robot control systems. We developed a testbed that includes ROVs and AUVs to test this system. 2. ... [Figure 1: Similar behaviors of the ...; Figure 3: Demonstrations of three-dimensional cooperative exploration algorithms.] We have developed a remotely operated vehicle (ROV) that has won a

  7. Algorithm for Automated Detection of Edges of Clouds

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2006-01-01

    An algorithm processes cloud-physics data gathered in situ by an aircraft, along with reflectivity data gathered by ground-based radar, to determine whether the aircraft is inside or outside a cloud at a given time. A cloud edge is deemed to be detected when the in/out state changes, subject to a hysteresis constraint. Such determinations are important in continuing research on relationships among lightning, electric charges in clouds, and decay of electric fields with distance from cloud edges.

  8. An algorithm for automated identification of fault zone trapped waves

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.

    2015-08-01

    We develop an algorithm for automatic identification of fault zone trapped waves in data recorded by seismic fault zone arrays. Automatic S picks are used to identify time windows in the seismograms for subsequent search for trapped waves. The algorithm calculates five features in each seismogram recorded by each station: predominant period, 1 s duration energy (representative of trapped waves), relative peak strength, arrival delay and 6 s duration energy (representative of the entire seismogram). These features are used collectively to identify stations in the array with seismograms that are statistical outliers. Applying the algorithm to large data sets allows for distinguishing genuine trapped waves from occasional localized site amplification in seismograms of other stations. The method is verified on a test data set recorded across the rupture zone of the 1992 Landers earthquake, for which trapped waves were previously identified manually, and is then applied to a larger data set with several thousand events recorded across the San Jacinto fault zone. The developed technique provides an important tool for systematic objective processing of large seismic waveform data sets recorded near fault zones.
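
    A hedged sketch of the outlier-flagging step, assuming the five per-station features have already been extracted upstream; the robust z-score combination and threshold are illustrative choices, not the published statistics:

```python
# Convert each station's features to robust (MAD-based) z-scores across the
# array and flag stations whose combined score stands out as candidate
# trapped-wave recordings for one event.
import numpy as np

def flag_trapped_wave_stations(feature_table, z_thresh=2.5):
    """feature_table: (n_stations, 5) array of per-station waveform features."""
    med = np.median(feature_table, axis=0)
    mad = np.median(np.abs(feature_table - med), axis=0) + 1e-12
    robust_z = 0.6745 * (feature_table - med) / mad        # MAD-based z-scores
    combined = np.linalg.norm(robust_z, axis=1)             # one score per station
    return np.where(combined > z_thresh * np.sqrt(feature_table.shape[1]))[0]

# usage: candidate_stations = flag_trapped_wave_stations(features_for_one_event)
```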

  9. The effects of automated artifact removal algorithms on electroencephalography-based Alzheimer's disease diagnosis.

    PubMed

    Cassani, Raymundo; Falk, Tiago H; Fraga, Francisco J; Kanda, Paulo A M; Anghinah, Renato

    2014-01-01

    Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system "semi-automated." Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment.

  10. The effects of automated artifact removal algorithms on electroencephalography-based Alzheimer's disease diagnosis

    PubMed Central

    Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato

    2014-01-01

    Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886

  11. Automated mineral identification algorithm using optical properties of crystals

    NASA Astrophysics Data System (ADS)

    Aligholi, Saeed; Khajavi, Reza; Razmara, Morteza

    2015-12-01

    A method has been developed to automatically characterize the type of mineral phases by means of digital image analysis using the optical properties of crystals. The method relies on microscope automation, digital image acquisition, and image processing and analysis. Two hundred series of digital images were taken from 45 standard thin sections using a digital camera mounted on a conventional microscope and then transmitted to a computer. The CIELab color space is selected for the processing in order to effectively employ its well-defined color difference metric for introducing an appropriate color-based feature. Seven basic optical properties of minerals (A. color; B. pleochroism; C. interference color; D. birefringence; E. opacity; F. isotropy; G. extinction angle) are redefined. The Local Binary Pattern (LBP) operator and texture modeling are integrated in the Mineral Identification (MI) scheme to identify homogeneous regions in microscopic images of minerals. The accuracy of mineral identification using the method was 99%, 98%, 96% and 95% for biotite, hornblende, quartz and calcite, respectively. The method is applicable to other minerals and phases for which the individual optical properties of crystals do not provide enough discrimination between the relevant phases. On the basis of this research, it can be concluded that if the CIELab color space and the local binary pattern (LBP) are applied, it is possible to recognize the mineral samples with an accuracy of more than 98%.

  12. [Lupus anticoagulant: complete automation (ACL-Futura) and diagnostic algorithm].

    PubMed

    Aulesa, C; Tusell, J; Ortega, J J; Sentis, M

    1999-12-01

    We present the technical and clinical evaluation of a new six-test laboratory profile for the lupus anticoagulant assay performed on the ACL-Futura analyzer (Instrumentation Laboratory). The within- and between-day imprecision of the tests that compose the profile (APTT-Diluted, APTT-D Mix, LacScreen, LacScreen Mix, LacConfirm and LacConfirm Mix) ranges between 2.87% and 11.61% with controls; imprecision is lower with patient samples. A study of bilirubin and lipemia interferences is presented. The practicability study addresses the technical difficulty and time required, and shows that the cost of the screening test is about 1136 ptas (6.83 euros), rising to 2766 ptas (16.62 euros) with the confirmatory test. The clinical study describes our preliminary results from applying this new profile for almost 2 years, and six clinical cases are presented. The good technical and clinical results of the evaluation of the new profile proposed to detect lupus anticoagulant, together with the fully automated assay on the ACL-Futura analyzer, validate the method as a whole for meeting the increased demand for these parameters.

  13. Evaluation of algorithms for automated phase correction of NMR spectra.

    PubMed

    de Brouwer, Hans

    2009-12-01

    In our attempt to fully automate the data acquisition and processing of NMR analysis of dissolved synthetic polymers, phase correction was found to be the most challenging aspect. Several approaches in literature were evaluated but none of these was found to be capable of phasing NMR spectra with sufficient robustness and high enough accuracy to fully eliminate intervention by a human operator. Step by step, aspects from the process of manual/visual phase correction were translated into mathematical concepts and evaluated. This included area minimization, peak height maximization, negative peak minimization and baseline correction. It was found that not one single approach would lead to acceptable results but that a combination of aspects was required, in line again with the process of manual phase correction. The combination of baseline correction, area minimization and negative area penalization was found to give the desired results. The robustness was found to be 100% which means that the correct zeroth order and first order phasing parameters are returned independent of the position of the starting point of the search in this parameter space. When applied to high signal-to-noise proton spectra, the accuracy was such that the returned phasing parameters were within a distance of 0.1-0.4 degrees in the two dimensional parameter space which resulted in an average error of 0.1% in calculated properties such as copolymer composition and end groups.
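
    A sketch of the combined criterion (baseline correction, area minimization, negative-area penalization) as a two-parameter optimization; the penalty weight, the edge-based baseline model, and the starting points are assumptions, not the evaluated implementation:

```python
# Search zeroth- and first-order phase angles that minimize the integrated
# area after a simple baseline correction while heavily penalizing negative peaks.
import numpy as np
from scipy.optimize import minimize

def phase(spectrum, phi0, phi1):
    n = len(spectrum)
    angles = phi0 + phi1 * np.arange(n) / n                # radians across the spectrum
    return (spectrum * np.exp(1j * angles)).real

def objective(params, spectrum, penalty=50.0):
    real = phase(spectrum, *params)
    # crude linear baseline fitted to the (assumed signal-free) spectrum edges
    edges = np.r_[real[:32], real[-32:]]
    x_edges = np.r_[np.arange(32), np.arange(len(real) - 32, len(real))]
    slope, intercept = np.polyfit(x_edges, edges, 1)
    corrected = real - (slope * np.arange(len(real)) + intercept)
    return corrected.sum() + penalty * np.abs(corrected[corrected < 0]).sum()

def auto_phase(spectrum):
    # several starting points to reduce dependence on the initial guess
    best = min((minimize(objective, start, args=(spectrum,), method="Nelder-Mead")
                for start in [(0.0, 0.0), (np.pi / 2, 0.0), (np.pi, 0.0)]),
               key=lambda r: r.fun)
    return best.x                                           # (phi0, phi1) in radians

# usage: phi0, phi1 = auto_phase(complex_spectrum); phased = phase(complex_spectrum, phi0, phi1)
```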

  14. Design principles and algorithms for automated air traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and an algorithm for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  16. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    NASA Astrophysics Data System (ADS)

    Karagiannis, Georgios; Antón Castro, Francesc; Mioc, Darka

    2016-06-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detected are invariant to image rotations, translations, scaling and also to changes in illumination, brightness and 3-dimensional viewpoint. Afterwards, each feature of the reference image is matched with one in the sensed image if, and only if, the distance between them multiplied by a threshold is shorter than the distances between the point and all the other points in the sensed image. Then, the matched features are used to compute the parameters of the homography that transforms the coordinate system of the sensed image to the coordinate system of the reference image. The Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
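
    A compact sketch of the matching chain (SIFT keypoints, ratio-test matching, RANSAC homography, Delaunay triangulations of the matched points), run here on a synthetic pattern and a warped copy of it instead of real multi-sensor scenes:

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

# synthetic "reference" image of random blocks and a warped "sensed" copy
rng = np.random.default_rng(0)
ref = np.zeros((400, 400), np.uint8)
for _ in range(60):
    x, y = rng.integers(0, 360, 2)
    ref[y:y + 30, x:x + 30] = rng.integers(60, 255)
H_true = np.array([[0.95, 0.05, 20], [-0.04, 0.98, 10], [0, 0, 1]], float)
sensed = cv2.warpPerspective(ref, H_true, (400, 400))

# SIFT detection and Lowe's ratio-test matching
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(ref, None)
k2, d2 = sift.detectAndCompute(sensed, None)
matches = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
           if m.distance < 0.7 * n.distance]

# homography from matched points with RANSAC
src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])
H_est, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("matches:", len(matches), "RANSAC inliers:", int(inliers.sum()))

# Delaunay triangulations of the matched point sets (used as a quality check)
tri_ref, tri_sensed = Delaunay(src), Delaunay(dst)
print("triangles:", len(tri_ref.simplices), len(tri_sensed.simplices))
```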

  17. Validation of an automated seizure detection algorithm for term neonates

    PubMed Central

    Mathieson, Sean R.; Stevenson, Nathan J.; Low, Evonne; Marnane, William P.; Rennie, Janet M.; Temko, Andrey; Lightbody, Gordon; Boylan, Geraldine B.

    2016-01-01

    Objective The objective of this study was to validate the performance of a seizure detection algorithm (SDA) developed by our group, on previously unseen, prolonged, unedited EEG recordings from 70 babies from 2 centres. Methods EEGs of 70 babies (35 seizure, 35 non-seizure) were annotated for seizures by experts as the gold standard. The SDA was tested on the EEGs at a range of sensitivity settings. Annotations from the expert and SDA were compared using event and epoch based metrics. The effect of seizure duration on SDA performance was also analysed. Results Between sensitivity settings of 0.5 and 0.3, the algorithm achieved seizure detection rates of 52.6–75.0%, with false detection (FD) rates of 0.04–0.36 FD/h for event based analysis, which was deemed to be acceptable in a clinical environment. Time based comparison of expert and SDA annotations using Cohen’s Kappa Index revealed a best performing SDA threshold of 0.4 (Kappa 0.630). The SDA showed improved detection performance with longer seizures. Conclusion The SDA achieved promising performance and warrants further testing in a live clinical evaluation. Significance The SDA has the potential to improve seizure detection and provide a robust tool for comparing treatment regimens. PMID:26055336

  18. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms.

    PubMed

    Premaladha, J; Ravichandran, K S

    2016-04-01

    Dermoscopy is a technique used to capture images of the skin, and these images are useful for analyzing different types of skin diseases. Malignant melanoma is a kind of skin cancer whose severity can even lead to death. Earlier detection of melanoma prevents death, and clinicians can treat patients to increase the chances of survival. Only a few machine learning algorithms have been developed to detect melanoma from its features. This paper proposes a Computer Aided Diagnosis (CAD) system that incorporates efficient algorithms to classify and predict melanoma. Enhancement of the images is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm called Normalized Otsu's Segmentation (NOS) is implemented to segment the affected skin lesion from the normal skin, which overcomes the problem of variable illumination. Fifteen features derived from the segmented images are fed into the proposed classification techniques, namely Deep Learning based Neural Networks and hybrid Adaboost-Support Vector Machine (SVM) algorithms. The proposed system is tested and validated with 992 images (malignant and benign lesions) and provides a high classification accuracy of 93%. The proposed CAD system can assist dermatologists in confirming the diagnosis and avoiding excisional biopsies.

  19. Image-derived input function derived from a supervised clustering algorithm: methodology and validation in a clinical protocol using [11C](R)-rolipram.

    PubMed

    Lyoo, Chul Hyoung; Zanotti-Fregonara, Paolo; Zoghbi, Sami S; Liow, Jeih-San; Xu, Rong; Pike, Victor W; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B

    2014-01-01

    Image-derived input function (IDIF) obtained by manually drawing carotid arteries (manual-IDIF) can be reliably used in [(11)C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe major depressive disorder (MDD) underwent T1-weighted brain magnetic resonance imaging (MRI) and a 90-minute [(11)C](R)-rolipram PET scan. For each subject, metabolite-corrected input function was measured from the radial artery. SVCA templates were obtained from 10 additional healthy subjects who underwent the same MRI and PET procedures. Cluster-IDIF was obtained as follows: 1) template mask images were created for carotid and surrounding tissue; 2) parametric image of weights for blood were created using SVCA; 3) mask images to the individual PET image were inversely normalized; 4) carotid and surrounding tissue time activity curves (TACs) were obtained from weighted and unweighted averages of each voxel activity in each mask, respectively; 5) partial volume effects and radiometabolites were corrected using individual arterial data at four points. Logan-distribution volume (VT/fP) values obtained by cluster-IDIF were similar to reference results obtained using arterial data, as well as those obtained using manual-IDIF; 39 of 51 subjects had a VT/fP error of <5%, and only one had error >10%. With automatic voxel selection, cluster-IDIF curves were less noisy than manual-IDIF and free of operator-related variability. Cluster-IDIF showed widespread decrease of about 20% [(11)C](R)-rolipram binding in the MDD group. Taken together, the results suggest that cluster-IDIF is a good alternative to full arterial input function for estimating Logan-VT/fP in [(11)C

  20. Image-Derived Input Function Derived from a Supervised Clustering Algorithm: Methodology and Validation in a Clinical Protocol Using [11C](R)-Rolipram

    PubMed Central

    Zoghbi, Sami S.; Liow, Jeih-San; Xu, Rong; Pike, Victor W.; Zarate, Carlos A.; Fujita, Masahiro; Innis, Robert B.

    2014-01-01

    Image-derived input function (IDIF) obtained by manually drawing carotid arteries (manual-IDIF) can be reliably used in [11C](R)-rolipram positron emission tomography (PET) scans. However, manual-IDIF is time consuming and subject to inter- and intra-operator variability. To overcome this limitation, we developed a fully automated technique for deriving IDIF with a supervised clustering algorithm (SVCA). To validate this technique, 25 healthy controls and 26 patients with moderate to severe major depressive disorder (MDD) underwent T1-weighted brain magnetic resonance imaging (MRI) and a 90-minute [11C](R)-rolipram PET scan. For each subject, metabolite-corrected input function was measured from the radial artery. SVCA templates were obtained from 10 additional healthy subjects who underwent the same MRI and PET procedures. Cluster-IDIF was obtained as follows: 1) template mask images were created for carotid and surrounding tissue; 2) parametric image of weights for blood were created using SVCA; 3) mask images to the individual PET image were inversely normalized; 4) carotid and surrounding tissue time activity curves (TACs) were obtained from weighted and unweighted averages of each voxel activity in each mask, respectively; 5) partial volume effects and radiometabolites were corrected using individual arterial data at four points. Logan-distribution volume (VT/fP) values obtained by cluster-IDIF were similar to reference results obtained using arterial data, as well as those obtained using manual-IDIF; 39 of 51 subjects had a VT/fP error of <5%, and only one had error >10%. With automatic voxel selection, cluster-IDIF curves were less noisy than manual-IDIF and free of operator-related variability. Cluster-IDIF showed widespread decrease of about 20% [11C](R)-rolipram binding in the MDD group. Taken together, the results suggest that cluster-IDIF is a good alternative to full arterial input function for estimating Logan-VT/fP in [11C](R)-rolipram PET clinical

  1. Evaluation of supervised machine-learning algorithms to distinguish between inflammatory bowel disease and alimentary lymphoma in cats.

    PubMed

    Awaysheh, Abdullah; Wilcke, Jeffrey; Elvinger, François; Rees, Loren; Fan, Weiguo; Zimmerman, Kurt L

    2016-11-01

    Inflammatory bowel disease (IBD) and alimentary lymphoma (ALA) are common gastrointestinal diseases in cats. The very similar clinical signs and histopathologic features of these diseases make the distinction between them diagnostically challenging. We tested the use of supervised machine-learning algorithms to differentiate between the 2 diseases using data generated from noninvasive diagnostic tests. Three prediction models were developed using 3 machine-learning algorithms: naive Bayes, decision trees, and artificial neural networks. The models were trained and tested on data from complete blood count (CBC) and serum chemistry (SC) results for the following 3 groups of client-owned cats: normal, IBD, or ALA. Naive Bayes and artificial neural networks achieved higher classification accuracy (sensitivities of 70.8% and 69.2%, respectively) than the decision tree algorithm (63%, p < 0.0001). The areas under the receiver operating characteristic curve for classifying cases into the 3 categories were 83% for naive Bayes, 79% for the decision tree, and 82% for artificial neural networks. Prediction models using machine learning provided a method for distinguishing between ALA-IBD, ALA-normal, and IBD-normal. The naive Bayes and artificial neural network classifiers used 10 and 4 of the CBC and SC variables, respectively, to outperform the C4.5 decision tree, which used 5 CBC and SC variables in classifying cats into the 3 classes. These models can provide another noninvasive diagnostic tool to assist clinicians with differentiating between IBD and ALA, and between diseased and nondiseased cats. © 2016 The Author(s).
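    As a rough illustration of the comparison described above (not the authors' code), the sketch below trains the three classifier families named in the abstract on tabular CBC/serum-chemistry features with scikit-learn; `X` and `y` are placeholder arrays, and the hyperparameters are assumptions.

    ```python
    # Compare naive Bayes, a decision tree, and a small neural network by
    # cross-validated accuracy on tabular features (placeholder data).
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    models = {
        "naive Bayes": GaussianNB(),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "neural network": make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    }

    def compare(X, y):
        """X: (n_cats, n_features) CBC/SC values; y: labels normal/IBD/ALA."""
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
            print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
    ```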

  2. Automated Target Planning for FUSE Using the SOVA Algorithm

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, R. James; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2007-01-01

    The SOVA algorithm was originally developed under the Resilient Systems and Operations Project of the Engineering for Complex Systems Program from NASA's Aerospace Technology Enterprise as a conceptual framework to support real-time autonomous system mission and contingency management. The algorithm and its software implementation were formulated for generic application to autonomous flight vehicle systems, and its efficacy was demonstrated by simulation within the problem domain of Unmanned Aerial Vehicle autonomous flight management. The approach itself is based upon the precept that autonomous decision making for a very complex system can be made tractable by distillation of the system state to a manageable set of strategic objectives (e.g., maintain power margin, maintain mission timeline, etc.), which, if attended to, will result in a favorable outcome. From any given starting point, the attainability of the end-states resulting from a set of candidate decisions is assessed by propagating a system model forward in time while qualitatively mapping simulated states into margins on strategic objectives using fuzzy inference systems. The expected return value of each candidate decision is evaluated as the product of the assigned value of the end-state with the assessed attainability of the end-state. The candidate decision yielding the highest expected return value is selected for implementation; thus, the approach provides a software framework for intelligent autonomous risk management. The name adopted for the technique incorporates its essential elements: Strategic Objective Valuation and Attainability (SOVA). Maximum value of the approach is realized for systems where human intervention is unavailable in the timeframe within which critical control decisions must be made. The Far Ultraviolet Spectroscopic Explorer (FUSE) satellite, launched in 1999, has been collecting science data for eight years [1]. At its beginning of life, FUSE had six gyros in two

  3. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians.

    PubMed

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J; Adatia, Ian

    2016-09-09

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74%, compared to 56% for physicians (p = 0.005). The false positive rate was 34% for the algorithm versus 50% for clinicians (p = 0.04). The false negative rate was 23% for the algorithm and 68% for physicians (p = 0.0002). We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral.
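    A minimal sketch of the MFCC-plus-Gaussian-mixture pipeline described above, assuming librosa and scikit-learn are available; the file lists, number of MFCCs and mixture components are illustrative, not the authors' settings.

    ```python
    # Train one GMM per class on MFCC frames and classify a recording by the
    # class model with the higher average frame log-likelihood.
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=None)                        # heart-sound recording
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

    def train_gmm(paths, n_components=8):
        feats = np.vstack([mfcc_features(p) for p in paths])
        return GaussianMixture(n_components=n_components,
                               covariance_type="diag", random_state=0).fit(feats)

    def classify(path, gmm_ph, gmm_normal):
        feats = mfcc_features(path)
        # score() returns the mean per-frame log-likelihood under each model
        return "PH" if gmm_ph.score(feats) > gmm_normal.score(feats) else "normal"
    ```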

  4. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    NASA Astrophysics Data System (ADS)

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y.; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-09-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74%, compared to 56% for physicians (p = 0.005). The false positive rate was 34% for the algorithm versus 50% for clinicians (p = 0.04). The false negative rate was 23% for the algorithm and 68% for physicians (p = 0.0002). We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral.

  5. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    PubMed Central

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-01-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74%, compared to 56% for physicians (p = 0.005). The false positive rate was 34% for the algorithm versus 50% for clinicians (p = 0.04). The false negative rate was 23% for the algorithm and 68% for physicians (p = 0.0002). We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral. PMID:27609672

  6. Astronomical algorithms for automated analysis of tissue protein expression in breast cancer

    PubMed Central

    Ali, H R; Irwin, M; Morris, L; Dawson, S-J; Blows, F M; Provenzano, E; Mahler-Araujo, B; Pharoah, P D; Walton, N A; Brenton, J D; Caldas, C

    2013-01-01

    Background: High-throughput evaluation of tissue biomarkers in oncology has been greatly accelerated by the widespread use of tissue microarrays (TMAs) and immunohistochemistry (IHC). Although TMAs have the potential to facilitate protein expression profiling on a scale to rival experiments of tumour transcriptomes, the bottleneck and imprecision of manually scoring TMAs has impeded progress. Methods: We report image analysis algorithms adapted from astronomy for the precise automated analysis of IHC in all subcellular compartments. The power of this technique is demonstrated using over 2000 breast tumours and comparing quantitative automated scores against manual assessment by pathologists. Results: All continuous automated scores showed good correlation with their corresponding ordinal manual scores. For oestrogen receptor (ER), the correlation was 0.82, P<0.0001, for BCL2 0.72, P<0.0001 and for HER2 0.62, P<0.0001. Automated scores showed excellent concordance with manual scores for the unsupervised assignment of cases to 'positive' or 'negative' categories, with agreement rates of up to 96%. Conclusion: The adaptation of astronomical algorithms, coupled with their application to large annotated study cohorts, constitutes a powerful tool for the realisation of the enormous potential of digital pathology. PMID:23329232

  7. Application of supervised machine learning algorithms for the classification of regulatory RNA riboswitches.

    PubMed

    Singh, Swadha; Singh, Raghvendra

    2017-03-01

    Riboswitches, small structured RNA elements, were discovered about a decade ago. Since then, there has been intense interest in identifying riboswitches, understanding their mechanisms of action, and using them in genetic engineering. The accumulation of genome and transcriptome sequence data and comparative genomics provide unprecedented opportunities to identify riboswitches in the genome. In the present study, we have evaluated the following six machine learning algorithms for their efficiency in classifying riboswitches: J48, BayesNet, Naïve Bayes, Multilayer Perceptron, sequential minimal optimization, and hidden Markov model (HMM). To determine the most effective classifier, the algorithms were compared on the statistical measures of specificity, sensitivity, accuracy, F-measure and receiver operating characteristic (ROC) plot analysis. The Multilayer Perceptron classifier achieved the best performance, with the highest specificity, sensitivity, F-score and accuracy, and with the largest area under the ROC curve, whereas the HMM was the poorest performer. At present, the available tools for the prediction and classification of riboswitches are based on the covariance model, support vector machine and HMM. The present study identifies the Multilayer Perceptron as a better classifier for genome-wide riboswitch searches.

  8. A New Avenue for Classification and Prediction of Olive Cultivars Using Supervised and Unsupervised Algorithms

    PubMed Central

    Beiki, Amir H.; Saboor, Saba; Ebrahimi, Mansour

    2012-01-01

    Various methods have been used to identify cultivars of olive trees; herein we used different bioinformatics algorithms to propose new tools to classify 10 olive cultivars based on RAPD and ISSR genetic marker datasets generated from PCR reactions. Five RAPD markers (OPA0a21, OPD16a, OP01a1, OPD16a1 and OPA0a8) and five ISSR markers (UBC841a4, UBC868a7, UBC841a14, U12BC807a and UBC810a13) were selected as the most important markers by all attribute weighting models. K-Medoids unsupervised clustering run on the SVM dataset was fully able to assign each olive cultivar to the right class. All 176 trees induced by the decision tree models were meaningful, and the UBC841a4 attribute clearly distinguished between foreign and domestic olive cultivars with 100% accuracy. Predictive machine learning algorithms (SVM and Naïve Bayes) were also able to predict the right class of olive cultivars with 100% accuracy. For the first time, our results show that data mining techniques can be used effectively to distinguish between plant cultivars, and the machine learning-based systems proposed in this study can predict new olive cultivars with the best possible accuracy. PMID:22957050

  9. Clinical evaluation of the vector algorithm for neonatal hearing screening using automated auditory brainstem response.

    PubMed

    Keohane, Bernie M; Mason, Steve M; Baguley, David M

    2004-02-01

    A novel auditory brainstem response (ABR) detection and scoring algorithm, entitled the Vector algorithm is described. An independent clinical evaluation of the algorithm using 464 tests (120 non-stimulated and 344 stimulated tests) on 60 infants, with a mean age of approximately 6.5 weeks, estimated test sensitivity greater than 0.99 and test specificity at 0.87 for one test. Specificity was estimated to be greater than 0.95 for a two stage screen. Test times were of the order of 1.5 minutes per ear for detection of an ABR and 4.5 minutes per ear in the absence of a clear response. The Vector algorithm is commercially available for both automated screening and threshold estimation in hearing screening devices.

  10. The GOES-R ABI Wild Fire Automated Biomass Burning Algorithm

    NASA Astrophysics Data System (ADS)

    Hoffman, J.; Schmidt, C. C.; Prins, E. M.; Brunner, J. C.

    2011-12-01

    The global Wild Fire Automated Biomass Burning Algorithm (WF_ABBA) at the Cooperative Institute for Meteorological Satellite Studies (CIMSS) provides fire detection and characterization using data from a global constellation of geostationary satellites, currently including GOES, MTSAT, and Meteosat. CIMSS continues to enhance the legacy of the WF_ABBA by adapting the algorithm to utilize the advanced spatial, spectral, and temporal capabilities of GOES-R ABI. A wide range of simulated ABI data cases have been generated and processed with the GOES-R fire detection and characterization algorithm. Simulated cases included MODIS derived projections as well as model derived simulations that span a variety of satellite zenith angles and ecosystems. The GOES-R ABI fire product development focuses on active fire detection and sub-pixel characterization, including fire radiative power (FRP) and instantaneous fire size and temperature. With the algorithm delivered to the system contractor, the focus has moved to developing innovative new validation techniques.

  11. Automated anatomical labeling algorithm of bronchial branches based on multi-slice CT images

    NASA Astrophysics Data System (ADS)

    Kawai, J.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Multi-slice CT provides high-contrast, thin-slice images, but the growing number of images to be read increases the diagnostic workload of physicians. The development of algorithms that analyze the internal structures of the lung is therefore expected. Detailed analysis of these structures supports the early detection of nodules; in particular, analysis of the bronchi provides useful information for detecting airway disease and for classifying the pulmonary veins and arteries. In this paper, we describe a method for automated anatomical labeling of bronchial branches based on multi-slice CT images.

  12. Automated anatomical labeling algorithm of bronchial branches based on multi-slice CT images

    NASA Astrophysics Data System (ADS)

    Kawai, J.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Kusumoto, M.; Kakinuma, R.; Moriyama, N.

    2007-03-01

    Multi-slice CT provides high-contrast, thin-slice images, but the growing number of images to be read increases the diagnostic workload of physicians. The development of algorithms that analyze the internal structures of the lung is therefore expected. Detailed analysis of these structures supports the early detection of nodules; in particular, analysis of the bronchi provides useful information for detecting airway disease and for classifying the pulmonary veins and arteries. In this paper, we describe a method for automated anatomical labeling of bronchial branches based on multi-slice CT images.

  13. Improved automated monitoring and new analysis algorithm for circadian phototaxis rhythms in Chlamydomonas

    PubMed Central

    Gaskill, Christa; Forbes-Stovall, Jennifer; Kessler, Bruce; Young, Mike; Rinehart, Claire A.; Jacobshagen, Sigrid

    2010-01-01

    Automated monitoring of circadian rhythms is an efficient way of gaining insight into oscillation parameters like period and phase for the underlying pacemaker of the circadian clock. Measurement of the circadian rhythm of phototaxis (swimming towards light) exhibited by the green alga Chlamydomonas reinhardtii has been automated by directing a narrow and dim light beam through a culture at regular intervals and determining the decrease in light transmittance due to the accumulation of cells in the beam. In this study, the monitoring process was optimized by constructing a new computer-controlled measuring machine that limits the test beam to wavelengths reported to be specific for phototaxis and by choosing an algal strain, which does not need background illumination between test light cycles for proper expression of the rhythm. As a result, period and phase of the rhythm are now unaffected by the time a culture is placed into the machine. Analysis of the rhythm data was also optimized through a new algorithm, whose robustness was demonstrated using virtual rhythms with various noises. The algorithm differs in particular from other reported algorithms by maximizing the fit of the data to a sinusoidal curve that dampens exponentially. The algorithm was also used to confirm the reproducibility of rhythm monitoring by the machine. Machine and algorithm can now be used for a multitude of circadian clock studies that require unambiguous period and phase determinations such as light pulse experiments to identify the photoreceptor(s) that reset the circadian clock in C. reinhardtii. PMID:20116270
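    A small sketch of the central fitting step described above (maximizing the fit of the data to an exponentially damping sinusoid), using scipy.optimize.curve_fit; the parameterization, units and initial guesses are assumptions, not the authors' algorithm.

    ```python
    # Fit an exponentially damping sinusoid to a phototaxis time series and
    # return the estimated period and phase (placeholder data assumed).
    import numpy as np
    from scipy.optimize import curve_fit

    def damped_sine(t, amp, tau, period, phase, offset):
        return amp * np.exp(-t / tau) * np.cos(2 * np.pi * t / period + phase) + offset

    def fit_rhythm(t, signal):
        """t: time in hours; signal: transmittance decrease in the test beam."""
        p0 = [signal.std(), 48.0, 24.0, 0.0, signal.mean()]   # rough initial guesses
        params, _ = curve_fit(damped_sine, t, signal, p0=p0, maxfev=10000)
        amp, tau, period, phase, offset = params
        return period, phase
    ```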

  14. Reliability of old and new ventricular fibrillation detection algorithms for automated external defibrillators

    PubMed Central

    Amann, Anton; Tratnig, Robert; Unterkofler, Karl

    2005-01-01

    Background A pivotal component in automated external defibrillators (AEDs) is the detection of ventricular fibrillation by means of appropriate detection algorithms. In the scientific literature there exists a wide variety of methods and ideas for handling this task. These algorithms should have a high detection quality, be easily implementable, and work in real time in an AED. Testing of these algorithms should be done by using a large amount of annotated data under equal conditions. Methods For our investigation we simulated a continuous analysis by selecting the data in steps of one second without any preselection. We used the complete MIT-BIH arrhythmia database, the CU database, and files 7001–8210 of the AHA database. All algorithms were tested under equal conditions. Results For 5 well-known standard and 5 new ventricular fibrillation detection algorithms we calculated the sensitivity, specificity, and the area under their receiver operating characteristic. In addition, two QRS detection algorithms were included. These results are based on approximately 330 000 decisions (per algorithm). Conclusion Our values for sensitivity and specificity differ from earlier investigations since we used no preselection. The best algorithm is a new one, presented here for the first time. PMID:16253134
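    The evaluation scheme above (continuous analysis in one-second steps without preselection) can be sketched as follows; `detector`, the window length and the annotation format are placeholders rather than any of the published algorithms.

    ```python
    # Apply a VF detector to consecutive one-second steps of an annotated ECG
    # and accumulate sensitivity and specificity against the annotations.
    def evaluate(ecg, fs, annotations, detector, window_s=8, step_s=1):
        """fs: integer sampling rate (Hz); annotations[i] is True if second i
        lies inside an annotated VF episode; detector(segment, fs) -> bool."""
        fs = int(fs)
        tp = fp = tn = fn = 0
        n_steps = int(len(ecg) / fs) - window_s
        for i in range(0, n_steps, step_s):
            segment = ecg[i * fs:(i + window_s) * fs]
            decision = detector(segment, fs)        # True -> VF called
            truth = annotations[i + window_s]       # label at decision time
            if decision and truth: tp += 1
            elif decision and not truth: fp += 1
            elif not decision and truth: fn += 1
            else: tn += 1
        sensitivity = tp / (tp + fn) if tp + fn else float("nan")
        specificity = tn / (tn + fp) if tn + fp else float("nan")
        return sensitivity, specificity
    ```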

  15. Taboo search algorithm for item assignment in synchronized zone automated order picking system

    NASA Astrophysics Data System (ADS)

    Wu, Yingying; Wu, Yaohua

    2014-07-01

    The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; the item assignment method therefore affects picking efficiency. Previous studies focus only on balancing the number of item types between zones, not on the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective of the problem can be simplified from minimizing the idle time to minimizing the idle factor. Based on this, a model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxed form of the parallel machine scheduling problem, which has been proven to be NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones using a 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The results verify the model and show that the algorithm reliably reduces the idle time, by 45.63% on average. This research proposes an approach to measure the idle time in a synchronized zone automated order picking system; the approach can improve picking efficiency significantly and can serve as a theoretical basis for optimizing synchronized automated order picking systems.
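    A schematic tabu-search sketch in the spirit of the approach above: it repeatedly applies the best non-tabu 2-exchange (swapping the zones of two items) to reduce the largest per-zone idle factor. The `idle_factor` function, the neighborhood sampling and the tenure are placeholders; the paper's exact idle-factor definition is not reproduced.

    ```python
    # Tabu search over item-to-zone assignments with a 2-exchange neighborhood,
    # minimizing the worst per-zone idle factor (all parameters illustrative).
    import random

    def tabu_search(items, n_zones, idle_factor, iters=1000, tenure=20):
        """items: list of item ids; idle_factor(assignment, zone) -> float."""
        assign = {it: random.randrange(n_zones) for it in items}     # initial solution

        def objective(a):
            return max(idle_factor(a, z) for z in range(n_zones))    # worst zone

        best, best_val = dict(assign), objective(assign)
        tabu = {}                                                    # move -> expiry iteration
        for it_no in range(iters):
            candidates = []
            for _ in range(200):                                     # sample 2-exchanges
                i, j = random.sample(items, 2)
                if assign[i] == assign[j]:
                    continue
                move = frozenset((i, j))
                neighbor = dict(assign)
                neighbor[i], neighbor[j] = assign[j], assign[i]
                val = objective(neighbor)
                # accept non-tabu moves, or tabu moves that beat the incumbent
                if tabu.get(move, -1) < it_no or val < best_val:
                    candidates.append((val, neighbor, move))
            if not candidates:
                continue
            val, assign, move = min(candidates, key=lambda c: c[0])
            tabu[move] = it_no + tenure
            if val < best_val:
                best, best_val = dict(assign), val
        return best, best_val
    ```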

  16. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: a general strategy.

    PubMed

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Anastasio, Mark A; Low, Daniel A; Li, H Harold; Altman, Michael; Gay, Hiram; Thorstad, Wade L; Mutic, Sasa; Li, Hua

    2015-02-01

    One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Considering the radiation therapy structures' geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately
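    As a deliberately simplified stand-in for the idea of flagging contours whose geometric attributes fall outside the range learned from verified training contours, the sketch below performs a per-attribute z-score check; the actual GAD models are scalable, deformable and jointly model inter-structural relationships, which this sketch does not attempt.

    ```python
    # Learn per-structure attribute statistics from verified contours and flag
    # a test contour whose attributes deviate strongly (all thresholds assumed).
    import numpy as np

    def fit_attribute_model(train_attrs):
        """train_attrs: (n_patients, n_attributes) array, e.g. centroid coordinates
        and volume for one structure across the training cases."""
        return train_attrs.mean(axis=0), train_attrs.std(axis=0) + 1e-9

    def flag_contour(attrs, model, z_max=3.0):
        mean, std = model
        z = np.abs((attrs - mean) / std)
        return bool(np.any(z > z_max))      # True -> potential contouring error
    ```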

  17. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    SciTech Connect

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  18. Classification models for clear cell renal carcinoma stage progression, based on tumor RNAseq expression trained supervised machine learning algorithms

    PubMed Central

    2014-01-01

    Background Clear-cell Renal Cell Carcinoma (ccRCC) is the most prevalent, chemotherapy-resistant and lethal adult kidney cancer. There is a need for novel diagnostic and prognostic biomarkers for ccRCC, due to its heterogeneous molecular profiles and asymptomatic early stage. This study aims to develop classification models to distinguish early stage and late stage ccRCC based on gene expression profiles. We employed the supervised learning algorithms J48, Random Forest, SMO and Naïve Bayes, with model learning enriched by fast correlation-based feature selection, to develop classification models trained on sequencing-based gene expression data from RNAseq experiments obtained from The Cancer Genome Atlas. Results The different models developed in the study were evaluated on the basis of 10-fold cross-validation and independent dataset testing. The Random Forest-based prediction model performed best among the models developed in the study, with a sensitivity of 89%, an accuracy of 77% and an area under the receiver operating characteristic curve of 0.8. Conclusions We anticipate that the prioritized subset of 62 genes and the prediction models developed in this study will aid experimental oncologists in expediting understanding of the molecular mechanisms of stage progression and the discovery of prognostic factors for ccRCC tumors. PMID:25374611
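    A sketch (with placeholder data) of the Random Forest evaluation stage described above, using 10-fold cross-validation to report accuracy, sensitivity and ROC AUC; the fast correlation-based feature selection step and the authors' settings are omitted.

    ```python
    # Cross-validated Random Forest evaluation on gene-expression features.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_validate

    def evaluate_rf(X, y):
        """X: (samples, genes) expression matrix; y: 0 = early stage, 1 = late stage."""
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        scores = cross_validate(rf, X, y, cv=10,
                                scoring={"acc": "accuracy",
                                         "sens": "recall",     # sensitivity for class 1
                                         "auc": "roc_auc"})
        return {k: scores[f"test_{k}"].mean() for k in ("acc", "sens", "auc")}
    ```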

  19. Classification models for clear cell renal carcinoma stage progression, based on tumor RNAseq expression trained supervised machine learning algorithms.

    PubMed

    Jagga, Zeenia; Gupta, Dinesh

    2014-01-01

    Clear-cell Renal Cell Carcinoma (ccRCC) is the most prevalent, chemotherapy-resistant and lethal adult kidney cancer. There is a need for novel diagnostic and prognostic biomarkers for ccRCC, due to its heterogeneous molecular profiles and asymptomatic early stage. This study aims to develop classification models to distinguish early stage and late stage ccRCC based on gene expression profiles. We employed the supervised learning algorithms J48, Random Forest, SMO and Naïve Bayes, with model learning enriched by fast correlation-based feature selection, to develop classification models trained on sequencing-based gene expression data from RNAseq experiments obtained from The Cancer Genome Atlas. The different models developed in the study were evaluated on the basis of 10-fold cross-validation and independent dataset testing. The Random Forest-based prediction model performed best among the models developed in the study, with a sensitivity of 89%, an accuracy of 77% and an area under the receiver operating characteristic curve of 0.8. We anticipate that the prioritized subset of 62 genes and the prediction models developed in this study will aid experimental oncologists in expediting understanding of the molecular mechanisms of stage progression and the discovery of prognostic factors for ccRCC tumors.

  20. Supervised machine learning algorithms to diagnose stress for vehicle drivers based on physiological sensor signals.

    PubMed

    Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin

    2015-01-01

    Machine learning algorithms play an important role in computer science research. Recent advances in sensor data collection in the clinical sciences lead to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work to compare the performance of different machine learning methods to help select an appropriate method for a specific characteristic of data sets. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks and support vector machines, to diagnose the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. In contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.

  1. Design and Demonstration of Automated Data Analysis Algorithms for Ultrasonic Inspection of Complex Composite Panels with Bonds

    DTIC Science & Technology

    2016-02-01

    AFRL-RX-WP-JA-2017-0194, Design and Demonstration of Automated Data Analysis Algorithms for Ultrasonic Inspection of Complex Composite Panels with Bonds (Postprint). To reduce the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms

  2. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  3. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  4. An automated knickzone selection algorithm (KZ-Picker) to analyze transient landscapes: Calibration and validation

    NASA Astrophysics Data System (ADS)

    Neely, A. B.; Bookhagen, B.; Burbank, D. W.

    2017-06-01

    Streams commonly respond to base-level fall by localizing erosion within steepened, convex knickzone reaches. Localized incision causes knickzone reaches to migrate upstream. Such migrating knickzones dictate the pace of landscape response to changes in tectonics or erosional efficiency and can help quantify the timing and source of base-level fall. Identification of knickzones typically requires individual selection of steepened reaches: a process that is tedious and subjective and has no efficient means to measure knickzone size. We construct an algorithm to automate this procedure by selecting the bounds of knickzone reaches in a χ-space (drainage-area normalized) framework. An automated feature calibrates algorithm parameters to a subset of knickzones handpicked by the user. The algorithm uses these parameters as consistent criteria to identify knickzones objectively, and then the algorithm measures the height, length, and slope of each knickzone reach. We test the algorithm on 1, 10, and 30 m resolution digital elevation models (DEMs) of six catchments (trunk-stream lengths: 2.1-5.4 km) on Santa Cruz Island, southern California. On the 1 m DEM, algorithm-selected knickzones confirm 93% of handpicked knickzone positions (n = 178) to a spatial accuracy of ≤100 m, 88% to an accuracy within 50 m, and 46% to an accuracy within 10 m. Using 10 and 30 m DEMs, accuracy is similar: 88-86% to ≤100 m and 82% to ≤50 m (n = 38 and 36, respectively). The algorithm enables efficient regional comparison of the size and location of knickzones with geologic structures, mapped landforms, and hillslope morphology, thereby facilitating approaches to characterize the dynamics of transient landscapes.
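    For background, the χ (drainage-area normalized distance) transform that underlies the knickzone picking above can be computed numerically as below; the concavity θ and reference area A0 are illustrative values and the arrays are placeholders ordered from the outlet upstream.

    ```python
    # Integrate (A0 / A)^theta along the channel to obtain chi; knickzones then
    # appear as steepened, convex reaches in elevation-versus-chi space.
    import numpy as np

    def chi_transform(dist_upstream, drainage_area, theta=0.45, A0=1.0):
        """dist_upstream: along-channel distance from the outlet (m);
        drainage_area: contributing area at each node (m^2)."""
        integrand = (A0 / drainage_area) ** theta
        dx = np.diff(dist_upstream, prepend=dist_upstream[0])
        return np.cumsum(integrand * dx)      # chi increases upstream from 0 at the outlet
    ```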

  5. Rapid classification of landsat TM imagery for phase 1 stratification using the automated NDVI threshold supervised classification (ANTSC) methodology

    Treesearch

    William H. Cooke; Dennis M. Jacobs

    2002-01-01

    FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....
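    A minimal sketch of the NDVI screening idea described above, assuming `nir` and `red` are co-registered Landsat TM band arrays scaled to reflectance; the threshold value is illustrative, not the ANTSC criterion.

    ```python
    # Compute NDVI and flag pixels below an illustrative threshold as candidates
    # for questionable land use/land cover labels.
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        return (nir - red) / (nir + red + eps)

    def flag_problem_pixels(nir, red, threshold=0.3):
        """Return a boolean mask of pixels whose NDVI falls below the threshold."""
        return ndvi(nir, red) < threshold
    ```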

  6. Rapid Classification of Landsat TM Imagery for Phase 1 Stratification Using the Automated NDVI Threshold Supervised Classification (ANTSC) Methodology

    Treesearch

    William H. Cooke; Dennis M. Jacobs

    2005-01-01

    FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....

  7. Dynamics of G-band bright points derived using two fully automated algorithms

    NASA Astrophysics Data System (ADS)

    Bodnárová, M.; Utz, D.; Rybák, J.; Hanslmeier, A.

    Small-scale magnetic field concentrations (~1 kG) in the solar photosphere can be identified in the G-band of the solar spectrum as bright points. Study of the dynamics of G-band bright points (GBPs) can help us solve several questions related also to the coronal heating problem. Here a set of 142 G-band speckled images obtained using the Dutch Open Telescope (DOT) on October 19, 2005 is used to compare identification of GBPs by two different fully automated identification algorithms: an algorithm developed by Utz et al. (2009a, 2009b) and an algorithm developed according to the papers of Berger et al. (1995, 1998). Temporal and spatial tracking of the GBPs identified by both algorithms was performed, resulting in distributions of lifetimes, sizes and velocities of the GBPs. The results show that both algorithms give very similar values for the lifetime and velocity of the GBPs, but they differ significantly in the estimation of GBP sizes. This difference arises because we applied no additional exclusion criteria to the GBPs identified by the algorithm based on the work of Berger et al. (1995, 1998). We therefore conclude that in future studies of GBP dynamics we will prefer to use Utz's algorithm to perform identification and tracking of GBPs in G-band images.

  8. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
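    A schematic version of the two screening rules named above (time-of-flight indications and backwall amplitude dropout) applied to C-scan-style arrays; the thresholds and the adaptive call criteria of the actual ADA software are assumptions and are not reproduced.

    ```python
    # Flag pixels whose time-of-flight departs from the nominal backwall echo
    # or whose backwall amplitude drops below an illustrative level.
    import numpy as np

    def indication_map(tof, backwall_amp, nominal_tof, tof_tol=0.5, dropout_db=-6.0):
        """tof: per-pixel time-of-flight (us); backwall_amp: backwall echo
        amplitude (dB relative to a reference); nominal_tof: expected value."""
        early_echo = np.abs(tof - nominal_tof) > tof_tol     # possible internal reflector
        amp_dropout = backwall_amp < dropout_db              # loss of backwall signal
        return early_echo | amp_dropout                      # pixels flagged for review
    ```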

  9. CorPITA: An Automated Algorithm for the Identification and Analysis of Coronal "EIT Waves"

    NASA Astrophysics Data System (ADS)

    Long, D. M.; Bloomfield, D. S.; Gallagher, P. T.; Pérez-Suárez, D.

    2014-09-01

    The continuous stream of data available from the Atmospheric Imaging Assembly (AIA) telescopes onboard the Solar Dynamics Observatory (SDO) spacecraft has allowed a deeper understanding of the Sun. However, the sheer volume of data has necessitated the development of automated techniques to identify and analyse various phenomena. In this article, we describe the Coronal Pulse Identification and Tracking Algorithm (CorPITA) for the identification and analysis of coronal "EIT waves". CorPITA uses an intensity-profile technique to identify the propagating pulse, tracking it throughout its evolution before returning estimates of its kinematics. The algorithm is applied here to a data set from February 2011, allowing its capabilities to be examined and critiqued. This algorithm forms part of the SDO Feature Finding Team initiative and will be implemented as part of the Heliophysics Event Knowledgebase (HEK). This is the first fully automated algorithm to identify and track the propagating "EIT wave" rather than any associated phenomenon and will allow a deeper understanding of this controversial phenomenon.

  10. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
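    Two of the single-image checks discussed above (focus and exposure) can be sketched with OpenCV as below; the thresholds are illustrative and the paper's overall fault-classification logic and sequencing are not reproduced.

    ```python
    # Simple no-reference checks: Laplacian variance as a sharpness proxy and
    # mean gray level as an exposure proxy (all thresholds assumed).
    import cv2
    import numpy as np

    def quality_flags(path, blur_thresh=100.0, dark_thresh=40, bright_thresh=215):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> likely defocus
        mean_level = float(np.mean(gray))
        return {
            "possible_focus_drift": sharpness < blur_thresh,
            "possible_underexposure": mean_level < dark_thresh,
            "possible_overexposure": mean_level > bright_thresh,
        }
    ```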

  11. A Novel Validation Algorithm Allows for Automated Cell Tracking and the Extraction of Biologically Meaningful Parameters

    PubMed Central

    Madany Mamlouk, Amir; Schicktanz, Simone; Kruse, Charli

    2011-01-01

    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they missed validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters with high

  12. Automated algorithm for actinic cheilitis diagnosis by wide-field fluorescence imaging.

    PubMed

    Cosci, Alessandro; Takahama, Ademar; Correr, Wagner Rafael; Azevedo, Rebeca Souza; Fontes, Karla Bianca Fernandes da Costa; Kurachi, Cristina

    2016-10-01

    Actinic cheilitis (AC) is a disease caused by prolonged and cumulative sun exposure that mostly affects the lower lip, which can progress to a lip squamous cell carcinoma. Routine diagnosis relies on clinician experience and training. We investigated the diagnostic efficacy of wide-field fluorescence imaging coupled to an automated algorithm for AC recognition. Fluorescence images were acquired from 57 patients with confirmed AC and 46 normal volunteers. Three different algorithms were employed: two based on the emission characteristics of local heterogeneity, entropy and intensity range, and one based on the number of objects after K-means clustering. A classification model was obtained using a fivefold cross correlation algorithm. Sensitivity and specificity rates were 86% and 89.1%, respectively.

  13. An automated bladder volume measurement algorithm by pixel classification using random forests.

    PubMed

    Annangi, Pavan; Frigstad, Sigmund; Subin, S B; Torp, Anders; Ramasubramaniam, Sundararajan; Varna, Srinivas

    2016-08-01

    Residual bladder volume measurement is a very important marker for patients with urinary retention problems. Hand-held ultrasound devices will be extremely useful for monitoring patients with these conditions at the bedside by nurses or in an outpatient setting by general physicians. However, to increase the usage of these devices by non-traditional users, automated tools that can aid them in the scanning and measurement process will be of great help. In our paper, we have developed a robust segmentation algorithm to automatically measure bladder volume by segmenting bladder contours from sagittal and transverse ultrasound views using a combination of machine learning and active contour algorithms. The algorithm is tested on 50 unseen images and on 23 transverse and longitudinal image pairs, and the performance is reported.
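    Not the paper's method, but for context: once the orthogonal bladder diameters have been segmented from the sagittal and transverse views, a common clinical approximation estimates the volume with an ellipsoid-style formula; the correction factor below is an assumption that varies across devices and studies.

    ```python
    # Ellipsoid-style bladder volume estimate from three orthogonal diameters.
    def bladder_volume_ml(width_cm, depth_cm, height_cm, k=0.52):
        """V = k * W * D * H, in cm^3 (~mL); k is an assumed correction factor."""
        return k * width_cm * depth_cm * height_cm
    ```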

  14. Automated segmentation algorithm for detection of changes in vaginal epithelial morphology using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud

    2012-11-01

    We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.

  15. Simplifying multiobjective optimization: An automated design methodology for the nondominated sorted genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Reed, Patrick; Minsker, Barbara S.; Goldberg, David E.

    2003-07-01

    Many water resources problems require careful balancing of fiscal, technical, and social objectives. Informed negotiation and balancing of objectives can be greatly aided through the use of evolutionary multiobjective optimization (EMO) algorithms, which can evolve entire tradeoff (or Pareto) surfaces within a single run. The primary difficulty in using these methods lies in the large number of parameters that must be specified to ensure that these algorithms effectively quantify design tradeoffs. This technical note addresses this difficulty by introducing a multipopulation design methodology that automates parameter specification for the nondominated sorted genetic algorithm-II (NSGA-II). The NSGA-II design methodology is successfully demonstrated on a multiobjective long-term groundwater monitoring application. Using this methodology, multiobjective optimization problems can now be solved automatically with only a few simple user inputs.
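    For readers unfamiliar with the machinery the note builds on, the sketch below shows the Pareto-dominance filter at the core of non-dominated sorting in NSGA-II-style search (minimization assumed); it is not the automated parameterization methodology itself.

    ```python
    # Return a boolean mask of non-dominated solutions for a set of objective vectors.
    import numpy as np

    def pareto_front(objectives):
        """objectives: (n_solutions, n_objectives) array to be minimized."""
        n = objectives.shape[0]
        nondominated = np.ones(n, dtype=bool)
        for i in range(n):
            if not nondominated[i]:
                continue
            # j dominates i if j is <= i in every objective and < in at least one
            dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                          np.any(objectives < objectives[i], axis=1)
            if np.any(dominates_i):
                nondominated[i] = False
        return nondominated
    ```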

  16. Automated classification of oral premalignant lesions using image cytometry and Random Forests-based algorithms.

    PubMed

    Baik, Jonathan; Ye, Qian; Zhang, Lewei; Poh, Catherine; Rosin, Miriam; MacAulay, Calum; Guillaud, Martial

    2014-06-01

    A major challenge for the early diagnosis of oral cancer is the ability to differentiate oral premalignant lesions (OPL) at high risk of progressing into invasive squamous cell carcinoma (SCC) from those at low risk. Our group has previously used high-resolution image analysis algorithms to quantify the nuclear phenotypic changes occurring in OPLs. This approach, however, requires a manual selection of nuclei images. Here, we investigated a new, semi-automated algorithm to identify OPLs at high risk of progressing into invasive SCC from those at low risk using Random Forests, a tree-based ensemble classifier. We trained a sequence of classifiers using morphometric data calculated on nuclei from 29 normal, 5 carcinoma in situ (CIS) and 28 SCC specimens. After automated discrimination of nuclei from other objects (i.e., debris, clusters, etc.), a nuclei classifier was trained to discriminate abnormal nuclei (8,841) from normal nuclei (5,762). We extracted voting scores from this trained classifier and created an automated nuclear phenotypic score (aNPS) to identify OPLs at high risk of progression. The new algorithm showed a correct classification rate of 80% (80.6% sensitivity, 79.3% specificity) at the cellular level for the test set, and a correct classification rate of 75% (77.8% sensitivity, 71.4% specificity) at the tissue level with a negative predictive value of 76% and a positive predictive value of 74% for predicting progression among 71 OPLs, performing on par with the manual method in our previous study. We conclude that the newly developed aNPS algorithm serves as a crucial asset in the implementation of high-resolution image analysis in routine clinical pathology practice to identify lesions that require molecular evaluation or more frequent follow-up.

  17. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    PubMed

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  18. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
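
    The cross-channel artifact test described above can be caricatured with connected-component labeling: blobs of structured pixels that appear at the same location in more than one channel are flagged and masked before data extraction. The two-channel layout, intensity thresholds and 50% overlap rule below are assumptions for illustration, not Crossword's actual parameters.

        # Sketch: flag structured pixels that co-occur across channels as artifacts.
        # Thresholds and the overlap criterion are illustrative assumptions.
        import numpy as np
        from scipy import ndimage as ndi

        def artifact_mask(ch1, ch2, thresh=3.0):
            """Return a boolean mask of blobs present in both channels."""
            s1 = ch1 > ch1.mean() + thresh * ch1.std()      # "structured" pixels, channel 1
            s2 = ch2 > ch2.mean() + thresh * ch2.std()      # "structured" pixels, channel 2
            labels, n = ndi.label(s1)                       # connected components in channel 1
            mask = np.zeros_like(s1, dtype=bool)
            for lab in range(1, n + 1):
                blob = labels == lab
                if (blob & s2).sum() > 0.5 * blob.sum():    # blob also present in channel 2 -> artifact
                    mask |= blob
            return mask

        rng = np.random.default_rng(1)
        red, green = rng.normal(size=(2, 128, 128))
        red[40:44, 40:44] += 10; green[40:44, 40:44] += 10  # a correlated dust/scratch artifact
        clean_red = np.where(artifact_mask(red, green), np.nan, red)
        print("pixels removed:", int(np.isnan(clean_red).sum()))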

  19. A fully automated algorithm under modified FCM framework for improved brain MR image segmentation.

    PubMed

    Sikka, Karan; Sinha, Nitesh; Singh, Pankaj K; Mishra, Amit K

    2009-09-01

    Automated brain magnetic resonance image (MRI) segmentation is a complex problem, especially if accompanied by quality depreciating factors such as intensity inhomogeneity and noise. This article presents a new algorithm for automated segmentation of both normal and diseased brain MRI. An entropy driven homomorphic filtering technique has been employed in this work to remove the bias field. The initial cluster centers are estimated using a proposed algorithm called histogram-based local peak merger using adaptive window. Subsequently, a modified fuzzy c-means (MFCM) technique using the neighborhood pixel considerations is applied. Finally, a new technique called neighborhood-based membership ambiguity correction (NMAC) has been used for smoothing the boundaries between different tissue classes as well as to remove small pixel level noise, which appears as misclassified pixels even after the MFCM approach. NMAC leads to much sharper boundaries between tissues and, hence, has been found to be highly effective in prominently estimating the tissue and tumor areas in a brain MR scan. The algorithm has been validated against MFCM and the FMRIB software library using MRI scans from BrainWeb. Superior results to those achieved with the MFCM technique have been observed along with the collateral advantages of fully automatic segmentation, faster computation and faster convergence of the objective function.
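
    For readers unfamiliar with the underlying clustering step, a plain fuzzy c-means iteration on intensity values is sketched below; the neighborhood term of the modified FCM, the bias-field correction and the NMAC smoothing described above are deliberately omitted, and the three-class setup is an assumption.

        # Minimal plain fuzzy c-means on 1-D intensities (sketch only; the paper's MFCM
        # adds neighborhood information, bias-field correction and NMAC smoothing).
        import numpy as np

        def fcm(x, c=3, m=2.0, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            u = rng.random((c, x.size)); u /= u.sum(axis=0)      # random initial memberships
            for _ in range(n_iter):
                um = u ** m
                centers = um @ x / um.sum(axis=1)                # weighted cluster centers
                d = np.abs(x[None, :] - centers[:, None]) + 1e-12
                u = 1.0 / (d ** (2.0 / (m - 1)))
                u /= u.sum(axis=0)                               # normalize memberships per pixel
            return centers, u

        intensities = np.concatenate([np.random.normal(mu, 5, 500) for mu in (40, 110, 180)])
        centers, memberships = fcm(intensities)
        print("tissue-class centers (sketch):", np.sort(centers).round(1))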

  20. Validation of an Algorithm for Semi-automated Estimation of Voice Relative Fundamental Frequency.

    PubMed

    Lien, Yu-An S; Heller Murray, Elizabeth S; Calabrese, Carolyn R; Michener, Carolyn M; Van Stan, Jarrad H; Mehta, Daryush D; Hillman, Robert E; Noordzij, J Pieter; Stepp, Cara E

    2017-10-01

    Relative fundamental frequency (RFF) has shown promise as an acoustic measure of voice, but the subjective and time-consuming nature of its manual estimation has made clinical translation infeasible. Here, a faster, more objective algorithm for RFF estimation is evaluated in a large and diverse sample of individuals with and without voice disorders. Acoustic recordings were collected from 154 individuals with voice disorders and 36 age- and sex-matched controls with typical voices. These recordings were split into training and 2 testing sets. Using an algorithm tuned to the training set, semi-automated RFF estimates in the testing sets were compared to manual RFF estimates derived from 3 trained technicians. The semi-automated RFF estimations were highly correlated (r = 0.82-0.91) with the manual RFF estimates. Fast and more objective estimation of RFF makes large-scale RFF analysis feasible. This algorithm allows for future work to optimize RFF measures and expand their potential for clinical voice assessment.

  1. Improved automated monitoring and new analysis algorithm for circadian phototaxis rhythms in Chlamydomonas.

    PubMed

    Gaskill, Christa; Forbes-Stovall, Jennifer; Kessler, Bruce; Young, Mike; Rinehart, Claire A; Jacobshagen, Sigrid

    2010-04-01

    Automated monitoring of circadian rhythms is an efficient way of gaining insight into oscillation parameters like period and phase for the underlying pacemaker of the circadian clock. Measurement of the circadian rhythm of phototaxis (swimming towards light) exhibited by the green alga Chlamydomonas reinhardtii has been automated by directing a narrow and dim light beam through a culture at regular intervals and determining the decrease in light transmittance due to the accumulation of cells in the beam. In this study, the monitoring process was optimized by constructing a new computer-controlled measuring machine that limits the test beam to wavelengths reported to be specific for phototaxis and by choosing an algal strain, which does not need background illumination between test light cycles for proper expression of the rhythm. As a result, period and phase of the rhythm are now unaffected by the time a culture is placed into the machine. Analysis of the rhythm data was also optimized through a new algorithm, whose robustness was demonstrated using virtual rhythms with various noises. The algorithm differs in particular from other reported algorithms by maximizing the fit of the data to a sinusoidal curve that dampens exponentially. The algorithm was also used to confirm the reproducibility of rhythm monitoring by the machine. Machine and algorithm can now be used for a multitude of circadian clock studies that require unambiguous period and phase determinations such as light pulse experiments to identify the photoreceptor(s) that reset the circadian clock in C. reinhardtii. Copyright 2010 Elsevier Masson SAS. All rights reserved.
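
    The fitting step described above, maximizing the fit of the data to a sinusoid that dampens exponentially, can be sketched with scipy.optimize.curve_fit; the model parameterization, starting values and simulated transmittance record below are assumptions rather than the authors' exact implementation.

        # Sketch: fit an exponentially damped sinusoid to a circadian phototaxis record
        # to estimate period and phase. Model form and initial guesses are assumptions.
        import numpy as np
        from scipy.optimize import curve_fit

        def damped_rhythm(t, amp, tau, period, phase, offset):
            return amp * np.exp(-t / tau) * np.cos(2 * np.pi * (t - phase) / period) + offset

        t = np.arange(0, 120, 0.5)                          # hours of automated monitoring
        truth = damped_rhythm(t, 1.0, 80.0, 24.5, 6.0, 2.0)
        y = truth + np.random.normal(0, 0.1, t.size)        # simulated noisy transmittance

        p0 = (1.0, 50.0, 24.0, 0.0, y.mean())               # rough starting values
        popt, _ = curve_fit(damped_rhythm, t, y, p0=p0, maxfev=20000)
        print("estimated period (h): %.2f, phase (h): %.2f" % (popt[2], popt[3] % popt[2]))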

  2. Algorithms for the Automated Power Systems Management. [for planetary exploration missions

    NASA Technical Reports Server (NTRS)

    Moser, R. L.; Imamura, M. S.

    1979-01-01

    The system breadboard for the demonstration of Automated Power Systems Management (APSM) functions has been designed, fabricated, and is in the final stages of verification and testing. APSM functions fall into categories of fault detection and correction, commanding system and subsystem test and diagnoses, relay position monitoring, data acquisition and processing, and power management. All these functions are accomplished through software residing in 8-bit microprocessors dedicated to each group of Viking Orbiter power breadboard subassemblies and a 16-bit microprocessor serving as the central processor for the power system. This paper describes key monitoring, diagnostic, and control algorithms used in the APSM breadboard.

  4. Fast automated yeast cell counting algorithm using bright-field and fluorescence microscopic images

    PubMed Central

    2013-01-01

    Background The faithful determination of the concentration and viability of yeast cells is important for biological research as well as industry. To this end, it is important to develop an automated cell counting algorithm that can provide not only fast but also accurate and precise measurement of yeast cells. Results With the proposed method, we measured the precision of yeast cell measurements by using 0%, 25%, 50%, 75% and 100% viability samples. As a result, the actual viability measured with the proposed yeast cell counting algorithm is significantly correlated to the theoretical viability (R2 = 0.9991). Furthermore, we evaluated the performance of our algorithm on various computing platforms. The results showed that the proposed algorithm could be feasible to use with low-end computing platforms without loss of its performance. Conclusions Our yeast cell counting algorithm can rapidly provide the total number and the viability of yeast cells with exceptional accuracy and precision. Therefore, we believe that our method can become beneficial for a wide variety of academic fields and industries such as biotechnology, pharmaceutical and alcohol production. PMID:24215650
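
    A toy version of the counting idea is sketched below: cells are segmented by thresholding the bright-field image, labeled with scipy, and viability is taken as the fraction of labeled objects whose mean fluorescence stays below a cutoff. Image sizes, thresholds and the stain polarity (fluorescence marking dead cells) are assumptions for illustration only.

        # Sketch: count yeast cells in a bright-field image and estimate viability from a
        # fluorescence (dead-stain) channel. Thresholds and polarity are assumptions.
        import numpy as np
        from scipy import ndimage as ndi

        def count_and_viability(brightfield, fluorescence, bf_thresh, fl_thresh, min_px=20):
            cells = brightfield > bf_thresh                     # foreground mask
            labels, n = ndi.label(cells)                        # one label per cell (or clump)
            sizes = ndi.sum(cells, labels, index=np.arange(1, n + 1))
            keep = np.flatnonzero(sizes >= min_px) + 1          # drop single-pixel noise specks
            mean_fl = ndi.mean(fluorescence, labels=labels, index=keep)
            n_dead = int(np.sum(np.atleast_1d(mean_fl) > fl_thresh))
            return keep.size, 1.0 - n_dead / max(keep.size, 1)

        rng = np.random.default_rng(2)
        bf = rng.normal(0, 1, (256, 256)); fl = rng.normal(0, 1, (256, 256))
        bf[50:60, 50:60] += 5; bf[100:108, 120:128] += 5        # two synthetic "cells"
        fl[50:60, 50:60] += 5                                   # one of them stains as dead
        print(count_and_viability(bf, fl, bf_thresh=3.0, fl_thresh=1.0))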

  5. Applying hybrid algorithms for text matching to automated biomedical vocabulary mapping.

    PubMed

    Nachimuthu, Senthil K; Lau, Lee Min

    2005-01-01

    Several biomedical vocabularies are often used by clinical applications due to their different domain(s) of coverage, intended use, etc. Mapping them to a reference terminology is essential for inter-systems interoperability. Manual vocabulary mapping is labor-intensive and allows room for inconsistencies. It requires manual searching for synonyms, abbreviation expansions, variations, etc., placing additional burden on the mappers. Furthermore, local vocabularies may use non-standard words and abbreviations, posing additional problems. However, much of this process can be automated to provide decision-support, allowing mappers to focus on steps that absolutely need their expertise. We developed hybrid algorithms comprising rules, permutations, sequence alignment and cost algorithms that utilize the UMLS SPECIALIST Lexicon, a custom knowledgebase and a search engine to automatically find probable matches, allowing mappers to select the best match from this list. We discuss the techniques, results from assisting in mapping a local codeset, and scope for generalizability.

  6. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into the genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.

  7. Hypotheses generation as supervised link discovery with automated class labeling on large-scale biomedical concept networks.

    PubMed

    Katukuri, Jayasimha Reddy; Xie, Ying; Raghavan, Vijay V; Gupta, Ashish

    2012-06-11

    Computational approaches to generate hypotheses from biomedical literature have been studied intensively in recent years. Nevertheless, it still remains a challenge to automatically discover novel, cross-silo biomedical hypotheses from large-scale literature repositories. In order to address this challenge, we first model a biomedical literature repository as a comprehensive network of biomedical concepts and formulate hypotheses generation as a process of link discovery on the concept network. We extract the relevant information from the biomedical literature corpus and generate a concept network and concept-author map on a cluster using the Map-Reduce framework. We extract a set of heterogeneous features such as random walk based features, neighborhood features and common author features. The potential number of links to consider for the possibility of link discovery is large in our concept network and to address the scalability problem, the features from a concept network are extracted using a cluster with the Map-Reduce framework. We further model link discovery as a classification problem carried out on a training data set automatically extracted from two network snapshots taken in two consecutive time durations. A set of heterogeneous features, which cover both topological and semantic features derived from the concept network, have been studied with respect to their impacts on the accuracy of the proposed supervised link discovery process. A case study of hypotheses generation based on the proposed method has been presented in the paper.

  8. Hypotheses generation as supervised link discovery with automated class labeling on large-scale biomedical concept networks

    PubMed Central

    2012-01-01

    Computational approaches to generate hypotheses from biomedical literature have been studied intensively in recent years. Nevertheless, it still remains a challenge to automatically discover novel, cross-silo biomedical hypotheses from large-scale literature repositories. In order to address this challenge, we first model a biomedical literature repository as a comprehensive network of biomedical concepts and formulate hypotheses generation as a process of link discovery on the concept network. We extract the relevant information from the biomedical literature corpus and generate a concept network and concept-author map on a cluster using the Map-Reduce framework. We extract a set of heterogeneous features such as random walk based features, neighborhood features and common author features. The potential number of links to consider for the possibility of link discovery is large in our concept network and to address the scalability problem, the features from a concept network are extracted using a cluster with the Map-Reduce framework. We further model link discovery as a classification problem carried out on a training data set automatically extracted from two network snapshots taken in two consecutive time durations. A set of heterogeneous features, which cover both topological and semantic features derived from the concept network, have been studied with respect to their impacts on the accuracy of the proposed supervised link discovery process. A case study of hypotheses generation based on the proposed method has been presented in the paper. PMID:22759614

  9. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using a hybrid multi-atlas registration, active contours and knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta and aortic root are located by multi-atlas registration followed by active contours refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications from these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001, volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30, volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
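
    For reference, the standard per-slice Agatston calculation that both the expert and the algorithm report can be sketched in a few lines: lesions are connected regions at or above 130 HU, and each contributes its area in mm2 multiplied by a density weight of 1-4 determined by its peak attenuation. The pixel spacing and test array below are assumptions; the paper's atlas registration and territory separation are not reproduced.

        # Sketch of the standard per-slice Agatston score (>=130 HU lesions, area in mm^2
        # weighted 1-4 by peak attenuation). Pixel spacing and test data are assumptions.
        import numpy as np
        from scipy import ndimage as ndi

        def agatston_slice(hu, pixel_area_mm2, min_area_mm2=1.0):
            labels, n = ndi.label(hu >= 130)                   # candidate calcified lesions
            score = 0.0
            for lesion in range(1, n + 1):
                mask = labels == lesion
                area = mask.sum() * pixel_area_mm2
                if area < min_area_mm2:                        # ignore sub-millimetre specks
                    continue
                peak = hu[mask].max()
                weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
                score += area * weight
            return score

        slice_hu = np.full((64, 64), 40.0)                     # soft-tissue background
        slice_hu[10:14, 10:14] = 250.0                         # one calcified plaque
        print("Agatston contribution of this slice:", agatston_slice(slice_hu, pixel_area_mm2=0.45))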

  10. Automated cell colony counting and analysis using the circular Hough image transform algorithm (CHiTA)

    NASA Astrophysics Data System (ADS)

    Bewes, J. M.; Suchowerska, N.; McKenzie, D. R.

    2008-11-01

    We present an automated cell colony counting method that is flexible, robust and capable of providing more in-depth clonogenic analysis than existing manual and automated approaches. The full form of the Hough transform without approximation has been implemented, for the first time. Improvements in computing speed have facilitated this approach. Colony identification was achieved by pre-processing the raw images of the colonies in situ in the flask, including images of the flask edges, by erosion, dilation and Gaussian smoothing processes. Colony edges were then identified by intensity gradient field discrimination. Our technique eliminates the need for specialized hardware for image capture and enables the use of a standard desktop scanner for distortion-free image acquisition. Additional parameters evaluated included regional colony counts, average colony area, nearest neighbour distances and radial distribution. This spatial and qualitative information extends the utility of the clonogenic assay, allowing analysis of spatially-variant cytotoxic effects. To test the automated system, two flask types and three cell lines with different morphology, cell size and plating density were examined. A novel Monte Carlo method of simulating cell colony images, as well as manual counting, were used to quantify algorithm accuracy. The method was able to identify colonies with unusual morphology, to successfully resolve merged colonies and to correctly count colonies adjacent to flask edges.
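
    The colony-identification step can be illustrated with scikit-image's circular Hough transform applied to an edge map; the synthetic image, radius range and peak count below are assumptions, and the full (non-approximated) transform, flask-edge handling and Monte Carlo validation used in the paper are not reproduced.

        # Sketch: detect circular colonies with a circular Hough transform (scikit-image).
        # Synthetic image, radii and peak count are illustrative assumptions.
        import numpy as np
        from skimage.draw import disk
        from skimage.feature import canny
        from skimage.transform import hough_circle, hough_circle_peaks

        image = np.zeros((200, 200))
        for centre, r in [((60, 60), 12), ((140, 90), 9), ((100, 160), 15)]:
            rr, cc = disk(centre, r)
            image[rr, cc] = 1.0                                 # three synthetic colonies

        edges = canny(image, sigma=2.0)                         # edge map of colony borders
        radii = np.arange(6, 20)
        hspaces = hough_circle(edges, radii)
        accums, cx, cy, found_r = hough_circle_peaks(hspaces, radii, total_num_peaks=3)
        for x, y, r in zip(cx, cy, found_r):
            print("colony at row %d, col %d, radius %d px" % (y, x, r))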

  11. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    NASA Astrophysics Data System (ADS)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (IC) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result with an accuracy rate of 95% was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.

  12. Models for identification of erroneous atom-to-atom mapping of reactions performed by automated algorithms.

    PubMed

    Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre

    2012-12-21

    Machine learning (SVM and JRip rule learner) methods have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by an automated mapping tool by ChemAxon. The modeling has been performed on the three first enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by the human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated for CGRs for both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated in n-fold cross-validation on the training set followed by a challenge to detect correct and wrong mappings within an external test set of reactions, never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mapping performed by any automated algorithm.

  13. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Reconstructions of similar or better accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
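
    One generic way to automate regularization of a linear inverse problem of this kind is to choose the Tikhonov parameter by generalized cross-validation (GCV) instead of by trained-user judgment; the toy kernel, noise level and GCV criterion below are a sketch of that idea, not the authors' specific PPTR forward model or procedure.

        # Sketch: Tikhonov-regularized inversion with the regularization parameter chosen
        # automatically by generalized cross-validation (GCV). The smoothing kernel and
        # noise level are generic assumptions, not the PPTR forward model of the paper.
        import numpy as np

        n = 80
        depth = np.linspace(0, 1, n)
        A = np.exp(-np.abs(depth[:, None] - depth[None, :]) / 0.05)   # toy blurring kernel
        x_true = np.exp(-((depth - 0.3) / 0.05) ** 2)                 # buried "chromophore" layer
        y = A @ x_true + np.random.normal(0, 0.01, n)

        U, s, Vt = np.linalg.svd(A)
        beta = U.T @ y

        def gcv(lam):
            f = s**2 / (s**2 + lam)                                   # Tikhonov filter factors
            resid = np.sum(((1 - f) * beta) ** 2)
            return n * resid / (n - f.sum()) ** 2

        lams = np.logspace(-8, 2, 200)
        lam_best = lams[np.argmin([gcv(l) for l in lams])]
        x_rec = Vt.T @ (s / (s**2 + lam_best) * beta)                 # regularized solution
        print("GCV-selected lambda: %.2e, peak depth index: %d" % (lam_best, int(np.argmax(x_rec))))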

  14. Automated quantification of the phagocytosis of Aspergillus fumigatus conidia by a novel image analysis algorithm

    PubMed Central

    Kraibooj, Kaswara; Schoeler, Hanno; Svensson, Carl-Magnus; Brakhage, Axel A.; Figge, Marc Thilo

    2015-01-01

    Studying the pathobiology of the fungus Aspergillus fumigatus has gained a lot of attention in recent years. This is due to the fact that this fungus is a human pathogen that can cause severe diseases, like invasive pulmonary aspergillosis in immunocompromised patients. Because alveolar macrophages belong to the first line of defense against the fungus, here, we conduct an image-based study on the host-pathogen interaction between murine alveolar macrophages and A. fumigatus. This is achieved by an automated image analysis approach that uses a combination of thresholding, watershed segmentation and feature-based object classification. In contrast to previous approaches, our algorithm allows for the segmentation of individual macrophages in the images and this enables us to compute the distribution of phagocytosed and macrophage-adherent conidia over all macrophages. The novel automated image-based analysis provides access to all cell-cell interactions in the assay and thereby represents a framework that enables comprehensive computation of diverse characteristic parameters and comparative investigation for different strains. We here apply automated image analysis to confocal laser scanning microscopy images of the two wild-type strains ATCC 46645 and CEA10 of A. fumigatus and investigate the ability of macrophages to phagocytose the respective conidia. It is found that the CEA10 strain triggers a stronger response of the macrophages as revealed by a higher phagocytosis ratio and a larger portion of the macrophages being active in the phagocytosis process. PMID:26106370

  15. Efficacy and cost-effectiveness of an automated screening algorithm in an inpatient clinical trial.

    PubMed

    Beauharnais, Catherine C; Larkin, Mary E; Zai, Adrian H; Boykin, Emily C; Luttrell, Jennifer; Wexler, Deborah J

    2012-04-01

    Screening and recruitment for clinical trials can be costly and time-consuming. Inpatient trials present additional challenges because enrollment is time sensitive based on length of stay. We hypothesized that using an automated prescreening algorithm to identify eligible subjects would increase screening efficiency and enrollment and be cost-effective compared to manual review of a daily admission list. Using a before-and-after design, we compared time spent screening, number of patients screened, enrollment rate, and cost-effectiveness of each screening method in an inpatient diabetes trial conducted at Massachusetts General Hospital. Manual chart review (CR) involved reviewing a daily list of admitted patients to identify eligible subjects. The automated prescreening (APS) method used an algorithm to generate a daily list of patients with glucose levels ≥ 180 mg/dL, an insulin order, and/or admission diagnosis of diabetes mellitus. The census generated was then manually screened to confirm eligibility and eliminate patients who met our exclusion criteria. We determined rates of screening and enrollment and cost-effectiveness of each method based on study sample size. Total screening time (prescreening and screening) decreased from 4 to 2 h, allowing subjects to be approached earlier in the course of the hospital stay. The average number of patients prescreened per day increased from 13 ± 4 to 30 ± 16 (P < 0.0001). Rate of enrollment increased from 0.17 to 0.32 patients per screening day. Developing the computer algorithm added a fixed cost of US$3000 to the study. Based on our screening and enrollment rates, the algorithm was cost-neutral after enrolling 12 patients. Larger sample sizes further favored screening with an algorithm. By contrast, higher recruitment rates favored individual CR. Because of the before-and-after design of this study, it is possible that unmeasured factors contributed to increased enrollment. Using a computer algorithm to identify
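
    The automated prescreening census amounts to a simple query over admission data; a hedged pandas sketch with hypothetical column names is shown below.

        # Sketch: generate a daily prescreening census of inpatients with glucose >= 180 mg/dL,
        # an insulin order, or an admission diagnosis of diabetes. Column names are hypothetical.
        import pandas as pd

        admissions = pd.DataFrame({
            "mrn":           [101, 102, 103, 104],
            "max_glucose":   [250, 110, 95, 190],
            "insulin_order": [True, False, False, False],
            "admit_dx":      ["diabetes mellitus", "pneumonia", "CHF", "cellulitis"],
        })

        eligible = admissions[
            (admissions["max_glucose"] >= 180)
            | admissions["insulin_order"]
            | admissions["admit_dx"].str.contains("diabetes", case=False)
        ]
        print(eligible["mrn"].tolist())      # census handed to the study coordinator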

  16. Machine-Learning Algorithms to Automate Morphological and Functional Assessments in 2D Echocardiography.

    PubMed

    Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P

    2016-11-29

    Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used for developing an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed and a majority voting method was used for conclusive predictions with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24) followed by mid-left ventricular segmental (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with early-to-late diastolic transmitral velocity ratio (p < 0.01), average early diastolic tissue velocity (e') (p < 0.01), and strain (p = 0.04). Because ATH were younger, adjusted analysis was undertaken in younger HCM patients and compared with ATH with left ventricular wall thickness >13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity relative to early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggested that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling. This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with
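
    A minimal version of the three-algorithm majority-voting ensemble can be written with scikit-learn's VotingClassifier; the synthetic feature matrix, labels and hyperparameters below are placeholders, not the study's speckle-tracking dataset or tuned models.

        # Sketch: majority-vote ensemble of SVM, random forest and neural network, with
        # K-fold cross-validation. Features and hyperparameters are placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X = rng.normal(size=(139, 20))                 # stand-in strain/volume features
        y = rng.integers(0, 2, size=139)               # 0 = ATH, 1 = HCM (labels are synthetic)

        ensemble = VotingClassifier(
            estimators=[
                ("svm", make_pipeline(StandardScaler(), SVC())),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("ann", make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))),
            ],
            voting="hard",                             # simple majority vote
        )
        print("K-fold accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(2))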

  17. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  18. Defining a Patient Population With Cirrhosis: An Automated Algorithm With Natural Language Processing.

    PubMed

    Chang, Edward K; Yu, Christine Y; Clarke, Robin; Hackbarth, Andrew; Sanders, Timothy; Esrailian, Eric; Hommes, Daniel W; Runyon, Bruce A

    The objective of this study was to use natural language processing (NLP) as a supplement to International Classification of Diseases, Ninth Revision (ICD-9) and laboratory values in an automated algorithm to better define and risk-stratify patients with cirrhosis. Identification of patients with cirrhosis by manual data collection is time-intensive and laborious, whereas using ICD-9 codes can be inaccurate. NLP, a novel computerized approach to analyzing electronic free text, has been used to automatically identify patient cohorts with gastrointestinal pathologies such as inflammatory bowel disease. This methodology has not yet been used in cirrhosis. This retrospective cohort study was conducted at the University of California, Los Angeles Health, an academic medical center. A total of 5343 University of California, Los Angeles primary care patients with ICD-9 codes for chronic liver disease were identified during March 2013 to January 2015. An algorithm incorporating NLP of radiology reports, ICD-9 codes, and laboratory data determined whether these patients had cirrhosis. Of the 5343 patients, 168 patient charts were manually reviewed at random as a gold standard comparison. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the algorithm and each of its steps were calculated. The algorithm's PPV, NPV, sensitivity, and specificity were 91.78%, 96.84%, 95.71%, and 93.88%, respectively. The NLP portion was the most important component of the algorithm with PPV, NPV, sensitivity, and specificity of 98.44%, 93.27%, 90.00%, and 98.98%, respectively. NLP is a powerful tool that can be combined with administrative and laboratory data to identify patients with cirrhosis within a population.
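
    The overall logic, free-text radiology evidence supplemented by diagnosis codes and laboratory values, can be caricatured in a few lines; the keyword list, ICD-9 prefix, platelet cutoff and combination rule are hypothetical simplifications of the published NLP pipeline.

        # Sketch: flag possible cirrhosis from a radiology report plus ICD-9 codes and labs.
        # Keywords, the platelet cutoff and the rule itself are hypothetical simplifications.
        import re

        CIRRHOSIS_TERMS = re.compile(r"cirrho|nodular liver contour|portal hypertension", re.I)

        def flag_cirrhosis(report_text, icd9_codes, platelets):
            nlp_hit = bool(CIRRHOSIS_TERMS.search(report_text))
            code_hit = any(code.startswith("571") for code in icd9_codes)   # chronic liver disease
            lab_hit = platelets is not None and platelets < 150             # thrombocytopenia
            return nlp_hit or (code_hit and lab_hit)

        report = "Liver demonstrates a nodular contour with sequelae of portal hypertension."
        print(flag_cirrhosis(report, icd9_codes=["571.5"], platelets=92))   # True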

  19. Automated intensity descent algorithm for interpretation of complex high-resolution mass spectra.

    PubMed

    Chen, Li; Sze, Siu Kwan; Yang, He

    2006-07-15

    This paper describes a new automated intensity descent algorithm for analysis of complex high-resolution mass spectra. The algorithm has been successfully applied to interpret Fourier transform mass spectra of proteins; however, it should be generally applicable to complex high-resolution mass spectra of large molecules recorded by other instruments. The algorithm locates all possible isotopic clusters by a novel peak selection method and a robust cluster subtraction technique according to the order of descending peak intensity after global noise level estimation and baseline correction. The peak selection method speeds up charge state determination and isotopic cluster identification. A Lorentzian-based peak subtraction technique resolves overlapping clusters in high peak density regions. A noise flag value is introduced to minimize false positive isotopic clusters. Moreover, correlation coefficients and matching errors between the identified isotopic multiplets and the averagine isotopic abundance distribution are the criteria for real isotopic clusters. The best fitted averagine isotopic abundance distribution of each isotopic cluster determines the charge state and the monoisotopic mass. Three high-resolution mass spectra were interpreted by the program. The results show that the algorithm is fast in computational speed, robust in identification of overlapping clusters, and efficient in minimization of false positives. In approximately 2 min, the program identified 611 isotopic clusters for a plasma ECD spectrum of carbonic anhydrase. Among them, 50 new identified isotopic clusters, which were missed previously by other methods, have been discovered in the high peak density regions or as weak clusters by this algorithm. As a result, 18 additional new bond cleavages have been identified from the 50 new clusters of carbonic anhydrase.

  20. Automated Reconstruction Algorithm for Identification of 3D Architectures of Cribriform Ductal Carcinoma In Situ

    PubMed Central

    Norton, Kerri-Ann; Namazi, Sameera; Barnard, Nicola; Fujibayashi, Mariko; Bhanot, Gyan; Ganesan, Shridar; Iyatomi, Hitoshi; Ogawa, Koichi; Shinbrot, Troy

    2012-01-01

    Ductal carcinoma in situ (DCIS) is a pre-invasive carcinoma of the breast that exhibits several distinct morphologies but the link between morphology and patient outcome is not clear. We hypothesize that different mechanisms of growth may still result in similar 2D morphologies, which may look different in 3D. To elucidate the connection between growth and 3D morphology, we reconstruct the 3D architecture of cribriform DCIS from resected patient material. We produce a fully automated algorithm that aligns, segments, and reconstructs 3D architectures from microscopy images of 2D serial sections from human specimens. The alignment algorithm is based on normalized cross-correlation; the segmentation algorithm uses histogram equalization, Otsu's thresholding, and morphology techniques to segment the duct and cribra. The reconstruction method combines these images in 3D. We show that two distinct 3D architectures are indeed found in samples whose 2D histological sections are similarly identified as cribriform DCIS. These differences in architecture support the hypothesis that luminal spaces may form due to different mechanisms, either isolated cell death or merging fronds, leading to the different architectures. We find that out of 15 samples, 6 were found to have ‘bubble-like’ cribra, 6 were found to have ‘tube-like’ cribra and 3 were ‘unknown.’ We propose that the 3D architectures found, ‘bubbles’ and ‘tubes’, account for some of the heterogeneity of the disease and may be prognostic indicators of different patient outcomes. PMID:22970156

  1. Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk

    SciTech Connect

    Johannesen, Lars; Vicente, Jose; Hosseini, Meisam; Strauss, David G.

    2016-12-30

    Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however it is not specific as QTc prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-Tpeakc (J-Tpeakc) and Tpeak-Tend intervals can identify QTc prolonging drugs with inward current block and is being proposed as a part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). In this work, we describe an automated measurement methodology for assessment of the J-Tpeakc and Tpeak-Tend intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time point variability (p for all > 0.39). In addition, the time-profile of the baseline and placebo-adjusted changes are within 1 ms for 63% of the time-points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. We have developed an automated algorithm for assessment of J-Tpeakc and Tpeak-Tend intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine if there are unexpected ion channel effects in humans compared to preclinical studies. In conclusion, the algorithm is being released as open-source software.

  2. Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk

    DOE PAGES

    Johannesen, Lars; Vicente, Jose; Hosseini, Meisam; ...

    2016-12-30

    Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however it is not specific as QTc prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-Tpeakc (J-Tpeakc) and Tpeak-Tend intervals can identify QTc prolonging drugs with inward current block and is being proposed as a part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). In this work, we describe an automated measurement methodology for assessment of the J-Tpeakc and Tpeak-Tend intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time point variability (p for all > 0.39). In addition, the time-profile of the baseline and placebo-adjusted changes are within 1 ms for 63% of the time-points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. We have developed an automated algorithm for assessment of J-Tpeakc and Tpeak-Tend intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine if there are unexpected ion channel effects in humans compared to preclinical studies. In conclusion, the algorithm is being released as open-source software.

  3. Automated Algorithm for J-Tpeak and Tpeak-Tend Assessment of Drug-Induced Proarrhythmia Risk

    PubMed Central

    Johannesen, Lars; Vicente, Jose; Hosseini, Meisam; Strauss, David G.

    2016-01-01

    Background Prolongation of the heart rate corrected QT (QTc) interval is a sensitive marker of torsade de pointes risk; however it is not specific as QTc prolonging drugs that block inward currents are often not associated with torsade. Recent work demonstrated that separate analysis of the heart rate corrected J-Tpeakc (J-Tpeakc) and Tpeak-Tend intervals can identify QTc prolonging drugs with inward current block and is being proposed as a part of a new cardiac safety paradigm for new drugs (the “CiPA” initiative). Methods In this work, we describe an automated measurement methodology for assessment of the J-Tpeakc and Tpeak-Tend intervals using the vector magnitude lead. The automated measurement methodology was developed using data from one clinical trial and was evaluated using independent data from a second clinical trial. Results Comparison between the automated and the prior semi-automated measurements shows that the automated algorithm reproduces the semi-automated measurements with a mean difference of single-deltas <1 ms and no difference in intra-time point variability (p for all > 0.39). In addition, the time-profile of the baseline and placebo-adjusted changes are within 1 ms for 63% of the time-points (86% within 2 ms). Importantly, the automated results lead to the same conclusions about the electrophysiological mechanisms of the studied drugs. Conclusions We have developed an automated algorithm for assessment of J-Tpeakc and Tpeak-Tend intervals that can be applied in clinical drug trials. Under the CiPA initiative this ECG assessment would determine if there are unexpected ion channel effects in humans compared to preclinical studies. The algorithm is being released as open-source software. Trial Registration NCT02308748 and NCT01873950 PMID:28036330

  4. Automated Analysis of 1p/19q Status by FISH in Oligodendroglial Tumors: Rationale and Proposal of an Algorithm

    PubMed Central

    Duval, Céline; de Tayrac, Marie; Michaud, Karine; Cabillic, Florian; Paquet, Claudie; Gould, Peter Vincent; Saikali, Stéphan

    2015-01-01

    Objective To propose a new algorithm facilitating automated analysis of 1p and 19q status by FISH technique in oligodendroglial tumors with software packages available in the majority of institutions using this technique. Methods We documented all green/red (G/R) probe signal combinations in a retrospective series of 53 oligodendroglial tumors according to literature guidelines (Algorithm 1) and selected only the most significant combinations for a new algorithm (Algorithm 2). This second algorithm was then validated on a prospective internal series of 45 oligodendroglial tumors and on an external series of 36 gliomas. Results Algorithm 2 utilizes 24 G/R combinations which represent less than 40% of combinations observed with Algorithm 1. The new algorithm excludes some common G/R combinations (1/1, 3/2) and redefines the place of others (defining 1/2 as compatible with normal and 3/3, 4/4 and 5/5 as compatible with imbalanced chromosomal status). The new algorithm uses the combination + ratio method of signal probe analysis to give the best concordance between manual and automated analysis on samples of 100 tumor cells (91% concordance for 1p and 89% concordance for 19q) and full concordance on samples of 200 tumor cells. This highlights the value of automated analysis as a means to identify cases in which a larger number of tumor cells should be studied by manual analysis. Validation of this algorithm on a second series from another institution showed a satisfactory concordance (89%, κ = 0.8). Conclusion Our algorithm can be easily implemented on all existing FISH analysis software platforms and should facilitate multicentric evaluation and standardization of 1p/19q assessment in gliomas with reduction of the professional and technical time required. PMID:26135922
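
    The counting logic can be sketched as follows: green/red signal combinations are tallied per nucleus and the sample is called deleted when the overall green-to-red ratio falls below a cutoff. The accepted combinations, the 0.8 threshold and the 100-nucleus minimum are illustrative assumptions rather than the exact rules of Algorithm 2.

        # Sketch: tally green/red probe-signal combinations per nucleus and call the sample
        # deleted when the overall green-to-red ratio drops below a cutoff. The threshold
        # and minimum nucleus count are illustrative, not the paper's Algorithm 2.
        from collections import Counter

        def classify_1p19q(signal_pairs, ratio_cutoff=0.8, min_nuclei=100):
            combos = Counter(signal_pairs)                      # e.g. {(1, 2): 37, (2, 2): 60, ...}
            if sum(combos.values()) < min_nuclei:
                return "insufficient nuclei - extend manual count"
            green = sum(g * n for (g, r), n in combos.items())
            red = sum(r * n for (g, r), n in combos.items())
            ratio = green / red if red else float("nan")
            return "relative deletion" if ratio < ratio_cutoff else "no deletion"

        # 100 simulated nuclei: mostly one green / two red signals, a pattern suggesting loss
        nuclei = [(1, 2)] * 70 + [(2, 2)] * 30
        print(classify_1p19q(nuclei))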

  5. Genius: a genetic algorithm for automated structure elucidation from 13C NMR spectra.

    PubMed

    Meiler, Jens; Will, Martin

    2002-03-06

    The automated structure elucidation of organic molecules from experimentally obtained properties is extended by an entirely new approach. A genetic algorithm is implemented that uses molecular constitution structures as individuals. With this approach, the structure of organic molecules can be optimized to meet experimental criteria, provided that a fast and accurate method for predicting the relevant physical or chemical features is also available. This is demonstrated using the 13C NMR spectrum as readily obtainable information. Artificial neural networks provide a fast and accurate method for calculating the 13C NMR spectra of the generated structures. The method is implemented and tested successfully for organic molecules with up to 18 non-hydrogen atoms.

  6. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data.

    PubMed

    Lee, Woonghee; Petit, Chad M; Cornilescu, Gabriel; Stark, Jaime L; Markley, John L

    2016-06-01

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27-98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  7. Automated decision algorithm applied to a field experiment with multiple research objectives: The DC3 campaign

    NASA Astrophysics Data System (ADS)

    Hanlon, C. J.; Small, A.; Bose, S.; Young, G. S.; Verlinde, J.

    2013-12-01

    In airborne field campaigns, investigators confront complex decision challenges concerning when and where to deploy aircraft to meet scientific objectives within constraints of time and budgeted flight hours. An automated flight decision recommendation system was developed to assist investigators leading the Deep Convective Clouds and Chemistry (DC3) campaign in spring--summer 2012. In making flight decisions, DC3 investigators needed to integrate two distinct, potentially competing objectives: to maximize the total harvest of data collected, and also to maintain an approximate balance of data collected from each of three geographic study regions. Choices needed to satisfy several constraint conditions including, most prominently, a limit on the total number of flight hours, and a bound on the number of calendar days in the field. An automated recommendation system was developed by translating these objectives and bounds into a formal problem of constrained optimization. In this formalization, a key step involved the mathematical representation of investigators' scientific preferences over the set of possible data collection outcomes. Competing objectives were integrated into a single metric by means of a utility function, which served to quantify the value of alternative data portfolios. Flight recommendations were generated to maximize the expected utility of each daily decision, conditioned on that day's forecast. A calibrated forecast probability of flight success in each study region was generated according to a forecasting system trained on numerical weather prediction model output, as well as expected climatological probability of flight success on future days. System performance was evaluated by comparing the data yielded by the actual DC3 campaign, compared with the yield that would have been realized had the algorithmic recommendations been followed. It was found that the algorithmic system would have achieved 19%--59% greater utility than the decisions
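
    The decision rule, choose the option that maximizes expected utility given calibrated success probabilities and a preference for balanced coverage of the three study regions, can be sketched as below; the utility form, region labels, hours and forecast probabilities are invented for illustration.

        # Sketch: pick today's flight option by maximizing expected utility, where utility
        # rewards total data hours but penalizes imbalance across the study regions.
        # The utility form, probabilities and hour values are invented for illustration.
        import numpy as np

        def utility(hours_by_region, balance_weight=5.0):
            """Reward total flight data but penalize imbalance across regions."""
            h = np.array(list(hours_by_region.values()), dtype=float)
            return h.sum() - balance_weight * (h.max() - h.min())

        def expected_utility(current, region, p_success, flight_hours=8.0):
            flown = dict(current); flown[region] += flight_hours
            return p_success * utility(flown) + (1.0 - p_success) * utility(current)

        current = {"region A": 24.0, "region B": 8.0, "region C": 16.0}
        forecast = {"region A": 0.7, "region B": 0.5, "region C": 0.3}   # calibrated P(success)

        scores = {r: expected_utility(current, r, p) for r, p in forecast.items()}
        scores["stand down"] = utility(current)                 # option of not flying today
        print(max(scores, key=scores.get), {k: round(v, 1) for k, v in scores.items()})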

  8. A reproducible automated segmentation algorithm for corneal epithelium cell images from in vivo laser scanning confocal microscopy.

    PubMed

    Bullet, Julien; Gaujoux, Thomas; Borderie, Vincent; Bloch, Isabelle; Laroche, Laurent

    2014-06-01

    To evaluate an automated process to find borders of corneal basal epithelial cells in pictures obtained from in vivo laser scanning confocal microscopy (Heidelberg Retina Tomograph III with Rostock corneal module). On a sample of 20 normal corneal epithelial pictures, images were segmented through an automated four-step segmentation algorithm. Steps of the algorithm included noise reduction through a fast Fourier transform (FFT) band-pass filter, image binarization with a mean value threshold, a watershed segmentation algorithm on a distance map to separate fused cells, and a Voronoi diagram segmentation algorithm (which gives a final mask of cell borders). Cells were then automatically counted using this border mask. On the original image either with contrast enhancement or noise reduction, cells were manually counted by a trained operator. The average cell density was 7722.5 cells/mm(2) as assessed by automated analysis and 7732.5 cells/mm(2) as assessed by manual analysis (p = 0.93). Correlation between automated and manual analysis was strong (r = 0.974 [0.934-0.990], p < 0.001). The Bland-Altman method gives a mean difference in density of 10 cells/mm(2) and limits of agreement ranging from -971 to +991 cells/mm(2). Visually, the algorithm correctly found almost all borders. This automated segmentation algorithm is suitable for assessing corneal epithelial basal cell density and morphometry. This procedure is fully reproducible, with no operator-induced variability. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
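
    The binarization and distance-map watershed steps of the pipeline can be sketched with scipy and scikit-image as below (the FFT band-pass filter and the final Voronoi border mask are omitted); the synthetic image and parameter choices are assumptions.

        # Sketch of the binarization and distance-map watershed steps used to separate
        # fused cells. The synthetic image and parameters are illustrative assumptions.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        rng = np.random.default_rng(4)
        img = rng.normal(0.0, 0.01, (120, 120))
        yy, xx = np.mgrid[0:120, 0:120]
        for cy, cx in [(40, 40), (40, 64), (80, 80)]:          # two touching cells + one isolated
            img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8.0 ** 2))

        binary = img > img.mean()                              # mean-value threshold
        distance = ndi.distance_transform_edt(binary)          # distance to background
        coords = peak_local_max(distance, min_distance=15, labels=binary)
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        cells = watershed(-distance, markers, mask=binary)     # splits the fused pair
        print("cells found:", cells.max())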

  9. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms

    PubMed Central

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly

    2013-01-01

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare performance of our algorithm to manual segmentation and show that it combines 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. Textural analysis-based machine-learning approach thus offers a high performance condition-invariable tool for automated neurite segmentation. PMID:23261652

  10. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    PubMed

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare performance of our algorithm to manual segmentation and show that it combines 90% accuracy, with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. Textural analysis-based machine-learning approach thus offers a high performance condition-invariable tool for automated neurite segmentation. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Comparative study of maximum likelihood and spectral angle mapper algorithms used for automated detection of melanoma.

    PubMed

    Ibraheem, I

    2015-02-01

    Melanoma is a leading fatal illness responsible for 80% of deaths from skin cancer. It originates in the pigment-producing melanocytes in the basal layer of the epidermis. Melanocytes produce melanin (the dark pigment), which is responsible for the color of skin. As with all cancers, melanoma is caused by damage to the DNA of the cells, which causes the cell to grow out of control, leading to a tumor, which is far more dangerous if it is not detected early. Only a biopsy can establish an exact diagnosis, although it can raise the risk of metastasis. When a melanoma is suspected, the usual standard procedure is to perform a biopsy and to subsequently analyze the suspicious tissue under the microscope. In this paper, we provide a new approach using methods known as 'imaging spectroscopy' or 'spectral imaging' for early detection of melanoma using two different supervised classifier algorithms, maximum likelihood (ML) and spectral angle mapper (SAM). SAM rests on the spectral 'angular distances' and the conventional classifier ML rests on the spectral distance concept. The results show that the ML classifier was more efficient for pixel classification than SAM. However, SAM was more suitable for object classification. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
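
    For readers unfamiliar with the two supervised classifiers being compared, the spectral angle mapper reduces to the arccosine of the normalized dot product between each pixel spectrum and a class reference spectrum, with classification by smallest angle; the reference spectra, band count and rejection threshold below are invented for illustration (a maximum-likelihood classifier would instead use class means and covariances).

        # Sketch of the spectral angle mapper (SAM): assign each pixel spectrum to the class
        # whose reference spectrum makes the smallest spectral angle. Reference spectra and
        # the rejection threshold are invented for illustration.
        import numpy as np

        def sam_classify(pixels, references, max_angle=0.3):
            """pixels: (n, bands); references: dict name -> (bands,) spectrum."""
            names = list(references)
            refs = np.stack([references[n] for n in names])               # (classes, bands)
            cosines = (pixels @ refs.T) / (
                np.linalg.norm(pixels, axis=1, keepdims=True) * np.linalg.norm(refs, axis=1)
            )
            angles = np.arccos(np.clip(cosines, -1.0, 1.0))               # spectral angles (rad)
            best = angles.argmin(axis=1)
            labels = [names[i] if angles[k, i] <= max_angle else "unclassified"
                      for k, i in enumerate(best)]
            return labels, angles

        references = {"melanoma": np.array([0.10, 0.15, 0.30, 0.55]),
                      "benign nevus": np.array([0.20, 0.35, 0.50, 0.60])}
        pixels = np.array([[0.11, 0.16, 0.33, 0.60],
                           [0.22, 0.36, 0.48, 0.58]])
        print(sam_classify(pixels, references)[0])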

  12. Alignment, segmentation and 3-D reconstruction of serial sections based on automated algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Weiguo; Tang, Shaojie; Xu, Qiong; Lian, Qin; Wang, Jin; Li, Dichen

    2012-12-01

    A well-defined three-dimensional (3-D) reconstruction of bone-cartilage transitional structures is crucial for osteochondral restoration. This paper presents an accurate, computationally efficient and fully automated algorithm for the alignment and segmentation of two-dimensional (2-D) serial sections to construct a 3-D model of bone-cartilage transitional structures. The entire system includes the following five components: (1) image harvest, (2) image registration, (3) image segmentation, (4) 3-D reconstruction and visualization, and (5) evaluation. A computer program was developed in the Matlab environment for the automatic alignment and segmentation of serial sections. The automatic alignment algorithm is based on the cross-correlation of the positions of anatomical feature points in two sequential sections. A method combining automatic segmentation with image thresholding was applied to capture the regions and structures of interest. An SEM micrograph and a 3-D model reconstructed directly in a digital microscope were used to evaluate the reliability and accuracy of this strategy. The morphology of the 3-D model constructed from serial sections is consistent with the SEM micrograph and the 3-D model from the digital microscope.
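
    As a hedged illustration of the cross-correlation idea described above (not the Matlab program itself), the sketch below recovers the translation between two synthetic feature-point maps from the peak of their FFT-based cross-correlation; the map size, point count, and shift are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical binary maps of anatomical feature points detected in two
        # sequential sections; the second map is a circularly shifted copy.
        size, n_points = 128, 25
        ref = np.zeros((size, size))
        pts = rng.integers(10, size - 10, size=(n_points, 2))
        ref[pts[:, 0], pts[:, 1]] = 1.0
        true_shift = (7, -4)                         # (rows, cols)
        mov = np.roll(ref, true_shift, axis=(0, 1))

        # FFT-based cross-correlation of the two point maps; the peak location
        # gives the translation needed to align the sections.
        corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))))
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
        est = (size - peak) % size
        est[est > size // 2] -= size                 # wrap to signed shifts
        print("estimated shift:", est.tolist(), "true shift:", list(true_shift))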

  13. An Automated Reference Frame Selection (ARFS) Algorithm for Cone Imaging with Adaptive Optics Scanning Light Ophthalmoscopy

    PubMed Central

    Salmon, Alexander E.; Cooper, Robert F.; Langlo, Christopher S.; Baghaie, Ahmadreza; Dubra, Alfredo; Carroll, Joseph

    2017-01-01

    Purpose To develop an automated reference frame selection (ARFS) algorithm to replace the subjective approach of manually selecting reference frames for processing adaptive optics scanning light ophthalmoscope (AOSLO) videos of cone photoreceptors. Methods Relative distortion was measured within individual frames before conducting image-based motion tracking and sorting of frames into distinct spatial clusters. AOSLO images from nine healthy subjects were processed using ARFS and human-derived reference frames, then aligned to undistorted AO-flood images by nonlinear registration and the registration transformations were compared. The frequency at which humans selected reference frames that were rejected by ARFS was calculated in 35 datasets from healthy subjects, and subjects with achromatopsia, albinism, or retinitis pigmentosa. The level of distortion in this set of human-derived reference frames was assessed. Results The average transformation vector magnitude required for registration of AOSLO images to AO-flood images was significantly reduced from 3.33 ± 1.61 pixels when using manual reference frame selection to 2.75 ± 1.60 pixels (mean ± SD) when using ARFS (P = 0.0016). Between 5.16% and 39.22% of human-derived frames were rejected by ARFS. Only 2.71% to 7.73% of human-derived frames were ranked in the top 5% of least distorted frames. Conclusion ARFS outperforms expert observers in selecting minimally distorted reference frames in AOSLO image sequences. The low success rate in human frame choice illustrates the difficulty in subjectively assessing image distortion. Translational Relevance Manual reference frame selection represented a significant barrier to a fully automated image-processing pipeline (including montaging, cone identification, and metric extraction). The approach presented here will aid in the clinical translation of AOSLO imaging. PMID:28392976

  14. National Automated Surveillance of Hospital-Acquired Bacteremia in Denmark Using a Computer Algorithm.

    PubMed

    Gubbels, Sophie; Nielsen, Jens; Voldstedlund, Marianne; Kristensen, Brian; Schønheyder, Henrik C; Ellermann-Eriksen, Svend; Engberg, Jørgen H; Møller, Jens K; Østergaard, Christian; Mølbak, Kåre

    2017-03-09

    BACKGROUND In 2015, Denmark launched an automated surveillance system for hospital-acquired infections, the Hospital-Acquired Infections Database (HAIBA). OBJECTIVE To describe the algorithm used in HAIBA, to determine its concordance with point prevalence surveys (PPSs), and to present trends for hospital-acquired bacteremia. SETTING Private and public hospitals in Denmark. METHODS A hospital-acquired bacteremia case was defined as at least 1 positive blood culture with at least 1 pathogen (bacterium or fungus) taken between 48 hours after admission and 48 hours after discharge, using the Danish Microbiology Database and the Danish National Patient Registry. PPSs performed in 2012 and 2013 were used for comparison. RESULTS National trends showed an increase in HA bacteremia cases between 2010 and 2014. Incidence was higher for men than women (9.6 vs 5.4 per 10,000 risk days) and was highest for those aged 61-80 years (9.5 per 10,000 risk days). The median daily prevalence was 3.1% (range, 2.1%-4.7%). Regional incidence varied from 6.1 to 8.1 per 10,000 risk days. The microorganisms identified were typical for HA bacteremia. Comparison of HAIBA with PPS showed a sensitivity of 36% and a specificity of 99%. HAIBA was less sensitive for patients in hematology departments and intensive care units. Excluding these departments improved the sensitivity of HAIBA to 44%. CONCLUSIONS Although the estimated sensitivity of HAIBA compared with PPS is low, a PPS is not a gold standard. Given the many advantages of automated surveillance, HAIBA allows monitoring of HA bacteremia across the healthcare system, supports prioritizing preventive measures, and holds promise for evaluating interventions. Infect Control Hosp Epidemiol 2017;1-8.
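
    The case definition above reduces to a simple temporal rule: a positive blood culture drawn between 48 hours after admission and 48 hours after discharge. A minimal Python sketch of that rule follows; the record fields, dates, and helper name are hypothetical and are not taken from HAIBA.

        from datetime import datetime, timedelta

        WINDOW = timedelta(hours=48)

        def is_hospital_acquired(admission, discharge, culture_drawn):
            """Return True if a positive culture falls in the HA window:
            between 48 h after admission and 48 h after discharge."""
            return (admission + WINDOW) <= culture_drawn <= (discharge + WINDOW)

        # Hypothetical example: admitted Jan 1, discharged Jan 10,
        # positive culture drawn on Jan 5 -> hospital-acquired.
        adm = datetime(2014, 1, 1, 8, 0)
        dis = datetime(2014, 1, 10, 12, 0)
        cul = datetime(2014, 1, 5, 9, 30)
        print(is_hospital_acquired(adm, dis, cul))  # True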

  15. Development of a Reliable Automated Algorithm for the Morphometric Analysis of Human Corneal Endothelium.

    PubMed

    Scarpa, Fabio; Ruggeri, Alfredo

    2016-09-01

    Corneal images acquired by in vivo microscopy provide important clinical information on the health state of the corneal endothelium. However, the reliable estimation of the clinical morphometric parameters requires the accurate detection of cell contours in a large number of cells. Thus, for the practical application of this analysis in clinical settings, an automated method is needed. We propose the automatic segmentation of corneal endothelial cell contours through an innovative technique based on a genetic algorithm, which combines information about the typical regularity of endothelial cell shape with the pixel intensity of the actual image. The developed procedure is applied to 30 images acquired with the SP-3000P Topcon specular microscope. Automatic assessment of the clinical parameters is then performed by estimating endothelial cell density (ECD, number of cells per unit area), pleomorphism (fraction of hexagonal cells), and polymegethism (fractional standard deviation of cell areas). Ground truth values for these clinical parameters were obtained from cell contours manually drawn by 2 experts. The mean percent absolute difference between the manual and the automated estimation was 0.6% for ECD, 3.1% for pleomorphism, and 5.3% for polymegethism. Comparable differences were obtained between the estimations provided by the 2 experts (0.5% for ECD, 2.6% for pleomorphism, and 2.9% for polymegethism). No statistically significant difference (P-value > 0.2) was found between automatic and manual assessments of each clinical parameter (power ≥ 77%). The proposed totally automatic method seems capable of obtaining a reliable estimation of the relevant morphometric parameters used in clinical practice.
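
    The three clinical parameters above have direct definitions once cell contours are available. The sketch below computes them from a hypothetical list of segmented cells (area, number of sides); it illustrates only the morphometric step, not the genetic-algorithm segmentation.

        from statistics import mean, pstdev

        # Hypothetical segmented cells: (area in um^2, number of sides)
        cells = [(410, 6), (395, 6), (460, 5), (388, 6), (525, 7), (402, 6)]

        areas = [a for a, _ in cells]
        mean_area = mean(areas)

        # Endothelial cell density: cells per mm^2 (1 mm^2 = 1e6 um^2)
        ecd = 1e6 / mean_area

        # Pleomorphism: fraction of hexagonal cells
        pleomorphism = sum(1 for _, s in cells if s == 6) / len(cells)

        # Polymegethism: coefficient of variation of cell areas
        polymegethism = pstdev(areas) / mean_area

        print(f"ECD = {ecd:.0f} cells/mm^2, "
              f"hexagonality = {pleomorphism:.0%}, "
              f"CV of areas = {polymegethism:.0%}")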

  16. Development and validation of an automated operational modal analysis algorithm for vibration-based monitoring and tensile load estimation

    NASA Astrophysics Data System (ADS)

    Rainieri, Carlo; Fabbrocino, Giovanni

    2015-08-01

    In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure lifespan. However, the lack of automated modal identification and tracking procedures has been for long a relevant drawback to the extensive application of the above-mentioned techniques in the engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing after the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous

  17. Bayesian supervised dimensionality reduction.

    PubMed

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  18. Automated SNP genotype clustering algorithm to improve data completeness in high-throughput SNP genotyping datasets from custom arrays.

    PubMed

    Smith, Edward M; Littrell, Jack; Olivier, Michael

    2007-12-01

    High-throughput SNP genotyping platforms use automated genotype calling algorithms to assign genotypes. While these algorithms work efficiently for individual platforms, they are not compatible with other platforms, and have individual biases that result in missed genotype calls. Here we present data on the use of a second complementary SNP genotype clustering algorithm. The algorithm was originally designed for individual fluorescent SNP genotyping assays, and has been optimized to permit the clustering of large datasets generated from custom-designed Affymetrix SNP panels. In an analysis of data from a 3K array genotyped on 1,560 samples, the additional analysis increased the overall number of genotypes by over 45,000, significantly improving the completeness of the experimental data. This analysis suggests that the use of multiple genotype calling algorithms may be advisable in high-throughput SNP genotyping experiments. The software is written in Perl and is available from the corresponding author.

  19. An efficient automated algorithm to detect ocular surface temperature on sequence of thermograms using snake and target tracing function.

    PubMed

    Tan, Jen Hong; Ng, E Y K; Acharya U, Rajendra

    2011-10-01

    Functional infrared (IR) imaging is widely adopted in the medical field nowadays, with particular emphasis on breast cancer and ocular abnormalities. In this article, an algorithm is presented to accurately locate the eye and cornea in ocular thermographic sequences recorded using functional infrared imaging. The localization is achieved by a snake algorithm coupled with a newly proposed target tracing function. The target tracing function enables automated localization and removes the need for any manual assistance before the algorithm runs. A genetic algorithm is used to search for the global minimum of the function and produce the desired localization. In all the cases we have studied, the region encircled by the algorithm covers, on average, 92% of the true ocular region, while the non-ocular region covered accounts for less than 5% of the encircled region.

  20. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

    SciTech Connect

    Zhao, Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang, Yao

    2012-05-15

    Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by applying manual beam placement by the physicians. The method can be further extended to use multileaf collimator (MLC) blocking by optimizing posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage as measured by volume receiving >95% of the prescription dose (V95%) of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002) and the V95% for the tumor bed with 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using an SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine treated patients, without any relevant degradation of dose homogeneity and coverage in general.
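
    As a hedged sketch of the central idea, a weighted linear SVM whose separating plane defines the tangential beam geometry, the code below fits such a plane to synthetic target and OAR voxels and reads an illustrative angle off the plane normal. The voxel data, class weights, and angle convention are assumptions; the paper's jaw and MLC derivation is not reproduced.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Hypothetical voxel coordinates (x, y, z) in cm:
        # label 1 = breast/CTV voxels, label 0 = OAR voxels (heart, lung).
        target = rng.normal(loc=[0.0, 4.0, 0.0], scale=1.5, size=(300, 3))
        oars = rng.normal(loc=[0.0, -3.0, 0.0], scale=1.5, size=(300, 3))
        X = np.vstack([target, oars])
        y = np.array([1] * len(target) + [0] * len(oars))

        # Relative significance of covering the target vs. sparing OARs is
        # expressed here, purely illustratively, through class weights.
        clf = SVC(kernel="linear", class_weight={1: 2.0, 0: 1.0}, C=1.0)
        clf.fit(X, y)

        w = clf.coef_[0]          # normal of the separating plane
        b = clf.intercept_[0]     # plane offset: w . x + b = 0

        # One possible convention: an in-plane angle from the (x, y)
        # components of the plane normal.
        gantry_deg = np.degrees(np.arctan2(w[1], w[0]))
        print(f"plane normal = {w}, offset = {b:.2f}, "
              f"illustrative gantry angle = {gantry_deg:.1f} deg")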

  1. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J.

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers
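
    The fitting step described above is, at its core, a weighted linear least-squares problem: coefficients are chosen so that linear combinations of bispectrum components reproduce reference QM energies. The sketch below shows an energy-only fit on random stand-in data; the bispectrum components themselves are not computed, and the array shapes and weights are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical training data: each row of B holds the per-configuration
        # sums of bispectrum components; E holds the reference QM energies.
        n_configs, n_components = 200, 30
        B = rng.normal(size=(n_configs, n_components))
        true_coeffs = rng.normal(size=n_components)
        E = B @ true_coeffs + 0.01 * rng.normal(size=n_configs)

        # Per-configuration weights (e.g. emphasizing the most trusted configs).
        w = rng.uniform(0.5, 2.0, size=n_configs)

        # Weighted least squares: scale rows by sqrt(w) and solve.
        sw = np.sqrt(w)
        coeffs, *_ = np.linalg.lstsq(B * sw[:, None], E * sw, rcond=None)

        rmse = np.sqrt(np.mean((B @ coeffs - E) ** 2))
        print(f"fit RMSE = {rmse:.4f} (energy units)")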

  2. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    PubMed Central

    Kaplan, Richard F; Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Bootzin, Richard R

    2014-01-01

    Background A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, and high-frequency and time domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the negative predictive value for detecting sleep are 98.0% and 84.2%, respectively. Overall κ agreement is 0.85 (approaching the level of agreement observed among sleep technologists). These results persist when the sleep disorder subgroups are analyzed separately. Conclusion This study demonstrates that the Z-ALG automated sleep
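
    The epoch-by-epoch comparison described above comes down to a majority vote across technologists followed by standard two-by-two agreement statistics. The sketch below computes sensitivity, specificity, predictive values, and Cohen's kappa for a few hypothetical 30-second epochs; it illustrates the evaluation, not Z-ALG itself.

        from collections import Counter

        # Hypothetical 30-s epoch scores: 1 = sleep, 0 = wake.
        tech_scores = [            # one list per certified technologist
            [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
            [1, 1, 0, 1, 0, 0, 1, 1, 1, 0],
            [1, 0, 0, 1, 1, 0, 1, 1, 1, 1],
        ]
        algo_scores = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]   # automated output

        # Majority vote across scorers gives the consensus per epoch.
        consensus = [Counter(col).most_common(1)[0][0] for col in zip(*tech_scores)]

        tp = sum(a == 1 and c == 1 for a, c in zip(algo_scores, consensus))
        tn = sum(a == 0 and c == 0 for a, c in zip(algo_scores, consensus))
        fp = sum(a == 1 and c == 0 for a, c in zip(algo_scores, consensus))
        fn = sum(a == 0 and c == 1 for a, c in zip(algo_scores, consensus))
        n = tp + tn + fp + fn

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)

        # Cohen's kappa: observed vs. chance agreement.
        po = (tp + tn) / n
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (po - pe) / (1 - pe)

        print(sensitivity, specificity, ppv, npv, round(kappa, 2))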

  3. On the implementation of an automated acoustic output optimization algorithm for subharmonic aided pressure estimation

    PubMed Central

    Dave, J. K.; Halldorsdottir, V. G.; Eisenbrey, J. R.; Merton, D. A.; Liu, J. B.; Machado, P.; Zhao, H.; Park, S.; Dianis, S.; Chalek, C. L.; Thomenius, K. E.; Brown, D. B.; Forsberg, F.

    2013-01-01

    Incident acoustic output (IAO) dependent subharmonic signal amplitudes from ultrasound contrast agents can be categorized into occurrence, growth or saturation stages. Subharmonic aided pressure estimation (SHAPE) is a technique that utilizes growth stage subharmonic signal amplitudes for hydrostatic pressure estimation. In this study, we developed an automated IAO optimization algorithm to identify the IAO level eliciting growth stage subharmonic signals and also studied the effect of pulse length on SHAPE. This approach may help eliminate the problems of acquiring and analyzing the data offline at all IAO levels as was done in previous studies and thus, pave the way for real-time clinical pressure monitoring applications. The IAO optimization algorithm was implemented on a Logiq 9 (GE Healthcare, Milwaukee, WI) scanner interfaced with a computer. The optimization algorithm stepped the ultrasound scanner from 0 to 100 % IAO. A logistic equation fitting function was applied with the criterion of minimum least squared error between the fitted subharmonic amplitudes and the measured subharmonic amplitudes as a function of the IAO levels and the optimum IAO level was chosen corresponding to the inflection point calculated from the fitted data. The efficacy of the optimum IAO level was investigated for in vivo SHAPE to monitor portal vein (PV) pressures in 5 canines and was compared with the performance of IAO levels, below and above the optimum IAO level, for 4, 8 and 16 transmit cycles. The canines received a continuous infusion of Sonazoid microbubbles (1.5 μl/kg/min; GE Healthcare, Oslo, Norway). PV pressures were obtained using a surgically introduced pressure catheter (Millar Instruments, Inc., Houston, TX) and were recorded before and after increasing PV pressures. The experiments showed that optimum IAO levels for SHAPE in the canines ranged from 6 to 40 %. The best correlation between changes in PV pressures and in subharmonic amplitudes (r = -0.76; p = 0
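
    The optimization step above, fitting a logistic curve to subharmonic amplitude versus IAO and taking its inflection point, can be sketched with SciPy as below; the amplitude data are synthetic and the four-parameter logistic is one reasonable parameterization, not necessarily the exact fitting function used on the scanner.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, a, k, x0, c):
            """Four-parameter logistic: subharmonic amplitude vs. IAO (%)."""
            return c + a / (1.0 + np.exp(-k * (x - x0)))

        # Hypothetical measured subharmonic amplitudes (dB) at each IAO step.
        iao = np.arange(0, 101, 5, dtype=float)
        rng = np.random.default_rng(2)
        amp = logistic(iao, a=20.0, k=0.15, x0=35.0, c=-60.0)
        amp += rng.normal(scale=0.5, size=iao.size)

        # Least-squares fit; p0 is a rough initial guess.
        popt, _ = curve_fit(logistic, iao, amp, p0=[10.0, 0.1, 50.0, -60.0])
        a, k, x0, c = popt

        # For a logistic curve the inflection point is at x0, i.e. the IAO
        # level where growth-stage subharmonic behavior is expected.
        print(f"optimum IAO ~ {x0:.1f} %")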

  4. Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.

    2011-01-01

    Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control Jets (RCS), ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods is acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining desired safety margin. Time to RCS jet submersion was predicted analytically across a series of
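
    Both candidate detectors described above reduce to simple tests on IMU acceleration samples: a threshold on the instantaneous acceleration magnitude, or a threshold on the velocity change accumulated over a short window. The sketch below applies both tests to a synthetic signal; the sample rate, thresholds, and impact profile are illustrative and are not Orion values.

        import numpy as np

        rng = np.random.default_rng(3)
        dt = 0.005                       # 200 Hz IMU samples (illustrative)
        t = np.arange(0.0, 4.0, dt)

        # Synthetic acceleration magnitude (g): quiet descent, then an
        # impact spike near t = 3 s.
        acc = 1.0 + 0.05 * rng.normal(size=t.size)
        acc += 8.0 * np.exp(-((t - 3.0) / 0.02) ** 2)

        def spike_detect(acc, threshold_g):
            """Acceleration-magnitude spike detection."""
            idx = np.argmax(acc > threshold_g)
            return idx if acc[idx] > threshold_g else None

        def delta_v_detect(acc, dt, window_s, threshold_mps):
            """Accumulated velocity change over a sliding window."""
            g0 = 9.81
            n = int(window_s / dt)
            dv = np.convolve((acc - 1.0) * g0 * dt, np.ones(n), mode="valid")
            idx = np.argmax(np.abs(dv) > threshold_mps)
            return idx + n - 1 if np.abs(dv[idx]) > threshold_mps else None

        i1 = spike_detect(acc, threshold_g=5.0)
        i2 = delta_v_detect(acc, dt, window_s=0.1, threshold_mps=1.0)
        print("spike detect at t =", None if i1 is None else round(t[i1], 3), "s")
        print("delta-v detect at t =", None if i2 is None else round(t[i2], 3), "s")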

  5. ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning.

    PubMed

    Gandola, Emanuele; Antonioli, Manuela; Traficante, Alessio; Franceschini, Simone; Scardi, Michele; Congestri, Roberta

    2016-05-01

    Toxigenic cyanobacteria are one of the main health risks associated with water resources worldwide, as their toxins can affect humans and fauna exposed via drinking water, aquaculture and recreation. Microscopy monitoring of cyanobacteria in water bodies and massive growth systems is a routine operation for cell abundance and growth estimation. Here we present ACQUA (Automated Cyanobacterial Quantification Algorithm), a new fully automated image analysis method designed for filamentous genera in Bright field microscopy. A pre-processing algorithm has been developed to highlight filaments of interest from background signals due to other phytoplankton and dust. A spline-fitting algorithm has been designed to recombine interrupted and crossing filaments in order to perform accurate morphometric analysis and to extract the surface pattern information of highlighted objects. In addition, 17 specific pattern indicators have been developed and used as input data for a machine-learning algorithm dedicated to the recognition between five widespread toxic or potentially toxic filamentous genera in freshwater: Aphanizomenon, Cylindrospermopsis, Dolichospermum, Limnothrix and Planktothrix. The method was validated using freshwater samples from three Italian volcanic lakes comparing automated vs. manual results. ACQUA proved to be a fast and accurate tool to rapidly assess freshwater quality and to characterize cyanobacterial assemblages in aquatic environments.

  6. Discovery of feature-based hot spots using supervised clustering

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Stepinski, Tomasz F.; Parmar, Rachana; Jiang, Dan; Eick, Christoph F.

    2009-07-01

    Feature-based hot spots are localized regions where the attributes of objects attain high values. There is considerable interest in automatic identification of feature-based hot spots. This paper approaches the problem of finding feature-based hot spots from a data mining perspective, and describes a method that relies on supervised clustering to produce a list of hot spot regions. Supervised clustering uses a fitness function rewarding isolation of the hot spots to optimally subdivide the dataset. The clusters in the optimal division are ranked using an interestingness measure that encapsulates their utility as hot spots. Hot spots are associated with the top ranked clusters. The effectiveness of supervised clustering as a hot spot identification method is evaluated for four conceptually different clustering algorithms using a dataset describing the spatial distribution of ground ice on Mars. Clustering solutions are visualized by specially developed raster approximations. Further assessment of the ability of different algorithms to yield hot spots is performed using raster approximations. A density-based clustering algorithm is found to be the most effective for hot spot identification. The results of the hot spot discovery by supervised clustering are comparable to those obtained using the G* statistic, but the new method offers a high degree of automation, making it an ideal tool for mining large datasets for the existence of potential hot spots.

  7. Application of a Natural Language Processing Algorithm to Asthma Ascertainment. An Automated Chart Review.

    PubMed

    Wi, Chung-Il; Sohn, Sunghwan; Rolfes, Mary C; Seabright, Alicia; Ryu, Euijung; Voge, Gretchen; Bachman, Kay A; Park, Miguel A; Kita, Hirohito; Croghan, Ivana T; Liu, Hongfang; Juhn, Young J

    2017-08-15

    Difficulty of asthma ascertainment and its associated methodologic heterogeneity have created significant barriers to asthma care and research. We evaluated the validity of an existing natural language processing (NLP) algorithm for asthma criteria to enable an automated chart review using electronic medical records (EMRs). The study was designed as a retrospective birth cohort study using a random sample of 500 subjects from the 1997-2007 Mayo Birth Cohort who were born at Mayo Clinic and enrolled in primary pediatric care at Mayo Clinic Rochester. Performance of NLP-based asthma ascertainment using predetermined asthma criteria was assessed by determining both criterion validity (chart review of EMRs by abstractor as a gold standard) and construct validity (association with known risk factors for asthma, such as allergic rhinitis). After excluding three subjects whose respiratory symptoms could be attributed to other conditions (e.g., tracheomalacia), among the remaining eligible 497 subjects, 51% were male, 77% white persons, and the median age at last follow-up date was 11.5 years. The asthma prevalence was 31% in the study cohort. Sensitivity, specificity, positive predictive value, and negative predictive value for NLP algorithm in predicting asthma status were 97%, 95%, 90%, and 98%, respectively. The risk factors for asthma (e.g., allergic rhinitis) that were identified either by NLP or the abstractor were the same. Asthma ascertainment through NLP should be considered in the era of EMRs because it can enable large-scale clinical studies in a more time-efficient manner and improve the recognition and care of childhood asthma in practice.

  8. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    USGS Publications Warehouse

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on the Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer’s accuracy of 93% and a user’s accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
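
    The accuracy figures quoted above, producer's and user's accuracy, are per-class recall and precision computed from a pixel-by-pixel confusion matrix against reference data. A minimal sketch with a hypothetical three-class matrix follows.

        import numpy as np

        classes = ["cultivated", "fallow", "noncropland"]

        # Hypothetical confusion matrix: rows = reference data,
        # columns = ACCA-style map labels.
        cm = np.array([
            [930, 40, 30],
            [50, 760, 40],
            [45, 60, 1045],
        ])

        producers = np.diag(cm) / cm.sum(axis=1)   # recall per reference class
        users = np.diag(cm) / cm.sum(axis=0)       # precision per mapped class
        overall = np.trace(cm) / cm.sum()

        for name, pa, ua in zip(classes, producers, users):
            print(f"{name:12s} producer's = {pa:.1%}  user's = {ua:.1%}")
        print(f"overall accuracy = {overall:.1%}")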

  9. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the Terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures into real-world operations.

  10. Automated Transient Recovery Algorithm using Discrete Zernike Polynomials on Image-Subtracted Data

    NASA Astrophysics Data System (ADS)

    Ackley, Kendall; Eikenberry, Stephen S.; Klimenko, Sergey

    2016-01-01

    We present an unsupervised algorithm for the automated identification of astrophysical transients recovered through image subtraction techniques. We use a set of discrete Zernike polynomials to decompose and characterize residual energy discovered in the final subtracted image, identifying candidate sources which appear point-like in nature. This work is motivated for use in collaboration with Advanced gravitational wave (GW) interferometers, such as Advanced LIGO and Virgo, where multiwavelength electromagnetic (EM) emission is expected in parallel with gravitational radiation from compact binary object mergers of neutron stars (NS-NS) and stellar-mass black holes (NS-BH). Imaging an EM counterpart coincident with a GW trigger will help to constrain the multi-dimensional GW parameter space as well as aid in the resolution of long-standing astrophysical mysteries, such as the true nature of the progenitor relationship between short-duration GRBs and massive compact binary mergers. We are working on making our method an open-source package optimized for low-latency response for community use during the upcoming era of GW astronomy.
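
    The decomposition described above amounts to a least-squares projection of the subtracted image onto a discrete Zernike basis defined on a unit disk. The sketch below uses only a handful of low-order, unnormalized Zernike modes and a synthetic residual; it illustrates the projection step only and is not the authors' pipeline.

        import numpy as np

        N = 64
        y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
        rho = np.hypot(x, y)
        theta = np.arctan2(y, x)
        disk = rho <= 1.0

        # A few low-order (unnormalized) Zernike modes on the unit disk:
        modes = [
            np.ones_like(rho),            # piston
            rho * np.cos(theta),          # tilt x
            rho * np.sin(theta),          # tilt y
            2 * rho**2 - 1,               # defocus
            rho**2 * np.cos(2 * theta),   # astigmatism 0/90
            rho**2 * np.sin(2 * theta),   # astigmatism 45
        ]

        # Hypothetical image-subtraction residual: a faint point-like source
        # plus noise, restricted to the disk.
        rng = np.random.default_rng(4)
        residual = 0.05 * rng.normal(size=(N, N))
        residual += 1.5 * np.exp(-(((x - 0.2) ** 2 + (y + 0.1) ** 2) / 0.01))

        # Least-squares Zernike coefficients over the disk pixels.
        A = np.column_stack([m[disk] for m in modes])
        coeffs, *_ = np.linalg.lstsq(A, residual[disk], rcond=None)

        # Energy not captured by the smooth low-order modes hints at a
        # compact (point-like) candidate source.
        recon = A @ coeffs
        print("coefficients:", np.round(coeffs, 3))
        print("unexplained RMS:", np.round(np.std(residual[disk] - recon), 3))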

  11. Modelling molecule-surface interactions--an automated quantum-classical approach using a genetic algorithm.

    PubMed

    Herbers, Claudia R; Johnston, Karen; van der Vegt, Nico F A

    2011-06-14

    We present an automated and efficient method to develop force fields for molecule-surface interactions. A genetic algorithm (GA) is used to parameterise a classical force field so that the classical adsorption energy landscape of a molecule on a surface matches the corresponding landscape from density functional theory (DFT) calculations. The procedure performs a sophisticated search in the parameter phase space and converges very quickly. The method is capable of fitting a significant number of structures and corresponding adsorption energies. Water on a ZnO(0001) surface was chosen as a benchmark system, but the method is implemented in a flexible way and can be applied to any system of interest. In the present case, pairwise Lennard-Jones (LJ) and Coulomb potentials are used to describe the molecule-surface interactions. In the course of the fitting procedure, the LJ parameters are refined in order to reproduce the adsorption energy landscape. The classical model is capable of describing a wide range of energies, which is essential for a realistic description of a fluid-solid interface.
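
    The parameterization loop described above can be illustrated with a toy genetic algorithm that tunes Lennard-Jones epsilon and sigma so a pairwise energy curve matches reference adsorption energies. The reference curve, GA settings, and fitness function below are hypothetical stand-ins for the DFT landscape and for the authors' GA.

        import numpy as np

        rng = np.random.default_rng(5)

        def lj(r, eps, sigma):
            """Pairwise 12-6 Lennard-Jones energy."""
            sr6 = (sigma / r) ** 6
            return 4.0 * eps * (sr6**2 - sr6)

        # Hypothetical "DFT" adsorption-energy curve to reproduce.
        r_ref = np.linspace(2.5, 6.0, 20)
        e_ref = lj(r_ref, eps=0.25, sigma=3.0)

        def fitness(params):
            eps, sigma = params
            return -np.mean((lj(r_ref, eps, sigma) - e_ref) ** 2)

        # Minimal GA: tournament selection, blend crossover, Gaussian mutation.
        pop = np.column_stack([rng.uniform(0.05, 1.0, 60),   # eps candidates
                               rng.uniform(2.0, 4.5, 60)])   # sigma candidates
        for gen in range(80):
            scores = np.array([fitness(p) for p in pop])
            new_pop = [pop[np.argmax(scores)]]               # elitism
            while len(new_pop) < len(pop):
                i, j = rng.integers(0, len(pop), 2)
                a = pop[i] if scores[i] > scores[j] else pop[j]
                i, j = rng.integers(0, len(pop), 2)
                b = pop[i] if scores[i] > scores[j] else pop[j]
                w = rng.uniform(size=2)
                child = w * a + (1 - w) * b                  # crossover
                child += rng.normal(scale=[0.02, 0.05])      # mutation
                new_pop.append(np.clip(child, [0.01, 1.5], [2.0, 6.0]))
            pop = np.array(new_pop)

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("fitted eps, sigma =", np.round(best, 3))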

  12. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on the Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.

  13. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
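
    The core strategy above, training a Random Forest on a manually measured subset and then inferring architectural traits from automatically extracted image descriptors for the rest of the dataset, maps directly onto scikit-learn, as sketched below with synthetic descriptors and a synthetic trait.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(6)

        # Hypothetical data: 500 root images x 40 automatically extracted
        # descriptors, with one architectural trait (e.g. total root length)
        # measured semi-automatically on a subset.
        X = rng.normal(size=(500, 40))
        trait = X[:, :5].sum(axis=1) + 0.2 * rng.normal(size=500)

        # Train on the manually measured subset, predict on the remainder.
        X_train, X_rest, y_train, y_rest = train_test_split(
            X, trait, train_size=0.3, random_state=0)

        model = RandomForestRegressor(n_estimators=300, random_state=0)
        model.fit(X_train, y_train)

        pred = model.predict(X_rest)
        r2 = model.score(X_rest, y_rest)
        print(f"held-out R^2 for the inferred trait: {r2:.2f}")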

  14. Supervised Autonomy

    ERIC Educational Resources Information Center

    Sexton, Patrick; Levy, Linda S.; Willeford, K. Sean; Barnum, Mary G.; Gardner, Greg; Guyer, M. Susan; Fincher, A. Louise

    2009-01-01

    Objective: The primary objective of this paper is to present the evolution, purpose, and definition of direct supervision in the athletic training clinical education. The secondary objective is to briefly present the factors that may negatively affect the quality of direct supervision to allow remediation and provide higher quality clinical…

  15. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  16. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study.

    PubMed

    Rudyanto, Rina D; Kerkstra, Sjoerd; van Rikxoort, Eva M; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C; Washko, George R; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C; Fabijanska, Anna; Smistad, Erik; Elster, Anne C; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G H; Campo, Arantza; Prokop, Mathias; de Jong, Pim A; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2014-10-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.

  17. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    NASA Technical Reports Server (NTRS)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
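
    As a hedged illustration of the GLIKE/CLASSIFY contrast above, equal class priors versus a priori class probabilities in a Gaussian maximum-likelihood classifier, the sketch below applies the textbook decision rule to synthetic spectral data; it is not the Landsat Mapping System code, and the class statistics and priors are invented.

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(7)

        # Hypothetical training pixels for three land-use classes,
        # each with 4 "spectral" bands.
        means = [np.array([50, 60, 40, 30]),
                 np.array([70, 55, 65, 45]),
                 np.array([40, 80, 55, 60])]
        train = {k: m + rng.normal(scale=5, size=(200, 4)) for k, m in enumerate(means)}

        # Class statistics and (hypothetical) a priori occurrence rates.
        stats = {k: (x.mean(axis=0), np.cov(x, rowvar=False)) for k, x in train.items()}
        priors = {0: 0.6, 1: 0.3, 2: 0.1}

        def classify(pixel, use_priors):
            scores = {}
            for k, (mu, cov) in stats.items():
                ll = multivariate_normal.logpdf(pixel, mean=mu, cov=cov)
                scores[k] = ll + (np.log(priors[k]) if use_priors else 0.0)
            return max(scores, key=scores.get)

        pixel = np.array([55, 62, 48, 36])
        print("equal priors (GLIKE-like):   class", classify(pixel, use_priors=False))
        print("with priors (CLASSIFY-like): class", classify(pixel, use_priors=True))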

  18. Accuracy and validation of an automated electronic algorithm to identify patients with atrial fibrillation at risk for stroke.

    PubMed

    Navar-Boggan, Ann Marie; Rymer, Jennifer A; Piccini, Jonathan P; Shatila, Wassim; Ring, Lauren; Stafford, Judith A; Al-Khatib, Sana M; Peterson, Eric D

    2015-01-01

    There is no universally accepted algorithm for identifying atrial fibrillation (AF) patients and stroke risk using electronic data for use in performance measures. Patients with AF seen in clinic were identified based on International Classification of Diseases, Ninth Revision (ICD-9) codes. CHADS(2) and CHA(2)DS(2)-VASc scores were derived from a broad, 10-year algorithm using ICD-9 codes dating back 10 years and a restrictive, 1-year algorithm that required a diagnosis within the past year. Accuracy of claims-based AF diagnoses and of each stroke risk classification algorithm was evaluated using chart reviews for 300 patients. These algorithms were applied to assess system-wide anticoagulation rates. Between 6/1/2011 and 5/31/2012, we identified 6,397 patients with AF. Chart reviews confirmed AF or atrial flutter in 95.7%. A 1-year algorithm using CHA(2)DS(2)-VASc score ≥2 to identify patients at risk for stroke maximized positive predictive value (97.5% [negative predictive value 65.1%]). The PPV of the 10-year algorithm using CHADS(2) was 88.0%; 12% of those identified as high-risk had CHADS(2) scores <2. Anticoagulation rates were identical using 1-year and 10-year algorithms for patients with CHADS(2) scores ≥2 (58.5% on anticoagulation) and CHA(2)DS(2)-VASc scores ≥2 (56.0% on anticoagulation). Automated methods can be used to identify patients with prevalent AF indicated for anticoagulation but may have misclassification up to 12%, which limits the utility of relying on administrative data alone for quality assessment. Misclassification is minimized by requiring comorbidity diagnoses within the prior year and using a CHA(2)DS(2)-VASc-based algorithm. Despite differences in accuracy between algorithms, system-wide anticoagulation rates assessed were similar regardless of the algorithm used. Copyright © 2014 Elsevier Inc. All rights reserved.
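
    The stroke-risk classification above depends on a small set of comorbidity flags and on how far back qualifying diagnoses are allowed to reach. The sketch below scores a hypothetical patient with the CHA(2)DS(2)-VASc components under a 1-year and a 10-year lookback; the condition names, dates, and data are illustrative and do not reproduce the study's claims logic.

        from datetime import date, timedelta

        def cha2ds2_vasc(age, female, diagnoses, index_date, lookback_years):
            """Score from diagnosis records given as (condition, date) pairs."""
            window_start = index_date - timedelta(days=365 * lookback_years)
            recent = {c for c, d in diagnoses if window_start <= d <= index_date}

            score = 0
            score += 2 if age >= 75 else (1 if age >= 65 else 0)
            score += 1 if female else 0
            score += 1 if "heart_failure" in recent else 0
            score += 1 if "hypertension" in recent else 0
            score += 2 if "stroke_tia" in recent else 0
            score += 1 if "vascular_disease" in recent else 0
            score += 1 if "diabetes" in recent else 0
            return score

        dx = [("hypertension", date(2005, 3, 1)),        # older diagnosis
              ("vascular_disease", date(2004, 11, 20))]  # older diagnosis
        idx = date(2012, 5, 31)

        for years in (1, 10):
            s = cha2ds2_vasc(age=55, female=False, diagnoses=dx,
                             index_date=idx, lookback_years=years)
            print(f"{years:>2}-year lookback: score = {s}, "
                  f"{'at risk (>=2)' if s >= 2 else 'below threshold'}")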

  1. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells.

    PubMed

    Park, Han Sang; Rinehart, Matthew T; Walzer, Katelyn A; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity with early trophozoites most often mistaken for late trophozoite or schizont stage and late trophozoite and schizont stage most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection
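
    The classifier comparison above (linear discriminant classification, logistic regression, and k-nearest neighbors applied to 23 phase-derived descriptors) maps onto standard scikit-learn estimators, as sketched below on synthetic descriptors; it does not reproduce the paper's features or results.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)

        # Hypothetical dataset: 23 morphological descriptors per cell,
        # label 1 = infected (e.g. schizont stage), 0 = uninfected RBC.
        n_per_class = 400
        uninfected = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 23))
        infected = rng.normal(loc=0.8, scale=1.2, size=(n_per_class, 23))
        X = np.vstack([uninfected, infected])
        y = np.array([0] * n_per_class + [1] * n_per_class)

        classifiers = {
            "LDC": LinearDiscriminantAnalysis(),
            "LR": LogisticRegression(max_iter=1000),
            "NNC": KNeighborsClassifier(n_neighbors=5),
        }

        for name, clf in classifiers.items():
            pipe = make_pipeline(StandardScaler(), clf)
            acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
            print(f"{name}: accuracy = {acc.mean():.3f} +/- {acc.std():.3f}")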

  2. Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells

    PubMed Central

    Park, Han Sang; Rinehart, Matthew T.; Walzer, Katelyn A.; Chi, Jen-Tsan Ashley; Wax, Adam

    2016-01-01

    Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0%-66.8% sensitivity with early trophozoites most often mistaken for late trophozoite or schizont stage and late trophozoite and schizont stage most often confused for each other. Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection

  3. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less

  4. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    PubMed Central

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or

  5. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy.

    PubMed

    Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John

    2017-03-15

    Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor's activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or innovative and those who were not rated at all (P>.05). Machine learning
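
    The ensemble step described above can be sketched as follows. This is a hedged illustration rather than the study's code: the example comments, theme labels, and choice of base learners are hypothetical stand-ins for the GMC-CQ data and the eight trained algorithms.

```python
# Minimal sketch, not the study's code: classifying open-text comments into
# themes with several supervised learners and a majority-vote ensemble.
# The comments and theme labels are hypothetical stand-ins for the GMC-CQ data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline

comments = ["always respectful to colleagues", "introduced a new clinic workflow",
            "communicates poorly with the team", "well liked on the ward"]
themes = ["respect", "innovation", "interpersonal skills", "popularity"]

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", MultinomialNB()),
                ("svm", LinearSVC())],
    voting="hard")                       # majority vote across the algorithms
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)
model.fit(comments, themes)              # real use: hundreds of coded comments
print(model.predict(["always respectful and well liked by colleagues"]))
```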

  6. Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches.

    ERIC Educational Resources Information Center

    Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P.

    1997-01-01

    Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…

  7. Design and demonstration of automated data analysis algorithms for ultrasonic inspection of complex composite panels with bonds

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2016-02-01

    To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.

  8. Validation of automated cloud top phase algorithms: distinguishing between cirrus clouds and snow in a priori analyses of AVHRR imagery

    NASA Astrophysics Data System (ADS)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1997-06-01

    Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud, no cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create these analyses for the evaluation of cloud detection algorithms is relatively straightforward, although the task becomes more complicated when little spectral signature is evident between a cloud and its background, as appears to be the case in advanced very high resolution radiometer (AVHRR) imagery when thin cirrus is present over snow-covered surfaces. In addition, complex procedures are needed to help the analyst distinguish between water and ice cloud tops to construct the manual cloud top phase analyses and to ensure that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. Procedures are described that enhance the researcher's ability to (1) distinguish between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery, (2) construct accurate cloud top phase manual analyses, and (3) quantitatively validate the performance of both automated cloud detection and cloud top phase classification algorithms. The methodology uses all AVHRR spectral bands, including a band derived from the daytime 3.7-micrometers channel, which has proven most valuable for discriminating between thin cirrus clouds and snow. It is concluded that while the 1.6-micrometers band is needed to distinguish between snow and water clouds in daytime data, the 3.7-micrometers channel remains essential during the daytime to differentiate between thin ice clouds and snow. Unfortunately, this capability may be lost if the 3.7-micrometers data switch to a nighttime-only transmission with the launch of future National Oceanic and Atmospheric Administration satellites.

  9. ANIE: A Mathematical Algorithm for Automated Indexing of Planar Deformation Features in Shocked Quartz

    NASA Astrophysics Data System (ADS)

    Huber, M. S.; Ferrière, L.; Losiak, A.; Koeberl, C.

    2011-03-01

    A mathematical method of indexing planar deformation features in quartz and a Microsoft Excel macro for automated indexing is presented, allowing for more rapid and accurate results than the previously used manual method.

  10. Automated infrasound signal detection algorithms implemented in MatSeis - Infra Tool.

    SciTech Connect

    Hart, Darren

    2004-07-01

    MatSeis's infrasound analysis tool, Infra Tool, uses frequency slowness processing to deconstruct the array data into three outputs per processing step: correlation, azimuth and slowness. Until now, infrasound signal detection was accomplished manually by an experienced analyst trained to recognize patterns in the signal-processing outputs. Our goal was to automate the process of infrasound signal detection. The critical aspect of infrasound signal detection is to identify consecutive processing steps where the azimuth is constant (flat) while the time-lag correlation of the windowed waveform is above the background value. Together, these two conditions describe the arrival of a correlated set of wavefronts at an array. The Hough Transform and Inverse Slope methods are used to determine the representative slope for a specified number of azimuth data points. The representative slope is then used in conjunction with the associated correlation value and azimuth data variance to determine if and when an infrasound signal was detected. A format for an infrasound signal detection output file is also proposed. The detection output file will list the processed array element names, followed by detection characteristics for each method. Each detection is supplied with a listing of frequency slowness processing characteristics: human time (YYYY/MM/DD HH:MM:SS.SSS), epochal time, correlation, fstat, azimuth (deg) and trace velocity (km/s). As an example, a ground truth event was processed using the four-element DLIAR infrasound array located in New Mexico. The event is known as the Watusi chemical explosion, which occurred on 2002/09/28 at 21:25:17 with an explosive yield of 38,000 lb TNT equivalent. Knowing the source and array location, the array-to-event distance was computed to be approximately 890 km. This test determined the station-to-event azimuth (281.8 and 282.1 degrees) to within 1.6 and 1.4 degrees for the Inverse Slope and Hough Transform detection algorithms, respectively, and
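
    The detection logic described above (a flat azimuth trend together with elevated correlation over consecutive processing steps) can be illustrated with the following sketch. It is an assumption-laden toy, not the MatSeis/Infra Tool source: the window length, slope tolerance, and correlation threshold are hypothetical.

```python
# Rough sketch of the detection idea: over a sliding window of processing
# steps, declare a detection when the azimuth trend is nearly flat (small
# least-squares slope) while the windowed correlation stays above a
# background threshold. Window size and thresholds are hypothetical.
import numpy as np

def detect_infrasound(azimuth_deg, correlation, win=10,
                      max_slope=0.5, min_corr=0.5):
    """Return window-start indices that qualify as detections."""
    detections = []
    steps = np.arange(win)
    for i in range(len(azimuth_deg) - win + 1):
        az = azimuth_deg[i:i + win]
        corr = correlation[i:i + win]
        slope = np.polyfit(steps, az, 1)[0]   # azimuth slope vs. processing step
        if abs(slope) < max_slope and corr.mean() > min_corr:
            detections.append(i)
    return detections

# toy example: random azimuths, then a flat segment near 282 deg with high correlation
az = np.concatenate([np.random.uniform(0, 360, 50), np.full(20, 282.0)])
cc = np.concatenate([np.random.uniform(0, 0.2, 50), np.full(20, 0.8)])
print(detect_infrasound(az, cc))
```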

  11. Retrospective Derivation and Validation of an Automated Electronic Search Algorithm to Identify Post Operative Cardiovascular and Thromboembolic Complications

    PubMed Central

    Tien, M.; Kashyap, R.; Wilson, G. A.; Hernandez-Torres, V.; Jacob, A. K.; Schroeder, D. R.

    2015-01-01

    Summary Background With increasing numbers of hospitals adopting electronic medical records, electronic search algorithms for identifying postoperative complications can be invaluable tools to expedite data abstraction and clinical research to improve patient outcomes. Objectives To derive and validate an electronic search algorithm to identify postoperative thromboembolic and cardiovascular complications such as deep venous thrombosis, pulmonary embolism, or myocardial infarction within 30 days of total hip or knee arthroplasty. Methods A total of 34 517 patients undergoing total hip or knee arthroplasty between January 1, 1996 and December 31, 2013 were identified. Using a derivation cohort of 418 patients, several iterations of a free-text electronic search were developed and refined for each complication. Subsequently, the automated search algorithm was validated on an independent cohort of 2 857 patients, and the sensitivity and specificities were compared to the results of manual chart review. Results In the final derivation subset, the automated search algorithm achieved a sensitivity of 91% and specificity of 85% for deep vein thrombosis, a sensitivity of 96% and specificity of 100% for pulmonary embolism, and a sensitivity of 100% and specificity of 95% for myocardial infarction. When applied to the validation cohort, the search algorithm achieved a sensitivity of 97% and specificity of 99% for deep vein thrombosis, a sensitivity of 97% and specificity of 100% for pulmonary embolism, and a sensitivity of 100% and specificity of 99% for myocardial infarction. Conclusions The derivation and validation of an electronic search strategy can accelerate the data abstraction process for research, quality improvement, and enhancement of patient care, while maintaining superb reliability compared to manual review. PMID:26448798

  12. Retrospective Derivation and Validation of an Automated Electronic Search Algorithm to Identify Post Operative Cardiovascular and Thromboembolic Complications.

    PubMed

    Tien, M; Kashyap, R; Wilson, G A; Hernandez-Torres, V; Jacob, A K; Schroeder, D R; Mantilla, C B

    2015-01-01

    With increasing numbers of hospitals adopting electronic medical records, electronic search algorithms for identifying postoperative complications can be invaluable tools to expedite data abstraction and clinical research to improve patient outcomes. To derive and validate an electronic search algorithm to identify postoperative thromboembolic and cardiovascular complications such as deep venous thrombosis, pulmonary embolism, or myocardial infarction within 30 days of total hip or knee arthroplasty. A total of 34 517 patients undergoing total hip or knee arthroplasty between January 1, 1996 and December 31, 2013 were identified. Using a derivation cohort of 418 patients, several iterations of a free-text electronic search were developed and refined for each complication. Subsequently, the automated search algorithm was validated on an independent cohort of 2 857 patients, and the sensitivity and specificities were compared to the results of manual chart review. In the final derivation subset, the automated search algorithm achieved a sensitivity of 91% and specificity of 85% for deep vein thrombosis, a sensitivity of 96% and specificity of 100% for pulmonary embolism, and a sensitivity of 100% and specificity of 95% for myocardial infarction. When applied to the validation cohort, the search algorithm achieved a sensitivity of 97% and specificity of 99% for deep vein thrombosis, a sensitivity of 97% and specificity of 100% for pulmonary embolism, and a sensitivity of 100% and specificity of 99% for myocardial infarction. The derivation and validation of an electronic search strategy can accelerate the data abstraction process for research, quality improvement, and enhancement of patient care, while maintaining superb reliability compared to manual review.
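
    The abstract does not publish the actual search terms, so the sketch below only illustrates the general shape of such a free-text electronic search for one complication (deep vein thrombosis) and the sensitivity/specificity comparison against manual chart review. The search pattern and notes are hypothetical, and negation handling is deliberately omitted.

```python
# Hedged illustration only: a simple free-text search for one complication
# plus sensitivity/specificity against manual review. Terms and notes are
# hypothetical; a real algorithm would handle negation, abbreviations, etc.
import re

DVT_TERMS = re.compile(r"\b(deep\s+(vein|venous)\s+thrombosis|dvt)\b", re.IGNORECASE)

def flag_dvt(note_text):
    """Return True if the clinical note matches any DVT search term."""
    return bool(DVT_TERMS.search(note_text))

def sensitivity_specificity(flags, truth):
    tp = sum(f and t for f, t in zip(flags, truth))
    tn = sum((not f) and (not t) for f, t in zip(flags, truth))
    fp = sum(f and (not t) for f, t in zip(flags, truth))
    fn = sum((not f) and t for f, t in zip(flags, truth))
    return tp / (tp + fn), tn / (tn + fp)

notes = ["Doppler confirms deep venous thrombosis of the left leg",
         "No evidence of DVT on ultrasound",      # negation is not handled here
         "Uncomplicated postoperative course"]
truth = [True, False, False]                      # manual chart review result
print(sensitivity_specificity([flag_dvt(n) for n in notes], truth))
```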

  13. Classification of acute decompensated heart failure: an automated algorithm compared with a physician reviewer panel: the Atherosclerosis Risk in Communities study.

    PubMed

    Loehr, Laura R; Agarwal, Sunil K; Baggett, Chris; Wruck, Lisa M; Chang, Patricia P; Solomon, Scott D; Shahar, Eyal; Ni, Hanyu; Rosamond, Wayne D; Heiss, Gerardo

    2013-07-01

    An algorithm to classify heart failure (HF) end points inclusive of contemporary measures of biomarkers and echocardiography was recently proposed by an international expert panel. Our objective was to assess agreement of HF classification by this contemporaneous algorithm with that by a standardized physician reviewer panel, when applied to data abstracted from community-based hospital records. During 2005-2007, all hospitalizations were identified from 4 US communities under surveillance as part of the Atherosclerosis Risk in Communities (ARIC) study. Potential HF hospitalizations were sampled by International Classification of Diseases discharge codes and demographics from men and women aged ≥ 55 years. The HF classification algorithm was automated and applied to 2729 (n=13854 weighted hospitalizations) hospitalizations in which either brain natriuretic peptide measures or ejection fraction were documented (mean age, 75 years). There were 1403 (54%; n=7534 weighted) events classified as acute decompensated HF by the automated algorithm, and 1748 (68%; n=9276 weighted) such events by the ARIC reviewer panel. The chance-corrected agreement between acute decompensated HF by physician reviewer panel and the automated algorithm was moderate (κ=0.39). Sensitivity and specificity of the automated algorithm with ARIC reviewer panel as the referent standard were 0.68 (95% confidence interval, 0.67-0.69) and 0.75 (95% confidence interval, 0.74-0.76), respectively. Although the automated classification improved efficiency and decreased costs, its accuracy in classifying HF hospitalizations was modest compared with a standardized physician reviewer panel.
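
    A short worked example of the agreement statistics quoted above: Cohen's kappa, sensitivity, and specificity derived from a 2x2 table of algorithm versus reviewer-panel classifications. The counts below are illustrative round numbers chosen only to roughly reproduce the reported sensitivity and specificity; they are not the ARIC study counts.

```python
# Small worked sketch (placeholder counts, not the ARIC data): chance-corrected
# agreement (Cohen's kappa), sensitivity and specificity from a 2x2 table of
# automated-algorithm vs. physician-reviewer-panel classifications.
def kappa_sens_spec(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n                               # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)        # chance agreement on "ADHF"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)         # chance agreement on "not ADHF"
    kappa = (po - (p_yes + p_no)) / (1 - (p_yes + p_no))
    return kappa, tp / (tp + fn), tn / (tn + fp)

# illustrative counts giving sensitivity 0.68 and specificity 0.75
print(kappa_sens_spec(tp=680, fp=250, fn=320, tn=750))
```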

  14. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    PubMed

    Pierce, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-01

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows runtimes between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC. PACS number: 87.57.C.

  15. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    PubMed

    Pierce Ii, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  16. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms

    PubMed Central

    Pierce, Larry A.; Byrd, Darrin W.; Elston, Brian F.; Karp, Joel S.; Sunderland, John J.; Kinahan, Paul E.

    2016-01-01

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for inter-user variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plugin to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows runtimes between 3 and 4 minutes, and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and max values on the order of 0.5% and 2% respectively over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC. PMID:26894356

  17. An Automated Cropland Classification Algorithm (ACCA) for Tajikistan by combining Landsat, MODIS, and secondary data

    USGS Publications Warehouse

    Thenkabail, Prasad S.; Wu, Zhuoting

    2012-01-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega file data cubes (MFDCs) involving data from Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on MFDC of year 2005 (MFDC2005). The methods involved in producing TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composites (MVC) time-series, and textural characteristics of higher resolution imagery. The TCL statistics accurately matched with the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer’s and user’s accuracies or within 20% quantity disagreement involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving series of rules that are coded, refined, tweaked, and re-coded till ACCA derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived cropland
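
    The published ACCA rule set is not reproduced in the abstract, so the following toy sketch only conveys the flavor of a rule-based cropland classifier operating on stacked per-pixel layers; every layer name and threshold is hypothetical.

```python
# Toy illustration only, not the actual ACCA rules: a rule-based per-pixel
# classifier over stacked layers (NDVI time series, slope, precipitation).
# All thresholds and layer choices are hypothetical.
import numpy as np

def classify_cropland(ndvi_ts, slope_deg, precip_mm):
    """Return 0 = non-crop, 1 = rainfed crop, 2 = irrigated crop per pixel."""
    ndvi_max = ndvi_ts.max(axis=0)
    ndvi_amp = ndvi_max - ndvi_ts.min(axis=0)
    crop = (ndvi_max > 0.5) & (ndvi_amp > 0.3) & (slope_deg < 8)
    # rule: strong greenness despite low precipitation suggests irrigation
    irrigated = crop & (precip_mm < 300) & (ndvi_max > 0.6)
    out = np.zeros(ndvi_max.shape, dtype=np.uint8)
    out[crop] = 1
    out[irrigated] = 2
    return out

ndvi = np.random.uniform(0.0, 0.9, size=(12, 100, 100))   # 12 monthly NDVI MVCs
slope = np.random.uniform(0, 20, size=(100, 100))
precip = np.random.uniform(100, 800, size=(100, 100))
print(np.bincount(classify_cropland(ndvi, slope, precip).ravel(), minlength=3))
```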

  18. Profiling a Caenorhabditis elegans behavioral parametric dataset with a supervised K-means clustering algorithm identifies genetic networks regulating locomotion.

    PubMed

    Zhang, Shijie; Jin, Wei; Huang, Ying; Su, Wei; Yang, Jiong; Feng, Zhaoyang

    2011-04-30

    Defining genetic networks underlying animal behavior in a high throughput manner is an important but challenging task that has not yet been achieved for any organism. Using Caenorhabditis elegans, we collected quantitative parametric data related to various aspects of locomotion from wild type and 31 mutant worm strains with single mutations in genes functioning in sensory reception, neurotransmission, G-protein signaling, neuromuscular control or other facets of motor regulation. We applied unsupervised and constrained K-means clustering algorithms to the data and found that the genes that clustered together due to the behavioral similarity of their mutants encoded proteins in the same signaling networks. This approach provides a framework to identify genes and genetic networks underlying worm neuromotor function in a high-throughput manner. A publicly accessible database harboring the visual and quantitative behavioral data collected in this study adds valuable information to the rapidly growing C. elegans databanks that can be employed in a similar context. Copyright © 2011 Elsevier B.V. All rights reserved.
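
    As a minimal sketch of the clustering step, plain scikit-learn KMeans on standardized parametric features is shown below as a stand-in for the unsupervised and constrained K-means used in the study; the feature matrix and strain names are placeholders.

```python
# Hedged sketch: plain KMeans on standardized locomotion parameters as a
# stand-in for the study's unsupervised/constrained K-means. The feature
# matrix and strain labels are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 40))            # 32 strains x 40 locomotion parameters
strains = [f"strain_{i}" for i in range(32)]

X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_std)
for k in range(6):
    members = [s for s, lab in zip(strains, labels) if lab == k]
    print(f"cluster {k}: {members}")
```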

  19. Weakly supervised glasses removal

    NASA Astrophysics Data System (ADS)

    Wang, Zhicheng; Zhou, Yisu; Wen, Lijie

    2015-03-01

    Glasses removal is an important task in face recognition. In this paper, we present a weakly supervised method to remove eyeglasses from an input face image automatically. We use sparse coding as the face reconstruction method and optical flow to find the exact shape of the glasses, and we combine the two processes iteratively to remove glasses more accurately. The experimental results reveal that our method works much better than either algorithm alone, and it can remove various glasses to obtain natural-looking glassless facial images.

  20. A clarification of the terms used in comparing semi-automated particle selection algorithms in cryo-EM.

    PubMed

    Langlois, Robert; Frank, Joachim

    2011-09-01

    Many cryo-EM datasets are heterogeneous, stemming from molecules undergoing conformational changes. The need to characterize each of the substrates with sufficient resolution entails a large increase in the data flow and motivates the development of more effective automated particle selection algorithms. Concepts and procedures from the machine-learning field are increasingly employed toward this end. However, a review of recent literature has revealed a discrepancy in terminology of the performance scores used to compare particle selection algorithms, and this has subsequently led to ambiguities in the meaning of claimed performance. In an attempt to curtail the perpetuation of this confusion and to disentangle past mistakes, we review the performance of published particle selection efforts with a set of explicitly defined performance scores using the terminology established and accepted within the field of machine learning. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Quantitative mapping of hemodynamics in the lung, brain, and dorsal window chamber-grown tumors using a novel, automated algorithm.

    PubMed

    Fontanella, Andrew N; Schroeder, Thies; Hochman, Daryl W; Chen, Raymond E; Hanna, Gabi; Haglund, Michael M; Rajaram, Narasimhan; Frees, Amy E; Secomb, Timothy W; Palmer, Gregory M; Dewhirst, Mark W

    2013-11-01

    Hemodynamic properties of vascular beds are of great interest in a variety of clinical and laboratory settings. However, there presently exists no automated, accurate, technically simple method for generating blood velocity maps of complex microvessel networks. Here, we present a novel algorithm that addresses the problem of acquiring quantitative maps by applying pixel-by-pixel cross-correlation to video data. Temporal signals at every spatial coordinate are compared with signals at neighboring points, generating a series of correlation maps from which speed and direction are calculated. User-assisted definition of vessel geometries is not required, and sequential data are analyzed automatically, without user bias. Velocity measurements were validated against the dual-slit method and against in vitro capillary flow with known velocities. The algorithm was tested in three different biological models in order to demonstrate its versatility. The hemodynamic maps presented here demonstrate an accurate, quantitative method of analyzing dynamic vascular systems. © 2013 John Wiley & Sons Ltd.
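
    The core idea of the pixel-by-pixel cross-correlation can be sketched as follows (an illustration of the principle, not the published implementation): each pixel's temporal intensity signal is cross-correlated with that of a neighboring pixel, and the lag of the correlation peak converts pixel spacing into a speed estimate along that direction. The frame interval and pixel size below are hypothetical.

```python
# Illustrative sketch of pixel-wise temporal cross-correlation for flow speed.
# Frame rate and pixel spacing are hypothetical; real data would be a video
# of a microvessel network rather than random noise.
import numpy as np

def lag_of_peak_correlation(sig_a, sig_b):
    """Lag (in frames) at which sig_b best matches sig_a; sign gives direction."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    return np.argmax(corr) - (len(a) - 1)

def speed_map(video, dx_um=2.0, dt_s=0.033, shift=1):
    """Per-pixel speed (um/s) from correlation with the right-hand neighbor."""
    n_t, n_y, n_x = video.shape
    speeds = np.zeros((n_y, n_x - shift))
    for y in range(n_y):
        for x in range(n_x - shift):
            lag = lag_of_peak_correlation(video[:, y, x], video[:, y, x + shift])
            # zero lag means no resolvable transit time between the two pixels
            speeds[y, x] = np.nan if lag == 0 else (shift * dx_um) / (lag * dt_s)
    return speeds

video = np.random.rand(200, 8, 8)         # placeholder intensity movie
print(speed_map(video).shape)
```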

  2. Quantitative mapping of hemodynamics in the lung, brain, and dorsal window chamber-grown tumors using a novel, automated algorithm

    PubMed Central

    Fontanella, Andrew N.; Schroeder, Thies; Hochman, Daryl W.; Chen, Raymond E.; Hanna, Gabi; Haglund, Michael M.; Secomb, Timothy W.; Palmer, Gregory M.; Dewhirst, Mark W.

    2013-01-01

    Hemodynamic properties of vascular beds are of great interest in a variety of clinical and laboratory settings. However, there presently exists no automated, accurate, technically simple method for generating blood velocity maps of complex microvessel networks. Here we present a novel algorithm that addresses this problem by applying pixel-by-pixel cross-correlation to video data. Temporal signals at every spatial coordinate are compared with signals at neighboring points, generating a series of correlation maps from which speed and direction are calculated. User assisted definition of vessel geometries is not required, and sequential data are analyzed automatically, without user bias. Velocity measurements are validated against the dual-slit method and against capillary flow with known velocities. The algorithm is tested in three different biological models. Along with simultaneously acquired hemoglobin saturation and vascular geometry information, the hemodynamic maps presented here demonstrate an accurate, quantitative method of analyzing dynamic vascular systems. PMID:23781901

  3. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    SciTech Connect

    Reyhan, M; Yue, N

    2014-06-01

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 x 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help
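
    The processing steps named above (thresholding, erosion, ROI labeling, mean pixel value, calibration) can be sketched as follows. This is not the validated Matlab tool; the synthetic scan, threshold, and calibration function are placeholders.

```python
# Hedged sketch of the described pipeline: threshold, erode, label ROIs,
# take the mean pixel value per film, and apply a calibration. The
# calibration function and synthetic scan are placeholders.
import numpy as np
from scipy import ndimage

def film_rois_to_dose(image, threshold, calibrate, erode_iter=3):
    """Return a dose estimate for each detected film piece."""
    mask = image < threshold                                    # film is darker than background
    mask = ndimage.binary_erosion(mask, iterations=erode_iter)  # trim edges and markings
    labels, n = ndimage.label(mask)
    doses = []
    for k in range(1, n + 1):
        mean_pv = image[labels == k].mean()                     # mean pixel value in the ROI
        doses.append(calibrate(mean_pv))
    return doses

# toy usage: a linear "calibration" and a synthetic scan with two dark films
calibrate = lambda pv: (65535 - pv) * 0.01                      # hypothetical cGy conversion
scan = np.full((200, 200), 60000.0)
scan[50:80, 40:90] = 30000.0
scan[120:150, 40:90] = 20000.0
print(film_rois_to_dose(scan, threshold=50000, calibrate=calibrate))
```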

  4. An automated image segmentation and classification algorithm for immunohistochemically stained tumor cell nuclei

    NASA Astrophysics Data System (ADS)

    Yeo, Hangu; Sheinin, Vadim; Sheinin, Yuri

    2009-02-01

    As medical image data sets are digitized and their number increases exponentially, there is a need for automated image processing and analysis techniques. Most medical imaging methods require human visual inspection and manual measurement, which are labor intensive and often produce inconsistent results. In this paper, we propose an automated image segmentation and classification method that identifies tumor cell nuclei in medical images and classifies these nuclei into two categories, stained and unstained tumor cell nuclei. The proposed method segments and labels individual tumor cell nuclei, separates nuclei clusters, and produces stained and unstained tumor cell nuclei counts. The representative fields of view were chosen by a pathologist from a known diagnosis (clear cell renal cell carcinoma), and the automated results are compared with the results hand-counted by a pathologist.

  5. Automated multiple trajectory planning algorithm for the placement of stereo-electroencephalography (SEEG) electrodes in epilepsy treatment.

    PubMed

    Sparks, Rachel; Zombori, Gergely; Rodionov, Roman; Nowell, Mark; Vos, Sjoerd B; Zuluaga, Maria A; Diehl, Beate; Wehner, Tim; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sebastien

    2017-01-01

    About one-third of individuals with focal epilepsy continue to have seizures despite optimal medical management. These patients are potentially curable with neurosurgery if the epileptogenic zone (EZ) can be identified and resected. Stereo-electroencephalography (SEEG) to record epileptic activity with intracranial depth electrodes may be required to identify the EZ. Each SEEG electrode trajectory, the path between the entry on the skull and the cerebral target, must be planned carefully to avoid trauma to blood vessels and conflicts between electrodes. In current clinical practice trajectories are determined manually, typically taking 2-3 h per patient (15 min per electrode). Manual planning (MP) aims to achieve an implantation plan with good coverage of the putative EZ, an optimal spatial resolution, and 3D distribution of electrodes. Computer-assisted planning tools can reduce planning time by quantifying trajectory suitability. We present an automated multiple trajectory planning (MTP) algorithm to compute implantation plans. MTP uses dynamic programming to determine a set of plans. From this set a depth-first search algorithm finds a suitable plan. We compared our MTP algorithm to (a) MP and (b) an automated single trajectory planning (STP) algorithm on 18 patient plans containing 165 electrodes. MTP changed all 165 trajectories compared to MP. Changes resulted in lower risk (122), increased grey matter sampling (99), shorter length (92), and surgically preferred entry angles (113). MTP changed 42 % (69/165) trajectories compared to STP. Every plan had between 1 to 8 (median 3.5) trajectories changed to resolve electrode conflicts, resulting in surgically preferred plans. MTP is computationally efficient, determining implantation plans containing 7-12 electrodes within 1 min, compared to 2-3 h for MP.

  6. Supervised Surfing.

    ERIC Educational Resources Information Center

    Higginbotham, Julie S.

    1996-01-01

    Acceptable-use policies help ensure appropriate Internet access in a New York State school district. The district's four elementary schools and one each junior high and high school are relying on a two-pronged strategy. A policy version for staff, and one for students and parents, couples tight supervision with a carefully crafted acceptable-use…

  7. Application of Novel Software Algorithms to Spectral-Domain Optical Coherence Tomography for Automated Detection of Diabetic Retinopathy.

    PubMed

    Adhi, Mehreen; Semy, Salim K; Stein, David W; Potter, Daniel M; Kuklinski, Walter S; Sleeper, Harry A; Duker, Jay S; Waheed, Nadia K

    2016-05-01

    To present novel software algorithms applied to spectral-domain optical coherence tomography (SD-OCT) for automated detection of diabetic retinopathy (DR). Thirty-one diabetic patients (44 eyes) and 18 healthy, nondiabetic controls (20 eyes) who underwent volumetric SD-OCT imaging and fundus photography were retrospectively identified. A retina specialist independently graded DR stage. Trained automated software generated a retinal thickness score signifying macular edema and a cluster score signifying microaneurysms and/or hard exudates for each volumetric SD-OCT. Of 44 diabetic eyes, 38 had DR and six eyes did not have DR. Leave-one-out cross-validation using a linear discriminant at missed detection/false alarm ratio of 3.00 computed software sensitivity and specificity of 92% and 69%, respectively, for DR detection when compared to clinical assessment. Novel software algorithms applied to commercially available SD-OCT can successfully detect DR and may have potential as a viable screening tool for DR in future. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:410-417.]. Copyright 2016, SLACK Incorporated.
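
    The evaluation described above, leave-one-out cross-validation of a linear discriminant on per-eye scores, can be written compactly as in the sketch below; the two-feature score matrix and labels are random placeholders, not study data.

```python
# Minimal sketch of the described evaluation: leave-one-out cross-validation
# of a linear discriminant on two per-eye SD-OCT scores (thickness score and
# cluster score). The score values and labels are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 2))              # [thickness score, cluster score] per eye
y = rng.integers(0, 2, size=64)           # 1 = diabetic retinopathy, 0 = control

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```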

  8. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    PubMed

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to the highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues referred to as eplets as essential components of HLA-epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory tests results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to this data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software, which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pairs selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays

    NASA Astrophysics Data System (ADS)

    Forster, Linda; Seefeldner, Meinhard; Wiegner, Matthias; Mayer, Bernhard

    2017-07-01

    Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays was produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated by co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.

  10. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    PubMed

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined if the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit-intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years indicating that it can be used as reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  11. SemiBoost: boosting for semi-supervised learning.

    PubMed

    Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi

    2009-11-01

    Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploiting both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.
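
    SemiBoost itself is more involved, but the "semi-supervised improvement" setting it addresses, wrapping a base supervised learner and letting unlabeled data improve it, can be illustrated with a much simpler self-training loop. The data, base learner, and confidence threshold below are placeholders, and the loop is explicitly not the SemiBoost algorithm.

```python
# Not SemiBoost: a plain self-training loop shown only to illustrate the
# semi-supervised improvement setting, where a base supervised learner
# absorbs its most confident predictions on unlabeled data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_lab = rng.normal(size=(20, 5))
y_lab = np.array([0] * 10 + [1] * 10)            # small labeled set, both classes
X_unlab = rng.normal(size=(500, 5))              # many unlabeled examples

clf = LogisticRegression(max_iter=1000)
X_train, y_train = X_lab.copy(), y_lab.copy()
for _ in range(5):                               # a few self-training rounds
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_unlab).max(axis=1)
    confident = proba > 0.9                      # pseudo-label only confident points
    if not confident.any():
        break
    X_train = np.vstack([X_train, X_unlab[confident]])
    y_train = np.concatenate([y_train, clf.predict(X_unlab[confident])])
    X_unlab = X_unlab[~confident]
print("final training-set size:", len(y_train))
```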

  12. Automating "Word of Mouth" to Recommend Classes to Students: An Application of Social Information Filtering Algorithms

    ERIC Educational Resources Information Center

    Booker, Queen Esther

    2009-01-01

    An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…

  14. Idiopathic Pulmonary Fibrosis in United States Automated Claims. Incidence, Prevalence, and Algorithm Validation.

    PubMed

    Esposito, Daina B; Lanes, Stephan; Donneyong, Macarius; Holick, Crystal N; Lasky, Joseph A; Lederer, David; Nathan, Steven D; O'Quinn, Sean; Parker, Joseph; Tran, Trung N

    2015-11-15

    Estimates of idiopathic pulmonary fibrosis (IPF) incidence and prevalence from electronic databases without case validation may be inaccurate. Develop claims algorithms to identify IPF and assess their positive predictive value (PPV) to estimate incidence and prevalence in the United States. We developed three algorithms to identify IPF cases in the HealthCore Integrated Research Database. Sensitive and specific algorithms were developed based on literature review and consultation with clinical experts. PPVs were assessed using medical records. A third algorithm used logistic regression modeling to generate an IPF score and was validated using a separate set of medical records. We estimated incidence and prevalence of IPF using the sensitive algorithm corrected for the PPV. We identified 4,598 patients using the sensitive algorithm and 2,052 patients using the specific algorithm. After medical record review, the PPVs of these algorithms using the treating clinician's diagnosis were 44.4 and 61.7%, respectively. For the IPF score, the PPV was 76.2%. Using the clinical adjudicator's diagnosis, the PPVs were 54 and 57.6%, respectively, and for the IPF score, the PPV was 83.3%. The incidence and period prevalences of IPF, corrected for the PPV, were 14.6 per 100,000 person-years and 58.7 per 100,000 persons, respectively. Sensitive algorithms without correction for false positive errors overestimated incidence and prevalence of IPF. An IPF score offered the greatest PPV, but it requires further validation.
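
    The PPV correction mentioned above amounts to discounting algorithm-identified cases by the false-positive rate. The short worked example below back-calculates a hypothetical uncorrected rate from the reported corrected incidence and PPV purely to show the arithmetic; the uncorrected figure is not reported in the abstract.

```python
# Worked illustration of the PPV correction: multiplying a raw, algorithm-based
# rate by the algorithm's PPV discounts false positives. The raw rate below is
# back-calculated for illustration only, not a figure from the study.
ppv = 0.444                      # sensitive algorithm, treating clinician's diagnosis
raw_incidence = 32.9             # hypothetical uncorrected cases per 100,000 person-years
corrected = raw_incidence * ppv  # roughly 14.6 per 100,000 person-years
print(f"PPV-corrected incidence: {corrected:.1f} per 100,000 person-years")
```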

  15. Semi-automated algorithm for localization of dermal/epidermal junction in reflectance confocal microscopy images of human skin

    NASA Astrophysics Data System (ADS)

    Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.

    2011-03-01

    The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7μm for dark skin and around 7-14μm for fair skin.
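
    One step of the dark-skin branch described above, finding candidate basal-layer depths as peaks of a tile's smoothed average-intensity depth profile, is sketched below. Tile size, smoothing width, and peak parameters are hypothetical, and the texture-feature disambiguation of multiple peaks is omitted.

```python
# Sketch of one described step: candidate basal-layer depths as peaks of the
# smoothed mean-intensity depth profile of a small tile in a dark-skin RCM
# stack. Tile size, smoothing and peak parameters are hypothetical.
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def candidate_basal_depths(stack, y0, x0, tile=16, smooth=5):
    """stack: (n_depths, H, W) RCM z-stack; returns candidate peak depth indices."""
    tile_mean = stack[:, y0:y0 + tile, x0:x0 + tile].mean(axis=(1, 2))
    profile = uniform_filter1d(tile_mean, size=smooth)    # smoothed depth profile
    peaks, _ = find_peaks(profile, prominence=profile.std() * 0.5)
    return peaks, profile

stack = np.random.rand(60, 128, 128)       # placeholder z-stack
peaks, _ = candidate_basal_depths(stack, y0=32, x0=32)
print("candidate basal-layer depth indices:", peaks)
```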

  16. Supervised multimedia categorization

    NASA Astrophysics Data System (ADS)

    Aldershoff, Frank; Salden, Alfons H.; Iacob, Sorin M.; Kempen, Masja

    2003-01-01

    Static multimedia on the Web is already hard to structure manually. Although unavoidable and necessary, manual annotation of dynamic multimedia becomes even less feasible when multimedia quickly changes in complexity, i.e. in volume, modality, and usage context. The latter context could be set by learning or other purposes of the multimedia material. This multimedia dynamics calls for categorisation systems that index, query and retrieve multimedia objects on the fly in a similar way as a human expert would. We present and demonstrate such a supervised dynamic multimedia object categorisation system. Our categorisation system comes about by continuously gauging it to a group of human experts who annotate raw multimedia for a certain domain ontology given a usage context. Thus our system effectively learns the categorisation behaviour of human experts. By inducing supervised multi-modal content and context-dependent potentials, our categorisation system associates field strengths of raw dynamic multimedia object categorisations with those human experts would assign. After a sufficiently long period of supervised machine learning we arrive at automated robust and discriminative multimedia categorisation. We demonstrate the usefulness and effectiveness of our multimedia categorisation system in retrieving semantically meaningful soccer-video fragments, in particular by taking advantage of multimodal and domain specific information and knowledge supplied by human experts.

  17. The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors

    NASA Astrophysics Data System (ADS)

    Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.; MicroBooNE collaboration

    2017-09-01

    The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology; a series of many tens of algorithms then carefully builds-up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.

  18. CASA: an efficient automated assignment of protein mainchain NMR data using an ordered tree search algorithm.

    PubMed

    Wang, Jianyong; Wang, Tianzhi; Zuiderweg, Erik R P; Crippen, Gordon M

    2005-12-01

    Rapid analysis of protein structure, interaction, and dynamics requires fast and automated assignments of 3D protein backbone triple-resonance NMR spectra. We introduce a new depth-first ordered tree search method of automated assignment, CASA, which uses hand-edited peak-pick lists of a flexible number of triple resonance experiments. The computer program was tested on 13 artificially simulated peak lists for proteins up to 723 residues, as well as on the experimental data for four proteins. Under reasonable tolerances, it generated assignments that correspond to the ones reported in the literature within a few minutes of CPU time. The program was also tested on the proteins analyzed by other methods, with both simulated and experimental peaklists, and it could generate good assignments in all relevant cases. The robustness was further tested under various situations.

  19. Modeling pilot interaction with automated digital avionics systems: Guidance and control algorithms for contour and nap-of-the-Earth flight

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1990-01-01

    A collection of technical papers are presented that cover modeling pilot interaction with automated digital avionics systems and guidance and control algorithms for contour and nap-of-the-earth flight. The titles of the papers presented are as follows: (1) Automation effects in a multiloop manual control system; (2) A qualitative model of human interaction with complex dynamic systems; (3) Generalized predictive control of dynamic systems; (4) An application of generalized predictive control to rotorcraft terrain-following flight; (5) Self-tuning generalized predictive control applied to terrain-following flight; and (6) Precise flight path control using a predictive algorithm.

  20. Validation study of automated dermal/epidermal junction localization algorithm in reflectance confocal microscopy images of skin

    NASA Astrophysics Data System (ADS)

    Kurugol, Sila; Rajadhyaksha, Milind; Dy, Jennifer G.; Brooks, Dana H.

    2012-02-01

    Reflectance confocal microscopy (RCM) has seen increasing clinical application for noninvasive diagnosis of skin cancer. Identifying the location of the dermal-epidermal junction (DEJ) in the image stacks is key for effective clinical imaging. For example, one clinical imaging procedure acquires a dense stack of 0.5x0.5mm FOV images and then, after manual determination of DEJ depth, collects a 5x5mm mosaic at that depth for diagnosis. However, especially in lightly pigmented skin, RCM images have low contrast at the DEJ which makes repeatable, objective visual identification challenging. We have previously published proof of concept for an automated algorithm for DEJ detection in both highly- and lightly-pigmented skin types based on sequential feature segmentation and classification. In lightly-pigmented skin the change of skin texture with depth was detected by the algorithm and used to locate the DEJ. Here we report on further validation of our algorithm on a more extensive collection of 24 image stacks (15 fair skin, 9 dark skin). We compare algorithm performance against classification by three clinical experts. We also evaluate inter-expert consistency among the experts. The average correlation across experts was 0.81 for lightly pigmented skin, indicating the difficulty of the problem. The algorithm achieved epidermis/dermis misclassification rates smaller than 10% (based on 25x25 mm tiles) and average distance from the expert labeled boundaries of ~6.4 μm for fair skin and ~5.3 μm for dark skin, well within average cell size and less than 2x the instrument resolution in the optical axis.

  1. Image processing algorithm for automated monitoring of metal transfer in double-electrode GMAW

    NASA Astrophysics Data System (ADS)

    Wang, Zhen Zhou; Zhang, Yu Ming

    2007-07-01

    Controlled metal transfer in gas metal arc welding (GMAW) implies controllable weld quality. To understand, analyse and control the metal transfer process, the droplet should be monitored and tracked. To process the metal transfer images in double-electrode GMAW (DE-GMAW), a novel modification of GMAW, a brightness-based algorithm is proposed to locate the droplet and compute the droplet size automatically. Although this algorithm can locate the droplet with adequate accuracy, its accuracy in droplet size computation needs improvement. To this end, the algorithm is improved by exploiting the correlation among adjacent images that arises as the droplet develops. Experimental results verified that the improved algorithm can automatically locate the droplets and compute the droplet size with adequate accuracy.
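
    A brightness-based localization step of this kind can be sketched as below. The sketch is illustrative only: the high-quantile threshold and the choice of the largest bright connected region as the droplet are assumptions made for the example, not the paper's exact procedure.

        import numpy as np
        from skimage import measure

        def locate_droplet(frame, brightness_quantile=0.98):
            """Locate the droplet in a metal-transfer image by brightness thresholding.

            Pixels above a high brightness quantile are grouped into connected
            regions and the largest region is taken as the droplet. Returns the
            droplet centroid (row, col) and its area in pixels.
            """
            mask = frame > np.quantile(frame, brightness_quantile)
            labels = measure.label(mask)
            regions = measure.regionprops(labels)
            if not regions:
                return None
            droplet = max(regions, key=lambda r: r.area)
            return droplet.centroid, droplet.area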

  2. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on the statistical analysis of the cumulative frequency distribution of operations among control steps. It tends to escape from local minima and therefore to reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. Experimental results show that it gives optimal solutions, even though it is greedy in nature.

  3. Robust algorithms for automated chemical shift calibration of 1D 1H NMR spectra of blood serum.

    PubMed

    Pearce, Jake T M; Athersuch, Toby J; Ebbels, Timothy M D; Lindon, John C; Nicholson, Jeremy K; Keun, Hector C

    2008-09-15

    In biofluid NMR spectroscopy, the frequency of each resonance is typically calibrated by adding a reference compound such as 3-(trimethylsilyl)propionic acid-d4 (TSP) to the sample. However, biofluids such as serum cannot be referenced to TSP, because binding to macromolecules in solution shifts its resonance. To overcome this limitation we have developed algorithms, based on analysis of derivative spectra, to locate the alpha-glucose anomeric doublet and calibrate (1)H NMR spectra to it. We successfully used these algorithms to calibrate 77 serum (1)H NMR spectra and demonstrate that the calculated chemical-shift corrections are more reproducible (r = 0.97) than those generated by manual alignment (r = 0.8-0.88). Hence these algorithms provide a robust and reproducible method of calibrating (1)H NMR spectra of serum, plasma, or any biofluid in which glucose is abundant. Precise automated calibration of complex biofluid NMR spectra is an important tool in large-scale metabonomic or metabolomic studies, where hundreds or even thousands of spectra may be analyzed at high resolution by pattern recognition.
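
    The core calibration step, locating a known doublet near its nominal chemical shift from the derivative spectrum and shifting the whole axis accordingly, can be sketched as follows. This is a minimal illustration rather than the published algorithm: the nominal alpha-glucose anomeric shift of 5.233 ppm, the search window, and the peak-picking rule are assumptions made for the example.

        import numpy as np

        def calibrate_to_doublet(ppm, intensity, nominal=5.233, window=0.15):
            """Shift the ppm axis so a reference doublet sits at its nominal position.

            Assumes ppm is monotonically ordered. Local maxima inside a search
            window around the nominal shift are found from sign changes of the
            first difference; the two strongest candidates are taken as the
            doublet and the axis is shifted so their midpoint equals `nominal`.
            """
            idx = np.where(np.abs(ppm - nominal) < window)[0]
            seg = intensity[idx]
            d1 = np.diff(seg)
            is_peak = (d1[:-1] > 0) & (d1[1:] <= 0)   # rising then falling
            cand = idx[1:-1][is_peak]
            if len(cand) < 2:
                return ppm                            # leave the axis unchanged
            top2 = cand[np.argsort(intensity[cand])[-2:]]
            midpoint = ppm[top2].mean()
            return ppm - (midpoint - nominal)

    The corrected axis can then be used for all downstream alignment and pattern recognition steps.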

  4. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data

    NASA Technical Reports Server (NTRS)

    Schultz, Elise; Schultz, Christopher Joseph; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2016-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research algorithm to an operational one. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL) and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (a metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
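
    For reference, the POD and FAR statistics used in the evaluation come from a simple contingency of detected jumps against verified severe-weather reports; a minimal helper with placeholder counts:

        def pod_far(hits, misses, false_alarms):
            """Probability of detection and false alarm ratio from contingency counts."""
            pod = hits / (hits + misses) if (hits + misses) else float("nan")
            far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")
            return pod, far

        # Placeholder counts: 45 verified events detected, 5 missed, 15 false alarms.
        print(pod_far(45, 5, 15))  # -> (0.9, 0.25)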

  5. Microcounselling Supervision: An Innovative Integrated Supervision Model

    ERIC Educational Resources Information Center

    Russell-Chapin, Lori A.; Ivey, Allen E.

    2004-01-01

    This article introduces a new integrated model of counselling supervision entitled the Microcounselling Supervision Model (MSM). This type of supervision is designed for supervisors and supervisees who favor eclecticism and work from multiple theoretical orientations. MSM successfully combines skills from various theories and supervision models by…

  6. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.

    PubMed

    Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian

    2011-01-13

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.
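
    The essential loop of such an algorithm-directed optimization can be sketched as a genetic algorithm over a combinatorial library in which compounds with missing data (failed synthesis or assay) simply drop out of parent selection. The encoding (one monomer index per position), the crossover and mutation rates, and the scoring interface below are illustrative assumptions, not the published implementation.

        import random

        def genetic_optimize(score, n_positions, n_monomers, pop_size=20, cycles=10):
            """Toy genetic algorithm over a combinatorial library.

            `score(compound)` returns an activity value, or None when data are
            missing (e.g. failed synthesis); missing compounds are simply ignored
            when ranking parents, which keeps the search robust to gaps.
            Returns the best (compound, score) pair found, or None if no data.
            """
            pop = [tuple(random.randrange(n_monomers) for _ in range(n_positions))
                   for _ in range(pop_size)]
            best, parents = None, pop[:2]
            for _ in range(cycles):
                scored = [(c, s) for c in pop if (s := score(c)) is not None]
                if scored:
                    scored.sort(key=lambda cs: cs[1], reverse=True)
                    if best is None or scored[0][1] > best[1]:
                        best = scored[0]
                    if len(scored) >= 2:
                        parents = [c for c, _ in scored[:max(2, len(scored) // 2)]]
                pop = []
                while len(pop) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_positions)        # one-point crossover
                    child = list(a[:cut] + b[cut:])
                    if random.random() < 0.2:                     # point mutation
                        child[random.randrange(n_positions)] = random.randrange(n_monomers)
                    pop.append(tuple(child))
            return best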

  7. Automated decision algorithm applied to a field experiment with multiple research objectives: The DC3 campaign

    NASA Astrophysics Data System (ADS)

    Hanlon, Christopher J.; Small, Arthur A.; Bose, Satyajit; Young, George S.; Verlinde, Johannes

    2014-10-01

    Automated decision systems have shown the potential to increase data yields from field experiments in atmospheric science. The present paper describes the construction and performance of a flight decision system designed for a case in which investigators pursued multiple, potentially competing objectives. The Deep Convective Clouds and Chemistry (DC3) campaign in 2012 sought in situ airborne measurements of isolated deep convection in three study regions: northeast Colorado, north Alabama, and a larger region extending from central Oklahoma through northwest Texas. As they confronted daily flight launch decisions, campaign investigators sought to achieve two mission objectives that stood in potential tension to each other: to maximize the total amount of data collected while also collecting approximately equal amounts of data from each of the three study regions. Creating an automated decision system involved understanding how investigators would themselves negotiate the trade-offs between these potentially competing goals, and representing those preferences formally using a utility function that served to rank-order the perceived value of alternative data portfolios. The decision system incorporated a custom-built method for generating probabilistic forecasts of isolated deep convection and estimated climatologies calibrated to historical observations. Monte Carlo simulations of alternative future conditions were used to generate flight decision recommendations dynamically consistent with the expected future progress of the campaign. Results show that a strict adherence to the recommendations generated by the automated system would have boosted the data yield of the campaign by between 10 and 57%, depending on the metrics used to score success, while improving portfolio balance.
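
    One simple way to encode such preferences is a utility function that rewards total data collected while penalizing imbalance across the three study regions; candidate flight decisions can then be rank-ordered by expected utility over the Monte Carlo weather realizations. The functional form below (logarithmic return on total hours minus a coefficient-of-variation penalty) is purely illustrative, not the campaign's actual utility function.

        import math

        def portfolio_utility(hours_by_region, balance_weight=2.0):
            """Rank-order data portfolios: more data is better, imbalance is penalized.

            `hours_by_region` maps each study region to flight-hours of data collected.
            Illustrative form: log of total hours (diminishing returns) minus a
            penalty proportional to the coefficient of variation across regions.
            """
            hours = list(hours_by_region.values())
            total = sum(hours)
            if total == 0:
                return 0.0
            mean = total / len(hours)
            cv = math.sqrt(sum((h - mean) ** 2 for h in hours) / len(hours)) / mean
            return math.log(1.0 + total) - balance_weight * cv

        # The balanced 30 h portfolio scores higher than the larger but lopsided 34 h one.
        print(portfolio_utility({"CO": 10, "AL": 9, "OK-TX": 11}))
        print(portfolio_utility({"CO": 30, "AL": 2, "OK-TX": 2}))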

  8. Developing and evaluating an automated appendicitis risk stratification algorithm for pediatric patients in the emergency department

    PubMed Central

    Deleger, Louise; Brodzinski, Holly; Zhai, Haijun; Li, Qi; Lingren, Todd; Kirkendall, Eric S; Alessandrini, Evaline; Solti, Imre

    2013-01-01

    Objective To evaluate a proposed natural language processing (NLP) and machine-learning based automated method to risk stratify abdominal pain patients by analyzing the content of the electronic health record (EHR). Methods We analyzed the EHRs of a random sample of 2100 pediatric emergency department (ED) patients with abdominal pain, including all with a final diagnosis of appendicitis. We developed an automated system to extract relevant elements from ED physician notes and lab values and to automatically assign a risk category for acute appendicitis (high, equivocal, or low), based on the Pediatric Appendicitis Score. We evaluated the performance of the system against a manually created gold standard (chart reviews by ED physicians) for recall, specificity, and precision. Results The system achieved an average F-measure of 0.867 (0.869 recall and 0.863 precision) for risk classification, which was comparable to physician experts. Recall/precision were 0.897/0.952 in the low-risk category, 0.855/0.886 in the high-risk category, and 0.854/0.766 in the equivocal-risk category. The information that the system required as input to achieve high F-measure was available within the first 4 h of the ED visit. Conclusions Automated appendicitis risk categorization based on EHR content, including information from clinical notes, shows comparable performance to physician chart reviewers as measured by their inter-annotator agreement and represents a promising new approach for computerized decision support to promote application of evidence-based medicine at the point of care. PMID:24130231
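
    For reference, the F-measure quoted above is the harmonic mean of recall and precision; a one-line helper reproduces the headline figure (small differences from 0.867 likely reflect rounding or per-category averaging):

        def f_measure(recall, precision):
            """Harmonic mean of recall and precision (F1 score)."""
            return 2.0 * recall * precision / (recall + precision)

        # Headline figures from the abstract: 0.869 recall, 0.863 precision.
        print(round(f_measure(0.869, 0.863), 3))  # ~0.866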

  9. Automated parameter estimation of the Hodgkin-Huxley model using the differential evolution algorithm: application to neuromimetic analog integrated circuits.

    PubMed

    Buhry, Laure; Grassia, Filippo; Giremus, Audrey; Grivel, Eric; Renaud, Sylvie; Saïghi, Sylvain

    2011-10-01

    We propose a new estimation method for the characterization of the Hodgkin-Huxley formalism. This method is an alternative technique to the classical estimation methods associated with voltage clamp measurements. It uses voltage clamp type recordings, but is based on the differential evolution algorithm. The parameters of an ionic channel are estimated simultaneously, such that the usual approximations of classical methods are avoided and all the parameters of the model, including the time constant, can be correctly optimized. In a second step, this new estimation technique is applied to the automated tuning of neuromimetic analog integrated circuits designed by our research group. We present a tuning example of a fast spiking neuron, which reproduces the frequency-current characteristics of the reference data, as well as the membrane voltage behavior. The final goal of this tuning is to interconnect neuromimetic chips as neural networks, with specific cellular properties, for future theoretical studies in neuroscience.
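
    As an illustration of the estimation principle (not the authors' code), a differential evolution routine such as scipy's can fit channel parameters by minimizing the squared error between a simulated and a recorded voltage-clamp current. The single-exponential toy channel model and the parameter bounds are assumptions made for the example.

        import numpy as np
        from scipy.optimize import differential_evolution

        def simulated_current(params, t):
            """Toy activation model: I(t) = g_max * (1 - exp(-t / tau))."""
            g_max, tau = params
            return g_max * (1.0 - np.exp(-t / tau))

        t = np.linspace(0.0, 50.0, 200)                       # time in ms
        recorded = simulated_current((4.2, 7.5), t)           # synthetic "measurement"
        recorded += np.random.default_rng(0).normal(0, 0.05, t.size)

        result = differential_evolution(
            lambda p: np.sum((simulated_current(p, t) - recorded) ** 2),
            bounds=[(0.1, 10.0), (0.5, 30.0)],                # g_max, tau search ranges
            seed=0,
        )
        print(result.x)  # recovers values close to (4.2, 7.5)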

  10. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.

  11. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Kumar, Pankaj; McElhinney, Conor P.; Lewis, Paul; McCarthy, Timothy

    2013-11-01

    Terrestrial mobile laser scanning systems provide rapid and cost effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. The knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all the road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road section environments validates our algorithm. These findings and knowledge provide valuable insights as well as a prototype road edge extraction tool-set, for both national road authorities and survey companies.

  12. Automated Lake Ice Classification of Dual Polarization RADARSAT-2 SAR Imagery with the Iterative Region Growing using Semantics Algorithm

    NASA Astrophysics Data System (ADS)

    Hoekstra, M.; Duguay, C. R.; Clausi, D. A.

    2016-12-01

    Changes to the timing and duration of ice cover on lakes throughout the northern landscape has been established as a strong indicator of climate change and variability, which is expected to have implications for both human and environmental systems. In addition, monitoring the extent and timing of ice cover is also required to allow for more reliable weather forecasting across lake-rich northern latitudes. Currently the Canadian Ice Service (CIS) monitors over 130 lakes using RADARSAT-2 SAR (synthetic aperture radar) and optical imagery. These images are visually interpreted, with lake ice cover reported weekly as a fraction out of ten. An automated method of classifying ice and water in SAR scenes would allow for more detailed records of lake ice extent to be delivered operationally. The Vision and Image Processing lab at University of Waterloo has developed a software system called MAGIC (MAp Guided Ice Classification) which allows for automated classification of SAR scenes. This tool offers the Iterative Region Growing using Semantics (IRGS) algorithm which has been successfully tested in the classification of SAR scenes of sea ice with up to 96% accuracy. The IRGS algorithm separates homogeneous regions in an image using a hierarchical watershed approach, then merges like regions into classes. These classes are labeled using a support vector machine classifier, employing SAR gray-level co-occurrence backscatter and texture features. In this study, we have used the MAGIC system to classify ice and water in dual-polarization RADARSAT-2 scenes of Great Slave Lake and Lake Winnipeg during both freeze-up and break-up periods. An accuracy assessment has been performed on the classification results, comparing outcomes from MAGIC with user generated reference data and the CIS weekly fraction reported at the time of image acquisition. The results demonstrate the potential of the MAGIC system to quickly and accurately provide detailed lake ice cover information for

  13. A Fully Automated Supraglacial lake area and volume Tracking ("FAST") algorithm: development and application using MODIS imagery of West Greenland

    NASA Astrophysics Data System (ADS)

    Williamson, Andrew; Arnold, Neil; Banwell, Alison; Willis, Ian

    2017-04-01

    Supraglacial lakes (SGLs) on the Greenland Ice Sheet (GrIS) influence ice dynamics if draining rapidly by hydrofracture, which can occur in under 24 hours. MODerate-resolution Imaging Spectroradiometer (MODIS) data are often used to investigate SGLs, including calculating SGL area changes through time, but no existing work presents a method that tracks changes in individual (and total) SGL volume in MODIS imagery over a melt season. Here, we present such a method. First, we tested three automated approaches to derive SGL areas from MODIS imagery by comparing calculated areas for the Paakitsoq and Store Glacier regions in West Greenland with areas derived from Landsat-8 (LS8) images. Second, we applied a physically-based depth-calculation algorithm to the pixels within the SGL boundaries from the best performing method, and validated the resultant depths with those calculated using the same method applied to LS8 imagery. Our results indicated that SGL areas are most accurately generated using dynamic thresholding of MODIS band 1 (red) with a 0.640 threshold value. Calculated SGL area, depth and volume values from MODIS were closely comparable to those derived from LS8. The best performing area- and depth-detection methods were then incorporated into a Fully Automated SGL Tracking ("FAST") algorithm that tracks individual SGLs between successive MODIS images. It identified 43 (Paakitsoq) and 19 (Store Glacier) rapidly draining SGLs during 2014, representing 21% and 15% of the respective total SGL populations, including some clusters of rapidly draining SGLs. We found no relationship between the water volumes contained within these rapidly draining SGLs and the ice thicknesses beneath them, indicating that a critical water volume linearly related to ice thickness cannot explain the incidence of rapid drainage. The FAST algorithm, which we believe to be the most comprehensive SGL tracking algorithm developed to date, has the potential to investigate statistical
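
    The area-detection step amounts to thresholding the MODIS red band against the bright ice background; a minimal numpy sketch with the 0.640 ratio taken from the abstract, and with the background estimate (per-scene median reflectance) assumed only for illustration:

        import numpy as np

        def lake_mask(band1, threshold_ratio=0.640):
            """Flag supraglacial-lake pixels in a MODIS band-1 (red) reflectance array.

            Lakes absorb strongly in the red, so pixels whose reflectance falls
            below `threshold_ratio` times the scene's bright-ice background are
            flagged. The background estimate (scene median) is an assumption
            made for this sketch, not the published dynamic-threshold rule.
            """
            background = np.nanmedian(band1)
            return band1 < threshold_ratio * background

        def lake_area_km2(mask, pixel_size_m=250.0):
            """Lake area from the pixel count and the nominal 250 m band-1 pixel size."""
            return mask.sum() * (pixel_size_m ** 2) / 1e6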

  14. A Recursive Multiscale Correlation-Averaging Algorithm for an Automated Distributed Road Condition Monitoring System

    SciTech Connect

    Ndoye, Mandoye; Barker, Alan M; Krogmeier, James; Bullock, Darcy

    2011-01-01

    A signal processing approach is proposed to jointly filter and fuse spatially indexed measurements captured from many vehicles. It is assumed that these measurements are influenced by both sensor noise and measurement indexing uncertainties. Measurements from low-cost vehicle-mounted sensors (e.g., accelerometers and Global Positioning System (GPS) receivers) are properly combined to produce higher quality road roughness data for cost-effective road surface condition monitoring. The proposed algorithms are recursively implemented and thus require only moderate computational power and memory space. These algorithms are important for future road management systems, which will use on-road vehicles as a distributed network of sensing probes gathering spatially indexed measurements for condition monitoring, in addition to other applications, such as environmental sensing and/or traffic monitoring. Our method and the related signal processing algorithms have been successfully tested using field data.
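
    At its simplest, the fusion step maintains a running average of roughness per spatially indexed road segment as new vehicle passes arrive; the sketch below shows only that recursive mean update and omits the paper's multiscale correlation handling of indexing uncertainty.

        def update_road_profile(mean_roughness, counts, measurements):
            """Recursively fuse new spatially indexed roughness measurements.

            mean_roughness, counts: dicts keyed by road-segment index holding the
            running mean and the number of contributing vehicle passes.
            measurements: iterable of (segment_index, roughness) pairs from one pass.
            """
            for seg, value in measurements:
                n = counts.get(seg, 0) + 1
                prev = mean_roughness.get(seg, 0.0)
                mean_roughness[seg] = prev + (value - prev) / n   # incremental mean
                counts[seg] = n
            return mean_roughness, counts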

  15. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Expert systems that require access to data bases, complex simulations and real-time instrumentation have both symbolic and algorithmic computing needs. These needs could both be met using a general computing workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed to demonstrate the ability of an expert system to autonomously control the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. Integration options are explored and several possible solutions are presented.

  16. A robust linear regression based algorithm for automated evaluation of peptide identifications from shotgun proteomics by use of reversed-phase liquid chromatography retention time

    PubMed Central

    Xu, Hua; Yang, Lanhao; Freitas, Michael A

    2008-01-01

    Background Rejection of false positive peptide matches in database searches of shotgun proteomic experimental data is highly desirable. Several methods have been developed that use peptide retention time to refine and improve peptide identifications from database search algorithms. This report describes the implementation of an automated approach to reduce false positives and validate peptide matches. Results A robust linear regression based algorithm was developed to automate the evaluation of peptide identifications obtained from shotgun proteomic experiments. The algorithm scores peptides based on their predicted and observed reversed-phase liquid chromatography retention times. The robust algorithm does not require internal or external peptide standards to train or calibrate the linear regression model used for peptide retention time prediction. The algorithm is generic and can be incorporated into any database search program to perform automated evaluation of the candidate peptide matches based on their retention times. It provides a statistical score for each peptide match based on its retention time. Conclusion Analysis of peptide matches where the retention time score was included resulted in a significant reduction of false positive matches with little effect on the number of true positives. Overall higher sensitivities and specificities were achieved for database searches carried out with MassMatrix, Mascot and X!Tandem after implementation of the retention-time-based score algorithm. PMID:18713471
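
    The scoring idea, fit a robust line between predicted and observed retention times and score each peptide by its residual, can be sketched with a robust regressor such as scikit-learn's HuberRegressor (one possible choice; the paper's own robust linear regression may differ):

        import numpy as np
        from sklearn.linear_model import HuberRegressor

        def retention_time_scores(predicted_rt, observed_rt):
            """Score peptide matches by deviation from a robust predicted-vs-observed fit.

            A robust linear fit down-weights outliers (likely false positives), so no
            internal standards are needed; each match is scored by its standardized
            absolute residual, larger values indicating less plausible identifications.
            """
            X = np.asarray(predicted_rt, dtype=float).reshape(-1, 1)
            y = np.asarray(observed_rt, dtype=float)
            model = HuberRegressor().fit(X, y)
            residuals = y - model.predict(X)
            return np.abs(residuals) / np.std(residuals)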

  17. Statistical Studies of Flux Transfer Events Using Unsupervised and Supervised Techniques

    NASA Astrophysics Data System (ADS)

    Driscoll, J.; Sipes, T. B.; Karimabadi, H.; Sibeck, D. G.; Korotova, G. I.

    2006-12-01

    We report preliminary results concerning the combined use of unsupervised and supervised techniques to classify Geotail FTEs. Currently, humans identify FTEs on the basis of clear isolated bipolar signatures normal to the nominal magnetopause, magnetic field strength enhancements, and sometimes east/west deflections of the magnetic field in the plane of the magnetopause BM. However, events with decreases or crater-like structures in the magnetic field strength, no east/west deflection, and asymmetric or continuous variations normal to the magnetopause have also been identified as FTEs, making statistical studies of FTEs problematical. Data mining techniques are particularly useful in developing automated search algorithms and generating large event lists for statistical studies. Data mining techniques can be divided into two types, supervised and unsupervised. In supervised algorithms, one teaches the algorithm using examples from labeled data. Considering the case of FTEs, the user would provide examples of FTEs as well as examples of non-FTEs and label (as FTE or non-FTE) the data. Since one has to start with a labeled data set, this may already include a user bias in the selection process. To avoid this issue, it can be useful to employ unsupervised techniques. Unsupervised techniques are analogous to training without a teacher: data are not labeled. There is also hybrid modeling where one makes several models, using unsupervised and supervised techniques and then connects them into a hybrid model.

  18. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm

    PubMed Central

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach for studying the expression of thousands of genes simultaneously in a single experiment. In a microarray experiment the average fluorescent intensity of each spot is calculated, and these intensity values closely track the expression level of the corresponding gene. However, determining the correct position of every spot in microarray images is a key challenge on the way to accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing step first removes noise and artifacts from the microarray cells using nonlinear anisotropic diffusion filtering. Then, the center coordinates of each spot are located using mathematical morphology operations. Finally, the position of each spot is determined exactly by applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means (SFCM) clustering algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results show that the segmentation accuracy of the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively. PMID:26284175
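
    The clustering stage rests on fuzzy c-means, in which each pixel receives a graded membership in every cluster rather than a hard label. A compact numpy implementation of the standard FCM updates is sketched below; it omits the spatial and Gaussian-kernel terms of the paper's SFCM variant.

        import numpy as np

        def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
            """Standard fuzzy c-means: returns (memberships, centers).

            X is (n_samples, n_features); m > 1 is the fuzzifier. Memberships are
            updated from inverse distances to the weighted cluster centers.
            """
            rng = np.random.default_rng(seed)
            u = rng.random((X.shape[0], n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                w = u ** m
                centers = (w.T @ X) / w.sum(axis=0)[:, None]
                dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                u = 1.0 / (dist ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)
            return u, centers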

  19. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    PubMed

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms, in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and a sorting error of 10%, compared to PCA, which, combined with k-means, had a sorting accuracy of 58% and a sorting error of 10%.
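
    The two stages of the proposed sorter map onto standard tools: a Laplacian-eigenmaps embedding of the spike waveforms followed by k-means on the embedded coordinates. A minimal scikit-learn sketch, with the embedding dimension and neighbourhood size chosen only for illustration:

        import numpy as np
        from sklearn.manifold import SpectralEmbedding
        from sklearn.cluster import KMeans

        def sort_spikes(waveforms, n_units=3, n_components=2, n_neighbors=15, seed=0):
            """Cluster spike waveforms (n_spikes, n_samples) into putative units.

            Laplacian eigenmaps (SpectralEmbedding) provide the low-dimensional
            features; k-means then assigns each spike to a unit.
            """
            features = SpectralEmbedding(
                n_components=n_components, n_neighbors=n_neighbors
            ).fit_transform(np.asarray(waveforms))
            labels = KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit_predict(features)
            return labels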

  20. Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm.

    PubMed

    Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein

    2015-01-01

    DNA microarrays are a powerful approach for studying the expression of thousands of genes simultaneously in a single experiment. In a microarray experiment the average fluorescent intensity of each spot is calculated, and these intensity values closely track the expression level of the corresponding gene. However, determining the correct position of every spot in microarray images is a key challenge on the way to accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing step first removes noise and artifacts from the microarray cells using nonlinear anisotropic diffusion filtering. Then, the center coordinates of each spot are located using mathematical morphology operations. Finally, the position of each spot is determined exactly by applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means (SFCM) clustering algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results show that the segmentation accuracy of the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively.

  1. Algorithms to automate gap-dependent integral tuning for the 2.8-meter long horizontal field undulator with a dynamic force compensation mechanism

    SciTech Connect

    Xu, Joseph Z. Vasserman, Isaac; Strelnikov, Nikita

    2016-07-27

    A 2.8-meter long horizontal field prototype undulator with a dynamic force compensation mechanism has been developed and tested at the Advanced Photon Source (APS) at Argonne National Laboratory (Argonne). The magnetic tuning of the undulator integrals has been automated and accomplished by applying magnetic shims. A detailed description of the algorithms and performance is reported.

  2. Creation and Validation of an Automated Algorithm to Determine Postoperative Ventilator Requirements After Cardiac Surgery.

    PubMed

    Gabel, Eilon; Hofer, Ira S; Satou, Nancy; Grogan, Tristan; Shemin, Richard; Mahajan, Aman; Cannesson, Maxime

    2017-05-01

    In medical practice today, clinical data registries have become a powerful tool for measuring and driving quality improvement, especially among multicenter projects. Registries face the known problem of trying to create dependable and clear metrics from electronic medical records data, which are typically scattered and often based on unreliable data sources. The Society for Thoracic Surgery (STS) is one such example, and it supports manually collected data by trained clinical staff in an effort to obtain the highest-fidelity data possible. As a possible alternative, our team designed an algorithm to test the feasibility of producing computer-derived data for the case of postoperative mechanical ventilation hours. In this article, we study and compare the accuracy of algorithm-derived mechanical ventilation data with manual data extraction. We created a novel algorithm that is able to calculate mechanical ventilation duration for any postoperative patient using raw data from our EPIC electronic medical record. Utilizing nursing documentation of airway devices, documentation of lines, drains, and airways, and respiratory therapist ventilator settings, the algorithm produced results that were then validated against the STS registry. This enabled us to compare our algorithm results with data collected by human chart review. Any discrepancies were then resolved with manual calculation by a research team member. The STS registry contained a total of 439 University of California Los Angeles cardiac cases from April 1, 2013, to March 31, 2014. After excluding 201 patients for not remaining intubated, tracheostomy use, or for having 2 surgeries on the same day, 238 cases met inclusion criteria. Comparing the postoperative ventilation durations between the 2 data sources resulted in 158 (66%) ventilation durations agreeing within 1 hour, indicating a probable correct value for both sources. Among the discrepant cases, the algorithm yielded results that were exclusively

  3. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    NASA Technical Reports Server (NTRS)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stage in the case of an aborted lunar landing. Each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper describes these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal is not merely to fly one specific canned rendezvous profile; rather, the suite was designed from the start to be general enough that any type of trajectory profile (e.g. a coelliptic profile, a stable orbit rendezvous profile, or an expedited LLO rendezvous profile) can be flown using the same set of algorithms. Each of these profiles makes use of maneuver types which have been designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  4. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    SciTech Connect

    Trujillo, Christopher J.

    2016-07-28

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such a method for edge detection and point extraction can prove advantageous in analyzing these unique datasets and provide consistency in producing results.

  5. Towards an intercomparison of automated registration algorithms for multiple source remote sensing data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Xia, Wei; Chettri, Samir; El-Ghazawi, Tarek; Kaymaz, Emre; Lerner, Bao-Ting; Mareboyana, Manohar; Netanyahu, Nathan; Pierce, John; Raghavan, Srini

    1997-01-01

    The first step in the integration of multiple data is registration, either relative image-to-image registration or absolute geo-registration, to a map or a fixed coordinate system. As the need for automating registration techniques is recognized, we feel that there is a need to survey all the registration methods which may be applicable to Earth and space science problems and to evaluate their performances on a large variety of existing remote sensing data as well as on simulated data of soon-to-be-flown instruments. In this paper we will describe: 1) the operational toolbox which we are developing and which will consist in some of the most important registration techniques; and 2) the quantitative intercomparison of the different methods, which will allow a user to select the desired registration technique based on this evaluation and the visualization of the registration results.

  6. Towards an intercomparison of automated registration algorithms for multiple source remote sensing data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Xia, Wei; Chettri, Samir; El-Ghazawi, Tarek; Kaymaz, Emre; Lerner, Bao-Ting; Mareboyana, Manohar; Netanyahu, Nathan; Pierce, John; Raghavan, Srini; Tilton, James C.; Campbell, William J.; Cromp, Robert F.

    1997-01-01

    The first step in the integration of multiple data is registration, either relative image-to-image registration or absolute geo-registration, to a map or a fixed coordinate system. As the need for automating registration techniques is recognized, we feel that there is a need to survey all the registration methods which may be applicable to Earth and space science problems and to evaluate their performances on a large variety of existing remote sensing data as well as on simulated data of soon-to-be-flown instruments. In this paper we will describe: 1) the operational toolbox which we are developing and which will consist in some of the most important registration techniques; and 2) the quantitative intercomparison of the different methods, which will allow a user to select the desired registration technique based on this evaluation and the visualization of the registration results.

  7. Automated EEG detection algorithms and clinical semiology in epilepsy: importance of correlations.

    PubMed

    Hogan, R Edward

    2011-12-01

    With advances in technological innovation, electroencephalography has remained the gold standard for classification and localization of epileptic seizures. Like other diagnostic modalities, technological advances have opened new avenues for assessment of data, and hold great promise to improve interpretive capabilities. However, proper overall interpretation and application of electroencephalographic findings relies on valid correlations of associated clinical semiology. This article addresses interpretation of clinical signs and symptoms in the context of the diagnostic predictive value of electroencephalographic, clinical, and electrographic definitions of seizures, and upcoming challenges of interpreting intracranial high-frequency electroencephalographic data. This article is part of a Supplemental Special Issue entitled The Future of Automated Seizure Detection and Prediction. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Automated analysis of Kokee-Wettzell Intensive VLBI sessions—algorithms, results, and recommendations

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2015-11-01

    The time-dependent variations in the rotation and orientation of the Earth are represented by a set of Earth Orientation Parameters (EOP). Currently, Very Long Baseline Interferometry (VLBI) is the only technique able to measure all EOP simultaneously and to provide direct observation of universal time, usually expressed as UT1-UTC. To produce estimates for UT1-UTC on a daily basis, 1-h VLBI experiments involving two or three stations are organised by the International VLBI Service for Geodesy and Astrometry (IVS), the IVS Intensive (INT) series. There is an ongoing effort to minimise the turn-around time for the INT sessions in order to achieve near real-time and high quality UT1-UTC estimates. As a step further towards true fully automated real-time analysis of UT1-UTC, we carry out an extensive investigation with INT sessions on the Kokee-Wettzell baseline. Our analysis starts with the first versions of the observational files in S- and X-band and includes an automatic group delay ambiguity resolution and ionospheric calibration. Several different analysis strategies are investigated. In particular, we focus on the impact of external information, such as meteorological and cable delay data provided in the station log-files, and a priori EOP information. The latter is studied by extensive Monte Carlo simulations. Our main findings are that it is easily possible to analyse the INT sessions in a fully automated mode to provide UT1-UTC with very low latency. The information found in the station log-files is important for the accuracy of the UT1-UTC results, provided that the data in the station log-files are reliable. Furthermore, to guarantee UT1-UTC with an accuracy of less than 20 μs, it is necessary to use predicted a priori polar motion data in the analysis that are not older than 12 h.

  9. Automated Algorithms to Identify Geostationary Satellites and Detect Mistagging using Concurrent Spatio-Temporal and Brightness Information

    NASA Astrophysics Data System (ADS)

    Dao, P.; Heinrich-Josties, E.; Boroson, T.

    2016-09-01

    Automated detection of changes of GEO satellites using photometry is fundamentally dependent on near real time association of non-resolved signatures and object identification. Non-statistical algorithms which rely on fixed positional boundaries for associating objects often result in mistags [1]. Photometry has been proposed to reduce the occurrence of mistags. In past attempts to include photometry, (1) the problem of correlation (with the catalog) has been decoupled from the photometry-based detection of change and mistagging, and (2) positional information has not been considered simultaneously with photometry. The technique used in this study addresses both problems. It takes advantage of the fusion of both types of information and processes all information concurrently in a single statistics-based framework. This study demonstrates with Las Cumbres Observatory Global Telescope Network (LCOGT) data that metric information, i.e. right ascension, declination, photometry and GP element set, can be used concurrently to confidently associate (identify) GEO objects. All algorithms can easily be put into a framework to process data in near-real-time.

  10. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data

    PubMed Central

    Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.

    2016-01-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
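
    Of the extracted features, crypt circularity is the one with a formula behind it: circularity = 4*pi*area/perimeter^2, equal to 1 for a perfect circle. A sketch using scikit-image region properties (the library choice and this reduced feature set are assumptions for the example, not the published QFEA code):

        import numpy as np
        from skimage import measure

        def crypt_features(binary_mask):
            """Per-crypt area and circularity from a binary segmentation mask.

            Circularity = 4*pi*area / perimeter**2 (1.0 for a perfect circle).
            """
            labels = measure.label(binary_mask)
            feats = []
            for region in measure.regionprops(labels):
                if region.perimeter > 0:
                    circ = 4.0 * np.pi * region.area / region.perimeter ** 2
                    feats.append({"area": region.area, "circularity": circ})
            return feats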

  11. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match is computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.

  12. An automated diagnosis system of liver disease using artificial immune and genetic algorithms.

    PubMed

    Liang, Chunlin; Peng, Lingxi

    2013-04-01

    The rise of health care costs is one of the world's most important problems. Disease prediction is also a vibrant research area. Researchers have approached this problem using various techniques such as support vector machines, artificial neural networks, etc. This study exploits the immune system's characteristics of learning and memory to solve the problem of liver disease diagnosis. The proposed system combines an artificial immune system with a genetic algorithm to diagnose liver disease. The system architecture is based on an artificial immune system, and the learning procedure adopts a genetic algorithm to steer the evolution of the antibody population. The experiments use two benchmark datasets acquired from the well-known UCI machine learning repository. The obtained diagnosis accuracies are very promising compared with other diagnosis systems in the literature. These results suggest that this system may be a useful automatic diagnosis tool for liver disease.

  13. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Traditional expert systems, such as diagnostic and training systems, interact with users only through a keyboard and screen, and are usually symbolic in nature. Expert systems that require access to data bases, complex simulations and real-time instrumentation have both symbolic as well as algorithmic computing needs. These needs could both be met using a general purpose workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by NASA Ames Research Center in conjunction with Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. This paper will explore the integration options, and present several possible solutions.

  14. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1988-01-01

    Expert systems that require access to data bases, complex simulations and real-time instrumentation have both symbolic and algorithmic needs. Both of these needs could be met using a general purpose workstation running both symbolic and algorithmic codes, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by the NASA Ames Research Center in conjunction with the Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. The integration options and several possible solutions are presented.

  15. On the Automated Segmentation of Epicardial and Mediastinal Cardiac Adipose Tissues Using Classification Algorithms.

    PubMed

    Rodrigues, Érick Oliveira; Cordeiro de Morais, Felipe Fernandes; Conci, Aura

    2015-01-01

    The quantification of fat depots on the surroundings of the heart is an accurate procedure for evaluating health risk factors correlated with several diseases. However, this type of evaluation is not widely employed in clinical practice due to the required human workload. This work proposes a novel technique for the automatic segmentation of cardiac fat pads. The technique is based on applying classification algorithms to the segmentation of cardiac CT images. Furthermore, we extensively evaluate the performance of several algorithms on this task and discuss which provided better predictive models. Experimental results have shown that the mean accuracy for the classification of epicardial and mediastinal fats has been 98.4% with a mean true positive rate of 96.2%. On average, the Dice similarity index, regarding the segmented patients and the ground truth, was equal to 96.8%. Therefore, our technique has achieved the most accurate results to date for the automatic segmentation of cardiac fats.
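
    The Dice similarity index reported above compares the automated segmentation with the ground truth as twice the overlap divided by the combined size of the two masks:

        import numpy as np

        def dice_index(segmented, ground_truth):
            """Dice similarity: 2 * |A intersect B| / (|A| + |B|), for boolean masks."""
            a = np.asarray(segmented, dtype=bool)
            b = np.asarray(ground_truth, dtype=bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0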

  16. MECH: Algorithms and Tools for Automated Assessment of Potential Attack Locations

    DTIC Science & Technology

    2015-10-06

    methodology, a software prototype was developed for a user to use an Android App to access MECH analytics algorithms that run on a server. The prototype...M), Emplacement (E), and Control (C) locations within a Halo-shaped (H) space. By fitting attack strategies into a mathematical optimization...Outcomes of the Executive Summary. Details on how to set parameters for the simulation are described in the user guide of the MECH software

  17. Automated interpretation of disk diffusion antibiotic susceptibility tests with the radial profile analysis algorithm.

    PubMed Central

    Hejblum, G; Jarlier, V; Grosset, J; Aurengo, A

    1993-01-01

    An original algorithm referred to as the radial profile analysis algorithm was implemented on a Macintosh Quadra 700 computer to provide an automatic determination of the inhibition zone diameters of antibiotic susceptibility tests performed with the disk diffusion method. After digitization of the petri plate image, each antibiotic disk is recognized and labeled. Pixels of the local zone around each disk are then used for generating a profile pattern that is subjected to decision rules. The resulting estimate of the inhibition zone diameter is then automatically compared with conventional breakpoints for classifying the tested strain in one of the clinical categories of antibiotic susceptibility. The program is also able to request a human reading for some rare plates difficult to interpret. The algorithm accuracy was tested by comparing the results with a combination of independent human measurements performed on the tested plates. The test sample was composed of 98 strains, and 2,552 tests of 40 distinct antibiotics were subjected to the analysis. The difference between the automatic and human diameter estimates was less than 4 mm in 90% of the tests. The agreement between the automatic and human clinical categorizations amounted to 95.5%, and severe (major and very major) disagreements were found in 5.6% of the tests performed with staphylococci but only 0.3% of the tests with gram-negative rods. We conclude that the radial profile analysis algorithm is a solid backbone for an automatic system dedicated to the clinical interpretation of disk diffusion antibiotic susceptibility tests. PMID:8408562

  18. Autoreject: Automated artifact rejection for MEG and EEG data.

    PubMed

    Jas, Mainak; Engemann, Denis A; Bekhti, Yousra; Raimondo, Federico; Gramfort, Alexandre

    2017-06-20

    We present an automated algorithm for unified rejection and repair of bad trials in magnetoencephalography (MEG) and electroencephalography (EEG) signals. Our method capitalizes on cross-validation in conjunction with a robust evaluation metric to estimate the optimal peak-to-peak threshold - a quantity commonly used for identifying bad trials in M/EEG. This approach is then extended to a more sophisticated algorithm which estimates this threshold for each sensor yielding trial-wise bad sensors. Depending on the number of bad sensors, the trial is then repaired by interpolation or by excluding it from subsequent analysis. All steps of the algorithm are fully automated thus lending itself to the name Autoreject. In order to assess the practical significance of the algorithm, we conducted extensive validation and comparisons with state-of-the-art methods on four public datasets containing MEG and EEG recordings from more than 200 subjects. The comparisons include purely qualitative efforts as well as quantitatively benchmarking against human supervised and semi-automated preprocessing pipelines. The algorithm allowed us to automate the preprocessing of MEG data from the Human Connectome Project (HCP) going up to the computation of the evoked responses. The automated nature of our method minimizes the burden of human inspection, hence supporting scalability and reliability demanded by data analysis in modern neuroscience. Copyright © 2017 Elsevier Inc. All rights reserved.
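
    The central idea, pick the peak-to-peak rejection threshold by cross-validation rather than by hand, can be sketched independently of the library: for each candidate threshold, trials exceeding it are dropped on the training folds and the threshold whose retained-trial mean best matches the held-out mean wins. Plain RMSE stands in here for the paper's more robust evaluation metric, and the single-sensor setting is a simplification.

        import numpy as np

        def cv_peak_to_peak_threshold(epochs, candidates, n_folds=5, seed=0):
            """Pick a peak-to-peak rejection threshold by cross-validation.

            epochs: array (n_trials, n_times) for one sensor. For each candidate
            threshold, trials whose peak-to-peak amplitude exceeds it are rejected
            on the training folds; the threshold minimizing the RMSE between the
            retained training mean and the held-out mean is returned.
            """
            rng = np.random.default_rng(seed)
            folds = np.array_split(rng.permutation(len(epochs)), n_folds)
            ptp = epochs.max(axis=1) - epochs.min(axis=1)
            scores = []
            for thresh in candidates:
                errs = []
                for k in range(n_folds):
                    test = folds[k]
                    train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
                    kept = train[ptp[train] <= thresh]
                    if len(kept) == 0:
                        errs.append(np.inf)
                        continue
                    errs.append(np.sqrt(np.mean((epochs[kept].mean(0) - epochs[test].mean(0)) ** 2)))
                scores.append(np.mean(errs))
            return candidates[int(np.argmin(scores))]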

  19. Weakly supervised classification in high energy physics

    NASA Astrophysics Data System (ADS)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
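
    A minimal sketch of the weak-supervision idea follows: a logistic model is fit so that its mean prediction over each batch matches that batch's known class proportion. This is an illustration of the concept with a hypothetical squared-error proportion loss and plain gradient descent, not the authors' implementation.

      # Hedged sketch: only the class proportion of each batch is known, so the
      # model is trained by matching its mean prediction per batch to that
      # proportion.  Illustrative only.
      import numpy as np

      def fit_weakly_supervised(batches, proportions, n_iter=2000, lr=0.5):
          """batches: list of (n_i, d) arrays; proportions: known positive fraction per batch."""
          d = batches[0].shape[1]
          w, b = np.zeros(d), 0.0
          for _ in range(n_iter):
              gw, gb = np.zeros(d), 0.0
              for X, p in zip(batches, proportions):
                  s = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # per-sample predictions
                  diff = s.mean() - p                       # proportion mismatch
                  grad_s = s * (1.0 - s) / len(X)           # d(mean s)/d(logit)
                  gw += 2.0 * diff * (grad_s @ X)
                  gb += 2.0 * diff * grad_s.sum()
              w -= lr * gw
              b -= lr * gb
          return w, b

      if __name__ == "__main__":
          rng = np.random.default_rng(1)

          def make_batch(p, n=200):
              y = rng.random(n) < p
              return rng.normal(0, 1, size=(n, 2)) + np.where(y[:, None], 1.5, -1.5)

          props = [0.2, 0.5, 0.8]
          w, b = fit_weakly_supervised([make_batch(p) for p in props], props)
          print("learned weights:", w, "bias:", round(b, 3))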

  20. The automated reference toolset: A soil-geomorphic ecological potential matching algorithm

    USGS Publications Warehouse

    Nauman, Travis; Duniway, Michael C.

    2016-01-01

    Ecological inventory and monitoring data need referential context for interpretation. Identification of appropriate reference areas of similar ecological potential for site comparison is demonstrated using a newly developed automated reference toolset (ART). Foundational to identification of reference areas was a soil map of particle size in the control section (PSCS), a theme in US Soil Taxonomy. A 30-m resolution PSCS map of the Colorado Plateau (366,000 km2) was created by interpolating ∼5000 field soil observations using a random forest model and a suite of raster environmental spatial layers representing topography, climate, general ecological community, and satellite imagery ratios. The PSCS map had overall out of bag accuracy of 61.8% (Kappa of 0.54, p < 0.0001), and an independent validation accuracy of 93.2% at a set of 356 field plots along the southern edge of Canyonlands National Park, Utah. The ART process was also tested at these plots, and matched plots with the same ecological sites (ESs) 67% of the time where sites fell within 2-km buffers of each other. These results show that the PSCS and ART have strong application for ecological monitoring and sampling design, as well as assessing impacts of disturbance and land management action using an ecological potential framework. Results also demonstrate that PSCS could be a key mapping layer for the USDA-NRCS provisional ES development initiative.

  1. An automated sawtooth detection algorithm for strongly varying plasma conditions and crash characteristics

    NASA Astrophysics Data System (ADS)

    Gude, A.; Maraschek, M.; Kardaun, O.; the ASDEX Upgrade Team

    2017-09-01

    A sawtooth crash algorithm that can automatically detect irregular sawteeth with strongly varying crash characteristics, including inverted crashes with central signal increase, has been developed. Such sawtooth behaviour is observed in ASDEX Upgrade with its tungsten wall, especially in phases with central ECRH. This application of ECRH for preventing impurity accumulation is envisaged also for ITER. The detection consists of three steps: a sensitive edge detection, a multichannel combination to increase detection performance, and a profile analysis that tests generic sawtooth crash features. The effect of detection parameters on the edge detection results has been investigated using synthetic signals and tested in an application to ASDEX Upgrade soft x-ray data.

  2. SU-E-T-427: Feasibility Study for Evaluation of IMRT Dose Distribution Using Geant4-Based Automated Algorithms

    SciTech Connect

    Choi, H; Shin, W; Testa, M; Min, C; Kim, J

    2015-06-15

    Purpose: For intensity-modulated radiation therapy (IMRT) treatment planning validation using Monte Carlo (MC) simulations, a precise and automated procedure is necessary to evaluate the patient dose distribution. The aim of this study is to develop an automated algorithm for IMRT simulations using DICOM files and to evaluate the patient dose based on 4D simulation using the Geant4 MC toolkit. Methods: The head of a clinical linac (Varian Clinac 2300 IX) was modeled in Geant4 along with particular components such as the flattening filter and the multi-leaf collimator (MLC). Patient information and the position of the MLC were imported from the DICOM-RT interface. For each position of the MLC, a step-and-shoot technique was adopted. PDDs and lateral profiles were simulated in a water phantom (50×50×40 cm{sup 3}) and compared to measurement data. We also used a lung phantom, and the MC dose calculations were compared to the clinical treatment planning used at Seoul National University Hospital. Results: In order to reproduce the measurement data, we tuned three free parameters: mean and standard deviation of the primary electron beam energy and the beam spot size. These parameters for 6 MV were found to be 5.6 MeV, 0.2378 MeV and 1 mm FWHM, respectively. The average dose difference between measurements and simulations was less than 2% for PDDs and radial profiles. The lung phantom study showed fairly good agreement between MC and planning dose despite some unavoidable statistical fluctuation. Conclusion: The current feasibility study using the lung phantom shows the potential for IMRT dose validation with 4D MC simulations based on the Geant4 toolkit. This research was supported by the Korea Institute of Nuclear Safety and Development of Measurement Standards for Medical Radiation funded by the Korea Research Institute of Standards and Science. (KRISS-2015-15011032)

  3. Automated algorithm for mapping regions of cold-air pooling in complex terrain

    NASA Astrophysics Data System (ADS)

    Lundquist, Jessica D.; Pepin, Nicholas; Rochford, Caitlin

    2008-11-01

    In complex terrain, air in contact with the ground becomes cooled from radiative energy loss on a calm clear night and, being denser than the free atmosphere at the same elevation, sinks to valley bottoms. Cold-air pooling (CAP) occurs where this cooled air collects on the landscape. This article focuses on identifying locations on a landscape subject to considerably lower minimum temperatures than the regional average during conditions of clear skies and weak synoptic-scale winds, providing a simple automated method to map locations where cold air is likely to pool. Digital elevation models of regions of complex terrain were used to derive surfaces of local slope, curvature, and percentile elevation relative to surrounding terrain. Each pixel was classified as prone to CAP, not prone to CAP, or exhibiting no signal, based on the criterion that CAP occurs in regions with flat slopes in local depressions or valleys (negative curvature and low percentile). Along-valley changes in the topographic amplification factor (TAF) were then calculated to determine whether the cold air in the valley was likely to drain or pool. Results were checked against distributed temperature measurements in Loch Vale, Rocky Mountain National Park, Colorado; in the Eastern Pyrenees, France; and in Yosemite National Park, Sierra Nevada, California. Using CAP classification to interpolate temperatures across complex terrain resulted in improvements in root-mean-square errors compared to more basic interpolation techniques at most sites within the three areas examined, with average error reductions of up to 3°C at individual sites and about 1°C averaged over all sites in the study areas.
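
    The pixel classification rule described above can be sketched briefly; the slope, curvature and percentile thresholds below are hypothetical, chosen only to illustrate the three-way labelling (prone to CAP, not prone, no signal).

      # Hedged sketch of the CAP rule: flat pixels in local depressions
      # (negative curvature, low percentile elevation) are flagged as CAP-prone.
      # Thresholds are illustrative placeholders.
      import numpy as np

      def classify_cap(slope_deg, curvature, pct_elev,
                       flat_max=5.0, curv_max=0.0, pct_max=0.3):
          """Return an integer raster: 1 = CAP-prone, -1 = not prone, 0 = no signal."""
          cap = np.zeros(slope_deg.shape, dtype=np.int8)
          prone = (slope_deg < flat_max) & (curvature < curv_max) & (pct_elev < pct_max)
          not_prone = (slope_deg >= flat_max) | (pct_elev > 1.0 - pct_max)
          cap[prone] = 1
          cap[not_prone & ~prone] = -1
          return cap

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          shape = (4, 4)
          print(classify_cap(rng.uniform(0, 30, shape),
                             rng.normal(0, 1, shape),
                             rng.uniform(0, 1, shape)))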

  4. Unsupervised parameter optimization for automated retention time alignment of severely shifted gas chromatographic data using the piecewise alignment algorithm.

    SciTech Connect

    Pierce, Karisa M.; Wright, Bob W.; Synovec, Robert E.

    2007-02-02

    First, simulated chromatographic separations with declining retention time precision were used to study the performance of the piecewise retention time alignment algorithm and to demonstrate an unsupervised parameter optimization method. The average correlation coefficient between the first chromatogram and every other chromatogram in the data set was used to optimize the alignment parameters. This correlation method does not require a training set, so it is unsupervised and automated. This frees the user from needing to provide class information and makes the alignment algorithm more generally applicable to classifying completely unknown data sets. For a data set of simulated chromatograms where the average chromatographic peak was shifted past two neighboring peaks between runs, the average correlation coefficient of the raw data was 0.46 ± 0.25. After automated, optimized piecewise alignment, the average correlation coefficient was 0.93 ± 0.02. Additionally, a relative shift metric and principal component analysis (PCA) were used to independently quantify and categorize the alignment performance, respectively. The relative shift metric was defined as four times the standard deviation of a given peak’s retention time in all of the chromatograms, divided by the peak-width-at-base. The raw simulated data sets that were studied contained peaks with average relative shifts ranging between 0.3 and 3.0. Second, a “real” data set of gasoline separations was gathered using three different GC methods to induce severe retention time shifting. In these gasoline separations, retention time precision improved ~8 fold following alignment. Finally, piecewise alignment and the unsupervised correlation optimization method were applied to severely shifted GC separations of reformate distillation fractions. The effect of piecewise alignment on peak heights and peak areas is also reported. Piecewise alignment either did not change the peak height, or caused it to slightly
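
    The unsupervised correlation metric is simple to illustrate: for each candidate alignment parameter, align all runs, compute the average correlation coefficient between the first chromatogram and every other one, and keep the parameter that maximizes it. In the sketch below the "alignment" is just a global integer shift standing in for the piecewise algorithm, and the synthetic chromatograms are assumptions.

      # Hedged sketch of the unsupervised optimization metric described above.
      import numpy as np

      def average_correlation(chroms):
          ref = chroms[0]
          return np.mean([np.corrcoef(ref, c)[0, 1] for c in chroms[1:]])

      def best_shift(chroms, max_shift=20):
          """Pick the integer shift (applied to all non-reference runs) that
          maximizes the average correlation to the first chromatogram."""
          scores = {}
          for s in range(-max_shift, max_shift + 1):
              shifted = [chroms[0]] + [np.roll(c, s) for c in chroms[1:]]
              scores[s] = average_correlation(shifted)
          return max(scores, key=scores.get), scores

      if __name__ == "__main__":
          t = np.linspace(0, 1, 500)

          def peak(mu):
              return np.exp(-0.5 * ((t - mu) / 0.01) ** 2)

          ref = peak(0.3) + peak(0.6)
          shifted_runs = [np.roll(ref, 7) for _ in range(4)]   # all runs late by 7 points
          s, scores = best_shift([ref] + shifted_runs)
          print("optimal shift:", s, "avg r:", round(scores[s], 3))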

  5. Automated Conflict Resolution For Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz

    2005-01-01

    The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be validated under realistic traffic conditions to demonstrate that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.

  6. Automated detection of lung nodules in CT images using shape-based genetic algorithm.

    PubMed

    Dehmeshki, Jamshid; Ye, Xujiong; Lin, Xinyu; Valdivieso, Manlio; Amin, Hamdan

    2007-09-01

    A shape-based genetic algorithm template-matching (GATM) method is proposed for the detection of nodules with spherical elements. A spherical-oriented convolution-based filtering scheme is used as a pre-processing step for enhancement. To define the fitness function for GATM, a 3D geometric shape feature is calculated at each voxel and then combined into a global nodule intensity distribution. Lung nodule phantom images are used as reference images for template matching. The proposed method has been validated on a clinical dataset of 70 thoracic CT scans (involving 16,800 CT slices) that contains 178 nodules as a gold standard. A total of 160 nodules were correctly detected by the proposed method and resulted in a detection rate of about 90%, with the number of false positives at approximately 14.6/scan (0.06/slice). The high-detection performance of the method suggested promising potential for clinical applications.

  7. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.

  8. Automated identification of depsipeptide natural products by an informatic search algorithm.

    PubMed

    Skinnider, Michael A; Johnston, Chad W; Zvanych, Rostyslav; Magarvey, Nathan A

    2015-01-19

    Nonribosomal depsipeptides are a class of potent microbial natural products, which include several clinically approved pharmaceutical agents. Genome sequencing has revealed a large number of uninvestigated natural-product biosynthetic gene clusters. However, while novel informatic search methods to access these gene clusters have been developed to identify peptide natural products, depsipeptide detection has proven challenging. Herein, we present an improved version of our informatic search algorithm for natural products (iSNAP), which facilitates the detection of known and genetically predicted depsipeptides in complex microbial culture extracts. We validated this technology by identifying several depsipeptides from novel producers, and located a large number of novel depsipeptide gene clusters for future study. This approach highlights the value of chemoinformatic search methods for the discovery of genetically encoded metabolites by targeting specific areas of chemical space.

  9. Image processing algorithms for automated analysis of GMR data from inspection of multilayer structures

    NASA Astrophysics Data System (ADS)

    Karpenko, Oleksii; Safdernejad, Seyed; Dib, Gerges; Udpa, Lalita; Udpa, Satish; Tamburrino, Antonello

    2015-03-01

    Eddy current probes (EC) with Giant Magnetoresistive (GMR) sensors have recently emerged as a promising tool for rapid scanning of multilayer aircraft panels that helps detect cracks under fastener heads. However, analysis of GMR data is challenging due to the complexity of sensed magnetic fields. Further, probes that induce unidirectional currents are insensitive to cracks parallel to the current flow. In this paper, signal processing algorithms are developed for mixing data from two orthogonal EC-GMR scans in order to generate pseudo-rotating electromagnetic field images of fasteners with bottom layer cracks. Finite element simulations demonstrate that the normal component of numerically computed rotating field has uniform sensitivity to cracks emanating in all radial directions. The concept of pseudo-rotating field imaging is experimentally validated with the help of MAUS bilateral GMR array (Big-MR) designed by Boeing.

  10. Recognition of pharmaceuticals with compact mini-Raman-spectrometer and automated pattern recognition algorithms

    NASA Astrophysics Data System (ADS)

    Jähme, Hendrik; Di Florio, Giuseppe; Conti Nibali, Valeria; Esen, Cemal; Ostendorf, Andreas; Grafen, Markus; Henke, Erich; Soetebier, Jens; Brenner, Carsten; Havenith, Martina; Hofmann, Martin R.

    2016-04-01

    Robust classification of pharmaceuticals in an industrial process is an important step for validation of the final product. Especially for pharmaceuticals with similar visual appearance, quality control is only possible if a reliable algorithm based on easily obtainable spectroscopic data is available. We used Principal Component Analysis (PCA) and Support Vector Machines (SVM) on Raman spectroscopy data from a compact Raman system to classify several look-alike pharmaceuticals. This paper describes the data gathering and analysis process to robustly discriminate 19 different pharmaceuticals with similar visual appearance. With the described process, we successfully identified all given pharmaceuticals that contained a significant amount of active ingredients. Automatic validation of these pharmaceuticals in a process can thus be used to prevent wrong administration of look-alike drugs in an industrial setting, e.g., patient-individual blistering.
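
    A minimal PCA-plus-SVM pipeline of the kind described can be sketched with scikit-learn on synthetic spectra; the number of components, kernel and data dimensions are assumptions, not the authors' settings.

      # Hedged sketch of a PCA + SVM spectral classifier on synthetic "spectra".
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      n_classes, n_per_class, n_wavenumbers = 5, 40, 600
      peaks = rng.uniform(0, 1, size=(n_classes, n_wavenumbers))   # class "fingerprints"
      X = np.vstack([p + rng.normal(0, 0.05, size=(n_per_class, n_wavenumbers))
                     for p in peaks])
      y = np.repeat(np.arange(n_classes), n_per_class)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                random_state=0, stratify=y)
      clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                          SVC(kernel="rbf", C=10.0))
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))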

  11. Characterization and classification of adherent cells in monolayer culture using automated tracking and evolutionary algorithms.

    PubMed

    Zhang, Zhen; Bedder, Matthew; Smith, Stephen L; Walker, Dawn; Shabir, Saqib; Southgate, Jennifer

    2016-08-01

    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A system of cell tracking employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24 h period. Subsequent analysis following feature extraction demonstrated the ability of the technique to successfully separate the modulated classes of cell using evolutionary algorithms. Specifically, a Cartesian Genetic Program (CGP) network was evolved that identified average migration speed, in-contact angular velocity, cohesivity and average cell clump size as the principal features contributing to the separation. Our approach not only provides non-biased and parsimonious insight into modulated class behaviours, but can be extracted as mathematical formulae for the parameterization of computational models. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  12. An algorithmic scheme for the automated calculation of fiber orientations in arterial walls

    NASA Astrophysics Data System (ADS)

    Fausten, Simon; Balzani, Daniel; Schröder, Jörg

    2016-11-01

    We propose an algorithmic scheme for the numerical calculation of fiber orientations in arterial walls. The basic assumption behind the procedure is that the fiber orientations are mainly governed by the principal tensile stress directions resulting in an improved load transfer within the artery as a consequence of the redistribution of stresses. This reflects the biological motivation that soft tissues continuously adapt to their mechanical environment in order to optimize their load-bearing capacities. The algorithmic scheme proposed here enhances efficiency of the general procedure given in Hariton et al. (Biomech Model Mechanobiol 6(3):163-175, 2007), which consists of repeatedly identifying a favored fiber orientation based on the principal tensile stresses under a certain loading scenario, and then re-calculating the stresses for that loading scenario with the modified favored fiber orientation. Since the method still depends on a highly accurate stress approximation of the finite element formulation, which is not straightforward to obtain in particular for incompressible and highly anisotropic materials, furthermore, a modified model is introduced. This model defines the favored fiber orientation not only in terms of the local principal stresses, but in terms of the volume averages of the principal stresses computed over individual finite elements. Thereby, the influence of imperfect stress approximations can be weakened leading to a stabilized convergence of the reorientation procedure and a more reasonable fiber orientation with less numerical noise. The performance of the proposed fiber reorientation scheme is investigated with respect to different finite element formulations and different favored fiber orientation models, Hariton et al. (Biomech Model Mechanobiol 6(3):163-175, 2007) and Cyron and Humphrey (Math Mech Solids 1-17, 2014). In addition, it is applied to calculate the fiber orientation in a patient-specific arterial geometry.
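
    The reorientation step can be illustrated with a small sketch: the element-averaged stress tensor is diagonalized and the two fiber families are placed in the plane of the two largest principal tensile directions, at an angle set here by the principal-stress ratio (tan γ = σ2/σ1, used as an illustrative assumption for the favored-orientation rule rather than a statement of the authors' model).

      # Hedged sketch of one reorientation step under the stated assumption.
      import numpy as np

      def favored_fiber_directions(sigma):
          """sigma: symmetric 3x3 Cauchy stress tensor (element volume average)."""
          vals, vecs = np.linalg.eigh(sigma)          # ascending eigenvalues
          s1, s2 = vals[2], vals[1]                   # two largest principal stresses
          e1, e2 = vecs[:, 2], vecs[:, 1]             # corresponding directions
          gamma = np.arctan2(s2, s1)                  # angle from dominant direction
          a = np.cos(gamma) * e1 + np.sin(gamma) * e2
          b = np.cos(gamma) * e1 - np.sin(gamma) * e2
          return a / np.linalg.norm(a), b / np.linalg.norm(b)

      if __name__ == "__main__":
          sigma = np.diag([120.0, 60.0, 5.0])         # kPa, illustrative values
          f1, f2 = favored_fiber_directions(sigma)
          print("fiber family 1:", np.round(f1, 3))
          print("fiber family 2:", np.round(f2, 3))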

  13. A Computer-Based Automated Algorithm for Assessing Acinar Cell Loss after Experimental Pancreatitis

    PubMed Central

    Eisses, John F.; Davis, Amy W.; Tosun, Akif Burak; Dionise, Zachary R.; Chen, Cheng; Ozolek, John A.; Rohde, Gustavo K.; Husain, Sohail Z.

    2014-01-01

    The change in exocrine mass is an important parameter to follow in experimental models of pancreatic injury and regeneration. However, at present, the quantitative assessment of exocrine content by histology is tedious and operator-dependent, requiring manual assessment of acinar area on serial pancreatic sections. In this study, we utilized a novel computer-generated learning algorithm to construct an accurate and rapid method of quantifying acinar content. The algorithm works by learning differences in pixel characteristics from input examples provided by human experts. H&E-stained pancreatic sections were obtained in mice recovering from a 2-day, hourly caerulein hyperstimulation model of experimental pancreatitis. For training data, a pathologist carefully outlined discrete regions of acinar and non-acinar tissue in 21 sections at various stages of pancreatic injury and recovery (termed the “ground truth”). After the expert defined the ground truth, the computer was able to develop a prediction rule that was then applied to a unique set of high-resolution images in order to validate the process. For baseline, non-injured pancreatic sections, the software demonstrated close agreement with the ground truth in identifying baseline acinar tissue area with only a difference of 1%±0.05% (p = 0.21). Within regions of injured tissue, the software reported a difference of 2.5%±0.04% in acinar area compared with the pathologist (p = 0.47). Surprisingly, on detailed morphological examination, the discrepancy was primarily because the software outlined acini and excluded inter-acinar and luminal white space with greater precision. The findings suggest that the software will be of great potential benefit to both clinicians and researchers in quantifying pancreatic acinar cell flux in the injured and recovering pancreas. PMID:25343460

  14. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    NASA Astrophysics Data System (ADS)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as National Land Cover Database and percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster format data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.

  15. 3D position of radiation sources using an automated gamma camera and ML algorithm with energy-dependent response functions

    NASA Astrophysics Data System (ADS)

    Lee, Wonho; Wehe, David

    2004-09-01

    Portable γ-ray imaging systems operating from 100 keV to 3 MeV are used in nuclear medicine, astrophysics and industrial applications. 2D images of γ-rays are common in many fields using radiation-detection systems (Appl. Opt. 17 (3) (1978) 337; IEEE Trans. Nucl. Sci. NS-31 (1984) 771; IEEE Trans. Nucl. Sci. NS-44 (3) (1997) 911). In this work, the 3D position of a radiation source is determined by a portable gamma-ray imaging system. 2D gamma-ray images were obtained from different positions of the gamma camera and the third dimension, the distance between the detector and the radiation source, was calculated using triangulation. The imaging system consists of a 4×4 array of CsI(Tl) detectors coupled to photodiode detectors that are mounted on an automated table which can precisely position the angular axis of the camera. Lead shields the detector array from the background radiation. Additionally, a CCD camera is attached to the top of the gamma camera and provides coincident 2D visual information. The inferred distances from the center of the two measurement points and a radiation source had less than a 3% error within a range of 3 m. The radiation image from the gamma camera and the visual image from CCD camera are superimposed into one combined image using a maximum-likelihood (ML) algorithm to make the image more precise. The response functions for the ML algorithm depend on the energy of incident radiation, and are obtained from both experiments and simulations. The energy-dependent response functions are shown to yield better imaging performance compared with the fixed energy response function commonly used previously.
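
    The triangulation step can be sketched compactly: given the two camera positions and the measured unit bearing vectors toward the source, the source location is estimated by a least-squares intersection of the two rays. The geometry values below are illustrative.

      # Hedged sketch of triangulation from two bearing measurements.
      import numpy as np

      def triangulate(p1, d1, p2, d2):
          """p1, p2: camera positions; d1, d2: unit direction vectors to the source."""
          # Solve for ray parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
          A = np.column_stack((d1, -d2))
          t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
          q1 = p1 + t[0] * d1
          q2 = p2 + t[1] * d2
          return 0.5 * (q1 + q2)                      # midpoint of closest approach

      if __name__ == "__main__":
          source = np.array([1.0, 2.5, 0.5])          # metres, illustrative
          p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
          d1 = (source - p1) / np.linalg.norm(source - p1)
          d2 = (source - p2) / np.linalg.norm(source - p2)
          print("estimated source position:", np.round(triangulate(p1, d1, p2, d2), 3))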

  16. Validation of an Automated Cough Detection Algorithm for Tracking Recovery of Pulmonary Tuberculosis Patients

    PubMed Central

    Larson, Sandra; Comina, Germán; Gilman, Robert H.; Tracey, Brian H.; Bravard, Marjory; López, José W.

    2012-01-01

    Background A laboratory-free test for assessing recovery from pulmonary tuberculosis (TB) would be extremely beneficial in regions of the world where laboratory facilities are lacking. Our hypothesis is that analysis of cough sound recordings may provide such a test. In the current paper, we present validation of a cough analysis tool. Methodology/Principal Findings Cough data was collected from a cohort of TB patients in Lima, Peru and 25.5 hours of recordings were manually annotated by clinical staff. Analysis software was developed and validated by comparison to manual scoring. Because many patients cough in bursts, coughing was characterized in terms of cough epochs. Our software correctly detects 75.5% of cough episodes with a specificity of 99.6% (comparable to past results using the same definition) and a median false positive rate of 4 false positives/hour, due to the noisy, real-world nature of our dataset. We then manually review detected coughs to eliminate false positives, in effect using the algorithm as a pre-screening tool that reduces reviewing time to roughly 5% of the recording length. This cough analysis approach provides a foundation to support larger-scale studies of coughing rates over time for TB patients undergoing treatment. PMID:23071550

  17. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    PubMed

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis.

  18. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    PubMed

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

    Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under Matlab and the open-source Python software. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing amplitudes and kinetics of events.
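
    A minimal sketch of such a detection pipeline is shown below: spatio-temporal Gaussian smoothing of a (time, y, x) stack, thresholding of the ΔF/F0 image against baseline noise, and connected-component labelling to localize event sites. The filter widths, threshold and synthetic data are assumptions; this is not the published routine.

      # Hedged sketch of a local-event detection pipeline with SciPy.
      import numpy as np
      from scipy import ndimage

      def detect_events(stack, baseline_frames=20, sigma=(1.0, 1.5, 1.5), z_thresh=5.0):
          f0 = stack[:baseline_frames].mean(axis=0)               # per-pixel baseline
          dff = (stack - f0) / (f0 + 1e-9)                        # delta-F over F0
          smooth = ndimage.gaussian_filter(dff, sigma=sigma)      # (t, y, x) smoothing
          noise = smooth[:baseline_frames].std()
          mask = smooth > z_thresh * noise
          labels, n = ndimage.label(mask)
          centroids = ndimage.center_of_mass(mask.astype(float), labels, range(1, n + 1))
          return n, centroids

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          stack = rng.normal(100.0, 1.0, size=(200, 64, 64))
          stack[80:90, 30:34, 40:44] += 25.0                      # one synthetic "puff"
          n, centroids = detect_events(stack)
          print("events detected:", n, "first centroid (t, y, x):",
                tuple(round(c, 1) for c in centroids[0]))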

  19. MaxBin 2.0: an automated binning algorithm to recover genomes from multiple metagenomic datasets.

    PubMed

    Wu, Yu-Wei; Simmons, Blake A; Singer, Steven W

    2016-02-15

    The recovery of genomes from metagenomic datasets is a critical step to defining the functional roles of the underlying uncultivated populations. We previously developed MaxBin, an automated binning approach for high-throughput recovery of microbial genomes from metagenomes. Here we present an expanded binning algorithm, MaxBin 2.0, which recovers genomes from co-assembly of a collection of metagenomic datasets. Tests on simulated datasets revealed that MaxBin 2.0 is highly accurate in recovering individual genomes, and the application of MaxBin 2.0 to several metagenomes from environmental samples demonstrated that it could achieve two complementary goals: recovering more bacterial genomes compared to binning a single sample as well as comparing the microbial community composition between different sampling environments. MaxBin 2.0 is freely available at http://sourceforge.net/projects/maxbin/ under BSD license. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Her2 Challenge Contest: A Detailed Assessment of Automated Her2 Scoring Algorithms in Whole Slide Images of Breast Cancer Tissues.

    PubMed

    Qaiser, Talha; Mukherjee, Abhik; Reddy Pb, Chaitanya; Munugoti, Sai Dileep; Tallam, Vamsi; Pitkäaho, Tomi; Lehtimäki, Taina; Naughton, Thomas; Berseth, Matt; Pedraza, Aníbal; Mukundan, Ramakrishnan; Smith, Matthew; Bhalerao, Abhir; Rodner, Erik; Simon, Marcel; Denzler, Joachim; Huang, Chao-Hui; Bueno, Gloria; Snead, David; Ellis, Ian O; Ilyas, Mohammad; Rajpoot, Nasir

    2017-08-03

    Evaluating expression of the Human epidermal growth factor receptor 2 (Her2) by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa due to its recognised importance as a predictive and prognostic marker in clinical practice. However, visual scoring of Her2 is subjective and consequently prone to inter-observer variability. Given the prognostic and therapeutic implications of Her2 scoring, a more objective method is required. In this paper, we report on a recent automated Her2 scoring contest, held in conjunction with the annual PathSoc meeting held in Nottingham in June 2016, aimed at systematically comparing and advancing the state-of-the-art Artificial Intelligence (AI)-based automated methods for Her2 scoring. The contest dataset comprised digitised whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both Haematoxylin & Eosin (H&E) and IHC for Her2. The contesting algorithms automatically predicted scores of the IHC slides for an unseen subset of the dataset and the predicted scores were compared with the "ground truth" (a consensus score from at least two experts). We also report on a simple Man vs Machine contest for the scoring of Her2 and show that the automated methods could beat the pathology experts on this contest dataset. This paper presents a benchmark for comparing the performance of automated algorithms for scoring of Her2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring. This article is protected by copyright. All rights reserved.

  1. Developmental supervision for nurses.

    PubMed

    Donnelly, G; Brooks, P

    2001-01-01

    Developmental supervision is gaining increasing recognition in nursing as a form of clinical supervision that will promote professional growth and ultimately lead to improved patient care. Benner's (1984) model of career development is used as a framework in which to examine appropriate forms of supervision for each developmental stage. Directive, collaborative and non-directive supervision are applied to each of these developmental levels.

  2. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    SciTech Connect

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu; Pradier, Olivier; Cheze Le Rest, Catherine

    2015-10-15

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
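
    For orientation, a standard two-class fuzzy C-means on voxel intensities, the core that SPEQTACLE generalizes, can be sketched as follows; the norm estimation and other refinements of the actual method are not reproduced, and the synthetic intensities are assumptions.

      # Hedged sketch of standard fuzzy C-means (FCM) on 1-D intensities.
      import numpy as np

      def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
          """x: 1-D array of intensities; returns (memberships, centroids)."""
          rng = np.random.default_rng(seed)
          u = rng.random((len(x), n_clusters))
          u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
          for _ in range(n_iter):
              um = u ** m
              c = (um.T @ x) / um.sum(axis=0)               # weighted centroids
              d = np.abs(x[:, None] - c[None, :]) + 1e-12   # distances to centroids
              u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
              u_new /= u_new.sum(axis=1, keepdims=True)
              if np.max(np.abs(u_new - u)) < tol:
                  u = u_new
                  break
              u = u_new
          return u, c

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          intensities = np.concatenate([rng.normal(2.0, 0.5, 4000),   # background SUV
                                        rng.normal(8.0, 1.5, 500)])   # lesion SUV
          u, c = fuzzy_cmeans(intensities)
          print("centroids:", np.round(np.sort(c), 2))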

  3. Automated Means of Identifying Landslide Deposits using LiDAR Data using the Contour Connection Method Algorithm

    NASA Astrophysics Data System (ADS)

    Olsen, M. J.; Leshchinsky, B. A.; Tanyu, B. F.

    2014-12-01

    Landslides are a global natural hazard, resulting in severe economic, environmental and social impacts every year. Often, landslides occur in areas of repeated slope instability, but despite these trends, significant residential developments and critical infrastructure are built in the shadow of past landslide deposits and marginally stable slopes. These hazards, despite their sometimes enormous scale and regional propensity, however, are difficult to detect on the ground, often due to vegetative cover. However, new developments in remote sensing technology, specifically Light Detection and Ranging mapping (LiDAR) are providing a new means of viewing our landscape. Airborne LiDAR, combined with a level of post-processing, enable the creation of spatial data representative of the earth beneath the vegetation, highlighting the scars of unstable slopes of the past. This tool presents a revolutionary technique to mapping landslide deposits and their associated regions of risk; yet, their inventorying is often done manually, an approach that can be tedious, time-consuming and subjective. However, the associated LiDAR bare earth data present the opportunity to use this remote sensing technology and typical landslide geometry to create an automated algorithm that can detect and inventory deposits on a landscape scale. This algorithm, called the Contour Connection Method (CCM), functions by first detecting steep gradients, often associated with the headscarp of a failed hillslope, and initiating a search, highlighting deposits downslope of the failure. Based on input of search gradients, CCM can assist in highlighting regions identified as landslides consistently on a landscape scale, capable of mapping more than 14,000 hectares rapidly (<30 minutes). CCM has shown preliminary agreement with manual landslide inventorying in Oregon's Coast Range, realizing almost 90% agreement with inventorying performed by a trained geologist. The global threat of landslides necessitates

  4. Automated guidance algorithms for a space station-based crew escape vehicle.

    PubMed

    Flanary, R; Hammen, D G; Ito, D; Rabalais, B W; Rishikof, B H; Siebold, K H

    2003-04-01

    An escape vehicle was designed to provide an emergency evacuation for crew members living on a space station. For maximum escape capability, the escape vehicle needs to have the ability to safely evacuate a station in a contingency scenario such as an uncontrolled (e.g., tumbling) station. This emergency escape sequence will typically be divided into three events: the first separation event (SEP1), the navigation reconstruction event, and the second separation event (SEP2). SEP1 is responsible for taking the spacecraft from its docking port to a distance greater than the maximum radius of the rotating station. The navigation reconstruction event takes place prior to the SEP2 event and establishes the orbital state to within the tolerance limits necessary for SEP2. The SEP2 event calculates and performs an avoidance burn to prevent station recontact during the next several orbits. This paper presents the tools and results for the whole separation sequence with an emphasis on the two separation events. The first challenge includes collision avoidance during the escape sequence while the station is in an uncontrolled rotational state, with rotation rates of up to 2 degrees per second. The task of avoiding a collision may require the use of the vehicle's de-orbit propulsion system for maximum thrust and minimum dwell time within the vicinity of the station. The thrust of the propulsion system is in a single direction, and can be controlled only by the attitude of the spacecraft. Escape algorithms based on a look-up table or analytical guidance can be implemented since the rotation rate and the angular momentum vector can be sensed onboard and a priori knowledge of the position and relative orientation is available. In addition, crew intervention has been provided for in the event of unforeseen obstacles in the escape path. The purpose of the SEP2 burn is to avoid re-contact with the station over an extended period of time. Performing this maneuver requires

  5. Security system signal supervision

    SciTech Connect

    Chritton, M.R. ); Matter, J.C. )

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees for understanding and applying line supervision techniques to security communication links. A review of security communication links is followed by detailed discussions of link physical protection and DC/AC static supervision and dynamic supervision techniques. Material is also presented on security for atmospheric transmission and video line supervision. A glossary of security communication line supervision terms is appended. 16 figs.

  6. Analysis of new bone, cartilage, and fibrosis tissue in healing murine allografts using whole slide imaging and a new automated histomorphometric algorithm

    PubMed Central

    Zhang, Longze; Chang, Martin; Beck, Christopher A; Schwarz, Edward M; Boyce, Brendan F

    2016-01-01

    Histomorphometric analysis of histologic sections of normal and diseased bone samples, such as healing allografts and fractures, is widely used in bone research. However, the utility of traditional semi-automated methods is limited because they are labor-intensive and can have high interobserver variability depending upon the parameters being assessed, and primary data cannot be re-analyzed automatically. Automated histomorphometry has long been recognized as a solution for these issues, and recently has become more feasible with the development of digital whole slide imaging and computerized image analysis systems that can interact with digital slides. Here, we describe the development and validation of an automated application (algorithm) using Visiopharm’s image analysis system to quantify newly formed bone, cartilage, and fibrous tissue in healing murine femoral allografts in high-quality digital images of H&E/alcian blue-stained decalcified histologic sections. To validate this algorithm, we compared the results obtained independently using OsteoMeasureTM and Visiopharm image analysis systems. The intraclass correlation coefficient between Visiopharm and OsteoMeasure was very close to one for all tissue elements tested, indicating nearly perfect reproducibility across methods. This new algorithm represents an accurate and labor-efficient method to quantify bone, cartilage, and fibrous tissue in healing mouse allografts. PMID:26816658

  7. Nonlinear analysis of the heartbeats in public patient ECGs using an automated PD2i algorithm for risk stratification of arrhythmic death

    PubMed Central

    Skinner, James E; Anchin, Jerry M; Weiss, Daniel N

    2008-01-01

    Heart rate variability (HRV) reflects both cardiac autonomic function and risk of arrhythmic death (AD). Reduced indices of HRV based on linear stochastic models are independent risk factors for AD in post-myocardial infarct cohorts. Indices based on nonlinear deterministic models have a significantly higher sensitivity and specificity for predicting AD in retrospective data. A need exists for nonlinear analytic software easily used by a medical technician. In the current study, an automated nonlinear algorithm, the time-dependent point correlation dimension (PD2i), was evaluated. The electrocardiogram (ECG) data were provided through a National Institutes of Health-sponsored internet archive (PhysioBank) and consisted of all 22 malignant arrhythmia ECG files (VF/VT) and 22 randomly selected arrhythmia files as the controls. The results were blindly calculated by automated software (Vicor 2.0, Vicor Technologies, Inc., Boca Raton, FL) and showed all analyzable VF/VT files had PD2i < 1.4 and all analyzable controls had PD2i > 1.4. Five VF/VT and six controls were excluded because surrogate testing showed the RR-intervals to contain noise, possibly resulting from the low digitization rate of the ECGs. The sensitivity was 100%, specificity 85%, relative risk > 100; p < 0.01, power > 90%. Thus, automated heartbeat analysis by the time-dependent nonlinear PD2i-algorithm can accurately stratify risk of AD in public data made available for competitive testing of algorithms. PMID:18728829

  8. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB), based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained based on the search over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using the efficient evolutionary search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in the process of manual selection of the hybrid model, in addition to predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber of USA. In case of both the rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where simultaneous exploration of both parametric and non-parametric components is done) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  9. The role of human-automation consensus in multiple unmanned vehicle scheduling.

    PubMed

    Cummings, M L; Clare, Andrew; Hart, Christin

    2010-02-01

    This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus for when to accept the automation's suggested prompts for new plan consideration as well as negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. In decentralized unmanned vehicle networks, operators who ignore the automation's requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems.

  10. Inductive Supervised Quantum Learning

    NASA Astrophysics Data System (ADS)

    Monràs, Alex; Sentís, Gael; Wittek, Peter

    2017-05-01

    In supervised learning, an inductive learning algorithm extracts general rules from observed training instances, then the rules are applied to test instances. We show that this splitting of training and application arises naturally, in the classical setting, from a simple independence requirement with a physical interpretation of being nonsignaling. Thus, two seemingly different definitions of inductive learning happen to coincide. This follows from the properties of classical information that break down in the quantum setup. We prove a quantum de Finetti theorem for quantum channels, which shows that in the quantum case, the equivalence holds in the asymptotic setting, that is, for large numbers of test instances. This reveals a natural analogy between classical learning protocols and their quantum counterparts, justifying a similar treatment, and allowing us to inquire about standard elements in computational learning theory, such as structural risk minimization and sample complexity.

  11. Inductive Supervised Quantum Learning.

    PubMed

    Monràs, Alex; Sentís, Gael; Wittek, Peter

    2017-05-12

    In supervised learning, an inductive learning algorithm extracts general rules from observed training instances, then the rules are applied to test instances. We show that this splitting of training and application arises naturally, in the classical setting, from a simple independence requirement with a physical interpretation of being nonsignaling. Thus, two seemingly different definitions of inductive learning happen to coincide. This follows from the properties of classical information that break down in the quantum setup. We prove a quantum de Finetti theorem for quantum channels, which shows that in the quantum case, the equivalence holds in the asymptotic setting, that is, for large numbers of test instances. This reveals a natural analogy between classical learning protocols and their quantum counterparts, justifying a similar treatment, and allowing us to inquire about standard elements in computational learning theory, such as structural risk minimization and sample complexity.

  12. Initial Clinical Experience With a New Automated Antitachycardia Pacing Algorithm: Feasibility and Safety in an Ambulatory Patient Cohort.

    PubMed

    Yee, Raymond; Fisher, John D; Birgersdotter-Green, Ulrika; Smith, Timothy W; Kenigsberg, David N; Canby, Robert; Jackson, Troy; Taepke, Robert; DeGroot, Paul

    2017-09-01

    Antitachycardia pacing (ATP) in implantable cardioverter-defibrillators (ICD) decreases patient shock burden but has recognized limitations. A new automated ATP (AATP) based on electrophysiological first principles was designed. The study objective was to assess the feasibility and safety of AATP in ambulatory ICD patients. Enrolled patients had dual chamber or cardiac resynchronization therapy ICDs, history of ≥1 ICD-treated ventricular tachycardia (VT)/ventricular fibrillation episode, or a recorded, sustained monomorphic VT. Detection was set to ventricular fibrillation number of intervals to detect=24/32, VT number of intervals to detect≥16, and a fast VT zone of 240 to 320 ms. AATP prescribed the components and delivery of successive ATP sequences in real time, using the same settings for all patients. ICD datalogs were uploaded every ≈3 months, at unscheduled visits, exit, and death. Episodes and adverse events were adjudicated by separate committees. Results were adjusted (generalized estimating equations) for multiple episodes. AATP was downloaded into the ICDs of 144 patients (121 men), aged 67.4±11.9 years, left ventricular ejection fraction 33.1±13.6% (n=137), and treated 1626 episodes in 49 patients during 14.5±5.1 months of follow-up. Datalogs permitted adjudication of 702 episodes, including 669 sustained monomorphic VT, 20 polymorphic VT, 10 supraventricular tachycardia, and 3 malsensing episodes. AATP terminated 39 of 69 (59% adjusted) sustained monomorphic VT in the fast VT zone, 509 of 590 (85% adjusted) in the VT zone, and 6 of 10 in the ventricular fibrillation zone. No supraventricular tachycardias converted to VT or ventricular fibrillation. No anomalous AATP behavior was observed. The new AATP algorithm safely generated ATP sequences and controlled therapy progression in all zones without need for individualized programming. © 2017 American Heart Association, Inc.

  13. Differences between the CME fronts tracked by an expert, an automated algorithm, and the Solar Stormwatch project

    NASA Astrophysics Data System (ADS)

    Barnard, L.; Scott, C. J.; Owens, M.; Lockwood, M.; Crothers, S. R.; Davies, J. A.; Harrison, R. A.

    2015-10-01

    Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.

  14. Classifying Force Spectroscopy of DNA Pulling Measurements Using Supervised and Unsupervised Machine Learning Methods.

    PubMed

    Karatay, Durmus U; Zhang, Jie; Harrison, Jeffrey S; Ginger, David S

    2016-04-25

    Dynamic force spectroscopy (DFS) measurements on biomolecules typically require classifying thousands of repeated force spectra prior to data analysis. Here, we study classification of atomic force microscope-based DFS measurements using machine-learning algorithms in order to automate selection of successful force curves. Notably, we collect a data set that has a testable positive signal using photoswitch-modified DNA before and after illumination with UV (365 nm) light. We generate a feature set consisting of six properties of force-distance curves to train supervised models and use principal component analysis (PCA) for an unsupervised model. For supervised classification, we train random forest models for binary and multiclass classification of force-distance curves. Random forest models predict successful pulls with an accuracy of 94% and classify them into five classes with an accuracy of 90%. The unsupervised method using Gaussian mixture models (GMM) reaches an accuracy of approximately 80% for binary classification.
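
    The pipeline sketched below mirrors the kind of analysis described above at a very small scale: a random forest trained on a six-feature representation of force-distance curves for supervised binary classification, plus a two-component Gaussian mixture fitted to the same features for unsupervised grouping. The file names, feature definitions, and hyperparameters are assumptions for illustration, not the authors' implementation.

      # Hypothetical sketch of supervised (random forest) and unsupervised (GMM)
      # classification of force-distance curves. Feature extraction and data
      # loading are assumed to have been done elsewhere.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.mixture import GaussianMixture
      from sklearn.model_selection import train_test_split

      # X: (n_curves, 6) per-curve features (e.g., rupture force, rupture
      # distance, baseline slope, ...); y: 1 = successful pull, 0 = not.
      X = np.load("curve_features.npy")      # placeholder file names
      y = np.load("curve_labels.npy")

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # Supervised: binary random forest classifier.
      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      rf.fit(X_tr, y_tr)
      print("random forest accuracy:", rf.score(X_te, y_te))

      # Unsupervised: two-component Gaussian mixture on the same features.
      gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
      labels = gmm.predict(X)
      # Cluster indices are arbitrary; align them with y before scoring.
      acc = max(np.mean(labels == y), np.mean(labels != y))
      print("GMM binary agreement:", acc)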

  15. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics (quark versus gluon tagging), we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
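
    The core idea of weakly supervised classification, learning a per-event classifier when only batch-level class proportions are known, can be illustrated with a minimal learning-from-label-proportions sketch. The toy features, batch sizes, and squared-proportion loss below are assumptions chosen for brevity, not the procedure of the paper above.

      # Minimal learning-from-label-proportions sketch: fit a logistic model so
      # that the mean predicted signal probability in each batch matches the
      # known class fraction of that batch.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Toy data: two feature distributions standing in for quark/gluon observables.
      def make_batch(n, frac):
          y = rng.random(n) < frac
          X = rng.normal(loc=np.where(y, 1.0, -1.0)[:, None], scale=1.5, size=(n, 2))
          return X, frac

      batches = [make_batch(500, f) for f in (0.2, 0.5, 0.8)]  # only proportions known

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def loss(w):
          # Squared error between predicted and known class proportion per batch.
          return sum((sigmoid(X @ w[:-1] + w[-1]).mean() - f) ** 2 for X, f in batches)

      res = minimize(loss, np.zeros(3), method="Nelder-Mead")
      print("fitted weights:", res.x)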

  16. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data.

    PubMed

    Ekberg, Peter; Su, Rong; Chang, Ernest W; Yun, Seok Hyun; Mattsson, Lars

    2014-02-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will, thus, introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 μm when evaluating with the OCT images using the same gauge block step height reference. The method may be suitable for industrial applications to the rapid inspection of manufactured samples with high accuracy and robustness.

  17. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will, thus, introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluating with the OCT images using the same gauge block step height reference. The method may be suitable for industrial applications to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018
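
    As a rough stand-in for the boundary-detection step, the following sketch finds, in each A-scan column of an OCT B-scan, the strongest dark-to-bright and bright-to-dark axial gradients and converts their separation into a layer thickness. The input array, smoothing kernel, and pixel pitch are assumptions; this is not the published algorithm.

      # Illustrative layer-boundary extraction from an OCT B-scan (depth x lateral):
      # pick the strongest axial intensity gradients in each A-scan column and
      # convert the separation of the two boundaries to a thickness.
      import numpy as np

      bscan = np.load("oct_bscan.npy").astype(float)   # placeholder file
      pixel_size_um = 3.0                              # assumed axial pixel pitch

      # Smooth along depth, then take the axial derivative.
      kernel = np.ones(5) / 5.0
      smoothed = np.apply_along_axis(lambda a: np.convolve(a, kernel, mode="same"),
                                     0, bscan)
      grad = np.diff(smoothed, axis=0)

      top = np.argmax(grad, axis=0)        # strongest dark-to-bright transition
      bottom = np.argmin(grad, axis=0)     # strongest bright-to-dark transition
      thickness_um = np.abs(bottom - top) * pixel_size_um
      print("median layer thickness (um):", np.median(thickness_um))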

  18. A Supervision of Solidarity

    ERIC Educational Resources Information Center

    Reynolds, Vikki

    2010-01-01

    This article illustrates an approach to therapeutic supervision informed by a philosophy of solidarity and social justice activism. Called a "Supervision of Solidarity", this approach addresses the particular challenges in the supervision of therapists who work alongside clients who are subjected to social injustice and extreme marginalization. It…

  19. Supervising Schooling, Not Teachers.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    1997-01-01

    Because knowledge work occurs inside teachers' heads, it cannot be supervised directly. School improvement can become a permanent, ongoing organizational function by replacing traditional instructional supervision with a supervision-for-school-improvement function. The focus then shifts to examining a district's work processes, social…

  20. Automated clustering of probe molecules from solvent mapping of protein surfaces: new algorithms applied to hot-spot mapping and structure-based drug design

    NASA Astrophysics Data System (ADS)

    Lerner, Michael G.; Meagher, Kristin L.; Carlson, Heather A.

    2008-10-01

    Use of solvent mapping, based on multiple-copy minimization (MCM) techniques, is common in structure-based drug discovery. The minima of small-molecule probes define locations for complementary interactions within a binding pocket. Here, we present improved methods for MCM. In particular, a Jarvis-Patrick (JP) method is outlined for grouping the final locations of minimized probes into physical clusters. This algorithm has been tested through a study of protein-protein interfaces, showing the process to be robust, deterministic, and fast in the mapping of protein "hot spots." Improvements in the initial placement of probe molecules are also described. A final application to HIV-1 protease shows how our automated technique can be used to partition data too complicated to analyze by hand. These new automated methods may be easily and quickly extended to other protein systems, and our clustering methodology may be readily incorporated into other clustering packages.
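
    A generic Jarvis-Patrick (shared-nearest-neighbour) clustering of probe-minimum coordinates can be written compactly: two minima join the same cluster when each lies in the other's K-nearest-neighbour list and they share at least Kmin common neighbours. The input file and the parameters K and Kmin below are placeholders, and the sketch is not the MCM code described above.

      # Generic Jarvis-Patrick clustering of 3-D probe minima.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors
      from scipy.sparse import lil_matrix
      from scipy.sparse.csgraph import connected_components

      coords = np.load("probe_minima_xyz.npy")   # (n_probes, 3), placeholder
      K, Kmin = 12, 4                            # assumed neighbourhood parameters

      nn = NearestNeighbors(n_neighbors=K + 1).fit(coords)
      _, idx = nn.kneighbors(coords)
      neighbors = [set(row[1:]) for row in idx]  # drop self (column 0)

      n = len(coords)
      adj = lil_matrix((n, n), dtype=bool)
      for i in range(n):
          for j in neighbors[i]:
              # Mutual neighbours with enough shared neighbours are linked.
              if i in neighbors[j] and len(neighbors[i] & neighbors[j]) >= Kmin:
                  adj[i, j] = True

      n_clusters, labels = connected_components(adj.tocsr(), directed=False)
      print(f"{n_clusters} clusters found")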

  1. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  2. FindFoci: A Focus Detection Algorithm with Automated Parameter Training That Closely Matches Human Assignments, Reduces Human Inconsistencies and Increases Speed of Analysis

    PubMed Central

    Herbert, Alex D.; Carr, Antony M.; Hoffmann, Eva

    2014-01-01

    Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ. PMID:25478967
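
    The parameter-training idea can be pictured as a small grid search that picks the relative intensity threshold and minimum focus size whose automated counts best match human counts on a training set. The image arrays, parameter grids, and counting rule below are assumptions, not the FindFoci implementation.

      # Toy parameter training against human focus counts: grid-search a relative
      # threshold and minimum size so automated counts best match manual counts.
      import numpy as np
      from scipy import ndimage as ndi

      images = np.load("training_images.npy")        # (n_images, H, W), placeholder
      manual_counts = np.load("manual_counts.npy")   # human focus count per image

      def count_foci(img, rel_threshold, min_size):
          mask = img > rel_threshold * img.max()
          labels, n = ndi.label(mask)
          sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
          return int(np.sum(sizes >= min_size))

      best = None
      for thr in np.linspace(0.3, 0.9, 13):
          for min_size in (2, 4, 8, 16):
              counts = [count_foci(im, thr, min_size) for im in images]
              err = np.mean(np.abs(np.array(counts) - manual_counts))
              if best is None or err < best[0]:
                  best = (err, thr, min_size)

      print("best mean absolute count error %.2f at threshold %.2f, min size %d" % best)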

  3. Nonalcoholic Fatty Liver Disease (NAFLD) in the Veterans Administration Population: Development and Validation of an Algorithm for NAFLD using Automated Data

    PubMed Central

    Husain, Nisreen; Blais, Peter; Kramer, Jennifer; Kowalkowski, Marc; Richardson, Peter; El-Serag, Hashem B.; Kanwal, Fasiha

    2017-01-01

    Background In practice, non-alcoholic fatty liver disease (NAFLD) is diagnosed based on elevated liver enzymes and confirmatory liver biopsy or abdominal imaging. Neither method is feasible in identifying individuals with NAFLD in a large-scale healthcare system. Aim To develop and validate an algorithm to identify patients with NAFLD using automated data. Methods Using the Veterans Administration Corporate Data Warehouse, we identified patients who had persistent ALT elevation (≥2 values ≥40 IU/ml ≥6 months apart) and did not have evidence of hepatitis B, hepatitis C, or excessive alcohol use. We conducted a structured chart review of 450 patients classified as NAFLD and 150 patients who were classified as non-NAFLD by the database algorithm, and subsequently refined the database algorithm. Results The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for the initial database definition of NAFLD were 78.4% (95% CI=70.0-86.8%), 74.5% (95% CI=68.1-80.9%), 64.1% (95% CI=56.4-71.7%), and 85.6% (95% CI=79.4-91.8%), respectively. Reclassifying patients as having NAFLD if they had 2 elevated ALTs that were at least 6 months apart but within 2 years of each other increased the specificity and PPV of the algorithm to 92.4% (95% CI=88.8-96.0%) and 80.8% (95% CI=72.5-89.0%), respectively. However, the sensitivity and NPV decreased to 55.0% (95% CI=46.1-63.9%) and 78.0% (95% CI=72.1-83.8%), respectively. Conclusions Predictive algorithms using automated data can be used to identify patients with NAFLD, determine prevalence of NAFLD at the system-wide level, and may help select a target population for future clinical studies in veterans with NAFLD. PMID:25155259
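
    The refined database rule (two ALT values ≥40 IU/ml that are at least 6 months but no more than 2 years apart, after the viral hepatitis and alcohol exclusions) can be expressed roughly as follows in pandas; the file and column names are hypothetical.

      # Rough pandas sketch of the refined NAFLD rule: >=2 ALT values >=40 IU/ml
      # that are at least 6 months but within 2 years of each other. Assumes
      # hepatitis B/C and alcohol-use exclusions were applied upstream.
      import pandas as pd

      labs = pd.read_csv("alt_results.csv", parse_dates=["test_date"])  # patient_id, test_date, alt

      def meets_nafld_rule(group):
          dates = group.loc[group["alt"] >= 40, "test_date"].sort_values().tolist()
          for i in range(len(dates)):
              for j in range(i + 1, len(dates)):
                  gap = (dates[j] - dates[i]).days
                  if 183 <= gap <= 730:   # ~6 months to 2 years
                      return True
          return False

      nafld_ids = (labs.groupby("patient_id")
                       .apply(meets_nafld_rule)
                       .loc[lambda s: s]
                       .index.tolist())
      print(f"{len(nafld_ids)} patients meet the database NAFLD definition")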

  4. Validation of the Total Visual Acuity Extraction Algorithm (TOVA) for Automated Extraction of Visual Acuity Data From Free Text, Unstructured Clinical Records

    PubMed Central

    Baughman, Douglas M.; Su, Grace L.; Tsui, Irena; Lee, Cecilia S.; Lee, Aaron Y.

    2017-01-01

    Purpose With increasing volumes of electronic health record data, algorithm-driven extraction may aid manual extraction. Visual acuity often is extracted manually in vision research. The total visual acuity extraction algorithm (TOVA) is presented and validated for automated extraction of visual acuity from free text, unstructured clinical notes. Methods Consecutive inpatient ophthalmology notes over an 8-year period from the University of Washington healthcare system in Seattle, WA were used for validation of TOVA. The total visual acuity extraction algorithm applied natural language processing to recognize Snellen visual acuity in free text notes and assign laterality. The best corrected measurement was determined for each eye and converted to logMAR. The algorithm was validated against manual extraction of a subset of notes. Results A total of 6266 clinical records were obtained giving 12,452 data points. In a subset of 644 validated notes, comparison of manually extracted data versus TOVA output showed 95% concordance. Interrater reliability testing gave κ statistics of 0.94 (95% confidence interval [CI], 0.89–0.99), 0.96 (95% CI, 0.94–0.98), 0.95 (95% CI, 0.92–0.98), and 0.94 (95% CI, 0.90–0.98) for acuity numerators, denominators, adjustments, and signs, respectively. Pearson correlation coefficient was 0.983. Linear regression showed an R2 of 0.966 (P < 0.0001). Conclusions The total visual acuity extraction algorithm is a novel tool for extraction of visual acuity from free text, unstructured clinical notes and provides an open source method of data extraction. Translational Relevance Automated visual acuity extraction through natural language processing can be a valuable tool for data extraction from free text ophthalmology notes. PMID:28299240
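
    A stripped-down version of the idea, regex recognition of Snellen fractions in free text followed by laterality assignment and logMAR conversion, might look like the sketch below; the pattern and the laterality handling are simplified assumptions, not the TOVA source code.

      # Simplified Snellen-acuity extraction from free text and logMAR conversion.
      import math
      import re

      SNELLEN = re.compile(r"\b(OD|OS)\b[^0-9]{0,20}(20)\s*/\s*(\d{2,3})", re.IGNORECASE)

      def extract_acuities(note):
          results = {}
          for eye, num, den in SNELLEN.findall(note):
              # logMAR = log10(denominator / numerator) for a Snellen fraction.
              logmar = math.log10(int(den) / int(num))
              eye = eye.upper()
              # Keep the best (lowest logMAR) value per eye.
              results[eye] = min(results.get(eye, float("inf")), logmar)
          return results

      note = "Visual acuity today: OD 20/40 with correction, OS 20/25."
      print(extract_acuities(note))   # {'OD': 0.301..., 'OS': 0.096...}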

  5. Repeatability and Reproducibility of Eight Macular Intra-Retinal Layer Thicknesses Determined by an Automated Segmentation Algorithm Using Two SD-OCT Instruments

    PubMed Central

    Huang, Shenghai; Leng, Lin; Zhu, Dexi; Lu, Fan

    2014-01-01

    Purpose To evaluate the repeatability, reproducibility, and agreement of thickness profile measurements of eight intra-retinal layers determined by an automated algorithm applied to optical coherence tomography (OCT) images from two different instruments. Methods Twenty normal subjects (12 males, 8 females; 24 to 32 years old) were enrolled. Imaging was performed with a custom built ultra-high resolution OCT instrument (UHR-OCT, ∼3 µm resolution) and a commercial RTVue100 OCT (∼5 µm resolution) instrument. An automated algorithm was developed to segment the macular retina into eight layers and quantitate the thickness of each layer. The right eye of each subject was imaged two times by the first examiner using each instrument to assess intra-observer repeatability and once by the second examiner to assess inter-observer reproducibility. The intraclass correlation coefficient (ICC) and coefficients of repeatability and reproducibility (COR) were analyzed to evaluate the reliability. Results The ICCs for the intra-observer repeatability and inter-observer reproducibility of both SD-OCT instruments were greater than 0.945 for the total retina and all intra-retinal layers, except the photoreceptor inner segments, which ranged from 0.051 to 0.643, and the outer segments, which ranged from 0.709 to 0.959. The CORs were less than 6.73% for the total retina and all intra-retinal layers. The total retinal thickness measured by the UHR-OCT was significantly thinner than that measured by the RTVue100. However, the ICC for agreement of the thickness profiles between UHR-OCT and RTVue OCT were greater than 0.80 except for the inner segment and outer segment layers. Conclusions Thickness measurements of the intra-retinal layers determined by the automated algorithm are reliable when applied to images acquired by the UHR-OCT and RTVue100 instruments. PMID:24505345

  6. Use of administrative and electronic health record data for development of automated algorithms for childhood diabetes case ascertainment and type classification: the SEARCH for Diabetes in Youth Study

    PubMed Central

    Zhong, Victor W.; Pfaff, Emily R.; Beavers, Daniel P.; Thomas, Joan; Jaacks, Lindsay M.; Bowlby, Deborah A.; Carey, Timothy S.; Lawrence, Jean M.; Dabelea, Dana; Hamman, Richard F.; Pihoker, Catherine; Saydah, Sharon H.; Mayer-Davis, Elizabeth J.

    2014-01-01

    Background The performance of automated algorithms for childhood diabetes case ascertainment and type classification may differ by demographic characteristics. Objective This study evaluated the potential of administrative and electronic health record (EHR) data from a large academic care delivery system to conduct diabetes case ascertainment in youth according to type, age and race/ethnicity. Subjects 57,767 children aged <20 years as of December 31, 2011 seen at University of North Carolina Health Care System in 2011 were included. Methods Using an initial algorithm including billing data, patient problem lists, laboratory test results and diabetes related medications between July 1, 2008 and December 31, 2011, presumptive cases were identified and validated by chart review. More refined algorithms were evaluated by type (type 1 versus type 2), age (<10 versus ≥10 years) and race/ethnicity (non-Hispanic white versus “other”). Sensitivity, specificity and positive predictive value were calculated and compared. Results The best algorithm for ascertainment of diabetes cases overall was billing data. The best type 1 algorithm was the ratio of the number of type 1 billing codes to the sum of type 1 and type 2 billing codes ≥0.5. A useful algorithm to ascertain type 2 youth with “other” race/ethnicity was identified. Considerable age and racial/ethnic differences were present in type-non-specific and type 2 algorithms. Conclusions Administrative and EHR data may be used to identify cases of childhood diabetes (any type), and to identify type 1 cases. The performance of type 2 case ascertainment algorithms differed substantially by race/ethnicity. PMID:24913103

  7. Use of administrative and electronic health record data for development of automated algorithms for childhood diabetes case ascertainment and type classification: the SEARCH for Diabetes in Youth Study.

    PubMed

    Zhong, Victor W; Pfaff, Emily R; Beavers, Daniel P; Thomas, Joan; Jaacks, Lindsay M; Bowlby, Deborah A; Carey, Timothy S; Lawrence, Jean M; Dabelea, Dana; Hamman, Richard F; Pihoker, Catherine; Saydah, Sharon H; Mayer-Davis, Elizabeth J

    2014-12-01

    The performance of automated algorithms for childhood diabetes case ascertainment and type classification may differ by demographic characteristics. This study evaluated the potential of administrative and electronic health record (EHR) data from a large academic care delivery system to conduct diabetes case ascertainment in youth according to type, age, and race/ethnicity. A total of 57 767 children aged <20 yr as of 31 December 2011 seen at the University of North Carolina Health Care System in 2011 were included. Using an initial algorithm including billing data, patient problem lists, laboratory test results, and diabetes-related medications between 1 July 2008 and 31 December 2011, presumptive cases were identified and validated by chart review. More refined algorithms were evaluated by type (type 1 vs. type 2), age (<10 vs. ≥10 yr), and race/ethnicity (non-Hispanic White vs. 'other'). Sensitivity, specificity, and positive predictive value were calculated and compared. The best algorithm for ascertainment of overall diabetes cases was billing data. The best type 1 algorithm was the ratio of the number of type 1 billing codes to the sum of type 1 and type 2 billing codes ≥0.5. A useful algorithm to ascertain youth with type 2 diabetes with 'other' race/ethnicity was identified. Considerable age and racial/ethnic differences were present in type-non-specific and type 2 algorithms. Administrative and EHR data may be used to identify cases of childhood diabetes (any type), and to identify type 1 cases. The performance of type 2 case ascertainment algorithms differed substantially by race/ethnicity. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
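
    The best-performing type 1 rule, calling a child type 1 when the ratio of type 1 billing codes to all type-specific billing codes is at least 0.5, is simple enough to sketch directly; the input file, the dm_type flag, and the column names are assumptions.

      # Sketch of the type 1 rule described above: classify as type 1 when
      #   (# type 1 billing codes) / (# type 1 + # type 2 billing codes) >= 0.5.
      # Assumes each billing row is already flagged with dm_type in {1, 2}.
      import pandas as pd

      billing = pd.read_csv("diabetes_billing.csv")   # columns: patient_id, dm_type

      counts = (billing.groupby(["patient_id", "dm_type"])
                       .size()
                       .unstack(fill_value=0)
                       .rename(columns={1: "t1_codes", 2: "t2_codes"}))

      ratio = counts["t1_codes"] / (counts["t1_codes"] + counts["t2_codes"])
      counts["predicted_type"] = (ratio >= 0.5).map({True: "type 1", False: "type 2"})
      print(counts["predicted_type"].value_counts())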

  8. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    This report describes an automated method of frequency determination in software metric data through the use of the Multiple Signal Classification (MUSIC) algorithm. It includes a graph showing the estimated power spectral density (PSD) generated by the MUSIC algorithm from the data set used, and notes that although other spectral estimators could be implemented in the module, the MUSIC algorithm is the preferred choice.

  9. Detection of facilities in satellite imagery using semi-supervised image classification and auxiliary contextual observables

    SciTech Connect

    Harvey, Neal R; Ruggiero, Christy E; Pawley, Norma H; Brumby, Steven P; Macdonald, Brian; Balick, Lee; Oyer, Alden

    2009-01-01

    Detecting complex targets, such as facilities, in commercially available satellite imagery is a difficult problem that human analysts try to solve by applying world knowledge. Often there are known observables that can be extracted by pixel-level feature detectors that can assist in the facility detection process. Individually, each of these observables is not sufficient for an accurate and reliable detection, but in combination, these auxiliary observables may provide sufficient context for detection by a machine learning algorithm. We describe an approach for automatic detection of facilities that uses an automated feature extraction algorithm to extract auxiliary observables, and a semi-supervised assisted target recognition algorithm to then identify facilities of interest. We illustrate the approach using an example of finding schools in Quickbird image data of Albuquerque, New Mexico. We use Los Alamos National Laboratory's Genie Pro automated feature extraction algorithm to find a set of auxiliary features that should be useful in the search for schools, such as parking lots, large buildings, sports fields and residential areas and then combine these features using Genie Pro's assisted target recognition algorithm to learn a classifier that finds schools in the image data.

  10. Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.

    PubMed

    Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P

    2015-11-01

    We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental to maintain high system performance and prevent cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation. © 2015, Human Factors and Ergonomics Society.

  11. A New List of Flux Transfer Events in the CLUSTER Data by Use of an Automated Technique

    NASA Astrophysics Data System (ADS)

    Sipes, T.; Karimabadi, H.; Wang, Y.; Lavraud, B.

    2007-12-01

    We have used our newly developed data mining software called MineTool for automated detection of flux transfer events (FTEs) in the CLUSTER data. Data mining techniques can be divided into two types, supervised and unsupervised. In supervised algorithms like MineTool, one teaches the algorithm using examples from labeled data. Considering the case of FTEs, the user would provide examples of FTEs as well as examples of non-FTEs and label (as FTE or non-FTE) the data. We used a list of FTEs compiled by Y. Wang to create the labeled data. We then used MineTool on this data set to develop an automated detection model for FTEs. Finally we applied this model to CLUSTER data to search for new FTEs. We have compiled a list of new FTEs which are made publicly available.

  12. Performance Monitoring Applied to System Supervision.

    PubMed

    Somon, Bertille; Campagne, Aurélie; Delorme, Arnaud; Berberian, Bruno

    2017-01-01

    Nowadays, automation is present in every aspect of our daily life and has some benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences as it changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans, to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of the self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem.

  13. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    ERIC Educational Resources Information Center

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  15. Semi-supervised Learning for Phenotyping Tasks.

    PubMed

    Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K

    2015-01-01

    Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
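
    The augmented-EM idea can be seen as a self-training loop in which unlabeled examples enter each refit with their posterior probabilities scaled by a fixed weighting factor. The sketch below uses a Gaussian naive Bayes base model and is a generic illustration under that assumption, not the phenotyping system described above.

      # Generic augmented-EM-style semi-supervised loop with Gaussian naive Bayes:
      # unlabeled posteriors are downweighted by `lam` in every refit.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      def augmented_em(X_lab, y_lab, X_unlab, lam=0.1, n_iter=10):
          classes = np.unique(y_lab)
          model = GaussianNB().fit(X_lab, y_lab)
          for _ in range(n_iter):
              post = model.predict_proba(X_unlab)        # E-step on unlabeled data
              X_all, y_all, w_all = [X_lab], [y_lab], [np.ones(len(y_lab))]
              for k, c in enumerate(classes):
                  # Each unlabeled point contributes to class c with weight lam * P(c|x).
                  X_all.append(X_unlab)
                  y_all.append(np.full(len(X_unlab), c))
                  w_all.append(lam * post[:, k])
              model = GaussianNB().fit(np.vstack(X_all), np.concatenate(y_all),
                                       sample_weight=np.concatenate(w_all))
          return model

    As the abstract notes, cross-validating the weighting factor does not always pick the best value, so in this sketch lam is simply fixed by hand.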

  16. An Automated Algorithm for Measurement of Surgical Tip Excursion in Ultrasonic Vibration Using the Spatial 2-Dimensional Fourier Transform in an Optical Image

    NASA Astrophysics Data System (ADS)

    Manandhar, Prakash; Ward, Andrew; Allen, Patrick; Cotter, Daniel J.

    The International Electrotechnical Commission (IEC) has defined a standard, IEC 61847 (First Edition, 1998), for the characterization of ultrasonic surgical systems. This standard prescribes several methods for measurement of primary tip vibration excursion. The first method described in the standard uses an optical microscope and relies on the motion blur of a vibrating object as it is imaged at the low frame rates (e.g., 30 Hz) of conventional video equipment. This method, which predates the standard, is widely used in ultrasonic surgical instrument design, and tip excursion is one of the key parameters that surgeons who use these devices are aware of. It is relatively easily measured using a microscope system. Although widespread, the method's accuracy is highly dependent on multiple factors such as operator training, microscope lighting, and modulation of surgical tip motion. It is also a manual and time-consuming measurement, which makes continuous measurement describing dynamics on the microsecond scale impossible. Here we describe an algorithm to automate this measurement so that it can be done at high speed without operator training, reducing human error and operator variation. The algorithm derives from techniques used in motion blur estimation and reduction in the image processing literature. A two-dimensional spatial Fourier transform is computed from the microscope image of an ultrasonically vibrating tip. A peak detection algorithm is used along with pre-processing to reduce noise. Separation of peaks in the Fourier domain is used to estimate tip excursion. We present data showing an error of about 1% between the manual and automated methods when measurements are in the range of 300 microns, and about 20% when measurements are in the range of 30 microns.

  17. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.

  18. Experiments in Virtual Supervision.

    ERIC Educational Resources Information Center

    Walker, Rob

    This paper examines the use of First Class conferencing software to create a virtual culture among research students and as a vehicle for supervision and advising. Topics discussed include: computer-mediated communication and research; entry to cyberculture, i.e., research students' induction into the research community; supervision and the…

  19. Networks of Professional Supervision

    ERIC Educational Resources Information Center

    Annan, Jean; Ryba, Ken

    2013-01-01

    An ecological analysis of the supervisory activity of 31 New Zealand school psychologists examined simultaneously the theories of school psychology, supervision practices, and the contextual qualities that mediated participants' supervisory actions. The findings indicated that the school psychologists worked to achieve the supervision goals of…

  20. Spirituality in Supervision.

    ERIC Educational Resources Information Center

    Polanski, Patricia J.

    2003-01-01

    As an important component of counselor education and development, supervision is a likely teaching and learning opportunity to address spirituality in counseling. The author examines ways in which spiritual and religious issues might be presented in supervision, using the focus areas of the Discrimination Model, namely intervention,…

  1. Supervision in Libraries.

    ERIC Educational Resources Information Center

    Bailey, Martha J.

    Although the literature of library administration draws extensively on that of business management, it is difficult to compare library supervision to business or industrial supervision. Library supervisors often do not have managerial training and may consider their management role as secondary. The educational level of the staff they supervise…

  2. Supervised Business Experience Handbook.

    ERIC Educational Resources Information Center

    Missouri Univ., Columbia. Instructional Materials Lab.

    This handbook explains how to conduct a supervised business education experience program in Missouri, outlining the program, rationale, components, principles, and resources. Specifically, the 11 units cover the following: (1) introduction to supervised business experience; (2) program design; (3) state policies; (4) the advisory committee; (5)…

  3. Theme: Supervised Experience.

    ERIC Educational Resources Information Center

    Cox, David E.; And Others

    1991-01-01

    Includes "It's Time to Stop Quibbling over the Acronym" (Cox); "Information Rich--Experience Poor" (Elliot et al.); "Supervised Agricultural Experience Selection Process" (Yokum, Boggs); "Point System" (Fraze, Vaughn); "Urban Diversity Rural Style" (Morgan, Henry); "Nonoccupational Supervised Experience" (Croom); "Reflecting Industry" (Miller);…

  5. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    NASA Astrophysics Data System (ADS)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way of our life. However, the existing layout of distribution centers can't fulfill the storage and picking demands of e-commerce sufficiently. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce in logistics. Meanwhile, a matching problem, concerning with the improvement of picking efficiency in new system, is studied in this paper. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on statement and model of this problem. The main idea of this algorithm is, with some heuristic strategies based on similarity coefficients, minimizing the transportations of items which can not arrive in the destination picking stations just through direct conveyors. The experimental results based on the cases generated by computers show that the average reduced rate of indirect transport times can reach 14.36% with the application of multi-stage heuristic algorithm. For the cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposed a modified system and a multi-stage heuristic algorithm that can reduce the travelling distance of totes effectively and improve the whole performance of e-commerce distribution center.

  6. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    USGS Publications Warehouse

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, are an important measure to determine the quantity of food that is produced, where they are produced, and when they are produced (e.g. seasonality). Furthermore, croplands are known as water guzzlers by consuming anywhere between 70% and 90% of all human water use globally. Given these facts and the increase in global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping year-after-year and/or season-after-season is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year-after-year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at continental level and (b) rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands, rich history of agriculture, and yet nonexistent routine yearly generated cropland products using multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014) produced using QSMT was used as a knowledge base to train and develop the ACCA algorithm that was then applied to the MODIS time-series data for the years 2000–2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) versus RCP2014 provided an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer’s accuracies varying

  7. An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters

    PubMed Central

    Rempe, Michael J; Clegern, William C; Wisor, Jonathan P

    2015-01-01

    Introduction Rodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2–10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS), on the basis of visual inspection. Automated state scoring can minimize the burden associated with state and thereby facilitate the use of shorter epoch durations. Methods We developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human-scored sleep-state scoring of data from C57BL/6J and BALB/CJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration. Results More than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of epochs scored as REMS by the human, more than 10% were scored as SWS by the machine and 18 (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated method or the manual scoring method. Error associated with mathematical modeling of temporal dynamics of both EEG slow-wave activity and cerebral lactate either did not differ significantly when state scoring was done with automated versus visual scoring, or was reduced with automated state
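
    A compact analogue of the scoring pipeline, principal component analysis followed by a Gaussian naive Bayes classifier applied to per-epoch EEG/EMG features, is sketched below. The feature extraction, epoch length, number of components, and file names are assumptions rather than the published implementation.

      # Sketch of a PCA + naive Bayes state-scoring pipeline on per-epoch EEG/EMG
      # features (e.g., band powers and EMG RMS).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      features = np.load("epoch_features.npy")   # (n_epochs, n_features), placeholder
      labels = np.load("human_scores.npy")       # 0 = wake, 1 = SWS, 2 = REMS

      scorer = make_pipeline(PCA(n_components=3), GaussianNB())
      acc = cross_val_score(scorer, features, labels, cv=5)
      print("cross-validated agreement with human scoring: %.1f%%" % (100 * acc.mean()))

      # Score a new recording epoch by epoch.
      scorer.fit(features, labels)
      new_states = scorer.predict(np.load("new_epoch_features.npy"))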

  8. Power of automated algorithms for combining time-line follow-back and urine drug screening test results in stimulant-abuse clinical trials.

    PubMed

    Oden, Neal L; VanVeldhuisen, Paul C; Wakim, Paul G; Trivedi, Madhukar H; Somoza, Eugene; Lewis, Daniel

    2011-09-01

    In clinical trials of treatment for stimulant abuse, researchers commonly record both Time-Line Follow-Back (TLFB) self-reports and urine drug screen (UDS) results. To compare the power of self-report, qualitative (use vs. no use) UDS assessment, and various algorithms to generate self-report-UDS composite measures to detect treatment differences via t-test in simulated clinical trial data. We performed Monte Carlo simulations patterned in part on real data to model self-report reliability, UDS errors, dropout, informatively missing UDS reports, incomplete adherence to a urine donation schedule, temporal correlation of drug use, number of days in the study period, number of patients per arm, and distribution of drug-use probabilities. Investigated algorithms include maximum likelihood and Bayesian estimates, self-report alone, UDS alone, and several simple modifications of self-report (referred to here as ELCON algorithms) which eliminate perceived contradictions between it and UDS. Among the algorithms investigated, simple ELCON algorithms gave rise to the most powerful t-tests to detect mean group differences in stimulant drug use. Further investigation is needed to determine if simple, naïve procedures such as the ELCON algorithms are optimal for comparing clinical study treatment arms. But researchers who currently require an automated algorithm in scenarios similar to those simulated for combining TLFB and UDS to test group differences in stimulant use should consider one of the ELCON algorithms. This analysis continues a line of inquiry which could determine how best to measure outpatient stimulant use in clinical trials (NIDA. NIDA Monograph-57: Self-Report Methods of Estimating Drug Abuse: Meeting Current Challenges to Validity. NTIS PB 88248083. Bethesda, MD: National Institutes of Health, 1985; NIDA. NIDA Research Monograph 73: Urine Testing for Drugs of Abuse. NTIS PB 89151971. Bethesda, MD: National Institutes of Health, 1987; NIDA. NIDA Research

  9. A practical tool for public health surveillance: Semi-automated coding of short injury narratives from large administrative databases using Naïve Bayes algorithms.

    PubMed

    Marucci-Wellman, Helen R; Lehto, Mark R; Corns, Helen L

    2015-11-01

    Public health surveillance programs in the U.S. are undergoing landmark changes with the availability of electronic health records and advancements in information technology. Injury narratives gathered from hospital records, workers compensation claims or national surveys can be very useful for identifying antecedents to injury or emerging risks. However, classifying narratives manually can become prohibitive for large datasets. The purpose of this study was to develop a human-machine system that could be relatively easily tailored to routinely and accurately classify injury narratives from large administrative databases such as workers compensation. We used a semi-automated approach based on two Naïve Bayesian algorithms to classify 15,000 workers compensation narratives into two-digit Bureau of Labor Statistics (BLS) event (leading to injury) codes. Narratives were filtered out for manual review if the algorithms disagreed or made weak predictions. This approach resulted in an overall accuracy of 87%, with consistently high positive predictive values across all two-digit BLS event categories including the very small categories (e.g., exposure to noise, needle sticks). The Naïve Bayes algorithms were able to identify and accurately machine code most narratives leaving only 32% (4853) for manual review. This strategy substantially reduces the need for resources compared with manual review alone.
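
    The filtering strategy, machine-coding a narrative only when two naive Bayes models agree and are confident and sending the rest to manual review, can be sketched as follows. The two text representations, the confidence cutoff, and the data-loading helpers are assumptions, not the authors' system.

      # Sketch of the human-machine strategy described above: two naive Bayes
      # models built on different text representations; a narrative is auto-coded
      # only when they agree and the prediction is confident.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      train_texts, train_codes = load_training_narratives()   # assumed helper
      new_texts = load_unclassified_narratives()               # assumed helper

      model_words = make_pipeline(CountVectorizer(), MultinomialNB())
      model_bigrams = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
      for m in (model_words, model_bigrams):
          m.fit(train_texts, train_codes)

      p1 = model_words.predict_proba(new_texts)
      p2 = model_bigrams.predict_proba(new_texts)
      pred1 = model_words.classes_[p1.argmax(axis=1)]
      pred2 = model_bigrams.classes_[p2.argmax(axis=1)]
      confident = (p1.max(axis=1) > 0.8) & (p2.max(axis=1) > 0.8)   # assumed cutoff
      auto_coded = (pred1 == pred2) & confident

      print(f"{auto_coded.mean():.0%} auto-coded, {(~auto_coded).mean():.0%} to manual review")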

  10. Supervised and Unsupervised Learning Technology in the Study of Rodent Behavior.

    PubMed

    Gris, Katsiaryna V; Coutu, Jean-Philippe; Gris, Denis

    2017-01-01

    Quantifying behavior is a challenge for scientists studying neuroscience, ethology, psychology, pathology, etc. Until now, behavior was mostly considered as qualitative descriptions of postures or labor intensive counting of bouts of individual movements. Many prominent behavioral scientists conducted studies describing postures of mice and rats, depicting step by step eating, grooming, courting, and other behaviors. Automated video assessment technologies permit scientists to quantify daily behavioral patterns/routines, social interactions, and postural changes in an unbiased manner. Here, we extensively reviewed published research on the topic of the structural blocks of behavior and proposed a structure of behavior based on the latest publications. We discuss the importance of defining a clear structure of behavior to allow professionals to write viable algorithms. We presented a discussion of technologies that are used in automated video assessment of behavior in mice and rats. We considered advantages and limitations of supervised and unsupervised learning. We presented the latest scientific discoveries that were made using automated video assessment. In conclusion, we proposed that the automated quantitative approach to evaluating animal behavior is the future of understanding the effect of brain signaling, pathologies, genetic content, and environment on behavior.

  11. Butterflies, Bugs and Supervising Teachers.

    ERIC Educational Resources Information Center

    Morris, John E.; And Others

    1979-01-01

    Presented is an effective, nonthreatening way to provide feedback to supervising teachers. It involves an exercise called "Butterflies (ways supervising teachers helped) and Bugs (behaviors of supervising teachers which were detrimental or unprofessional)." (KC)

  12. Semi-supervised clustering methods

    PubMed Central

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830
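
    One of the simplest k-means modifications of this kind, seeded k-means, initialises the cluster centroids from whatever labeled observations are available and then clusters all of the data; a minimal sketch with toy data follows.

      # Seeded k-means: initialise centroids from the labeled observations, then
      # run ordinary k-means on all data. A minimal semi-supervised clustering sketch.
      import numpy as np
      from sklearn.cluster import KMeans

      def seeded_kmeans(X, X_seed, y_seed, **kwargs):
          classes = np.unique(y_seed)
          seeds = np.vstack([X_seed[y_seed == c].mean(axis=0) for c in classes])
          km = KMeans(n_clusters=len(classes), init=seeds, n_init=1, **kwargs)
          return km.fit(X)

      # Example with toy data: 20 labeled points guide the clustering of 1000.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(m, 1.0, size=(500, 2)) for m in (-2, 2)])
      X_seed = np.vstack([X[:10], X[500:510]])
      y_seed = np.concatenate([np.zeros(10, dtype=int), np.ones(10, dtype=int)])
      model = seeded_kmeans(X, X_seed, y_seed)
      print(model.cluster_centers_)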

  13. Automated multidetector row CT dataset segmentation with an interactive watershed transform (IWT) algorithm: Part 1. Understanding the IWT technique.

    PubMed

    Heath, David G; Hahn, Horst K; Johnson, Pamela T; Fishman, Elliot K

    2008-12-01

    Segmentation of volumetric computed tomography (CT) datasets facilitates evaluation of 3D CT angiography renderings, particularly with maximum intensity projection displays. This manuscript describes a novel automated bone editing program that uses an interactive watershed transform (IWT) technique to rapidly extract the skeletal structures from the volume. Advantages of this tool include efficient segmentation of large datasets with minimal need for correction. In the first of this two-part series, the principles of the IWT technique are reviewed, followed by a discussion of clinical utility based on our experience.
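
    Outside the proprietary IWT tool, a marker-based watershed of the same general flavour can be assembled from open-source components, as in the rough sketch below; the Hounsfield-unit thresholds, smoothing, and file names are assumptions for illustration and do not reproduce the IWT technique.

      # Generic marker-based watershed sketch for separating bone from
      # vessel-range voxels in a CT angiography volume.
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.segmentation import watershed

      volume_hu = np.load("ct_volume_hu.npy")               # placeholder CTA volume in HU

      markers = np.zeros(volume_hu.shape, dtype=np.int32)
      markers[volume_hu > 450] = 1                          # confident bone seeds (assumed)
      markers[(volume_hu > 150) & (volume_hu < 350)] = 2    # contrast-filled vessel seeds (assumed)

      # Flood the gradient-magnitude image from the seeds.
      gradient = ndi.gaussian_gradient_magnitude(volume_hu.astype(float), sigma=1.0)
      labels = watershed(gradient, markers)

      bone_removed = np.where(labels == 1, volume_hu.min(), volume_hu)
      np.save("ct_volume_bone_removed.npy", bone_removed)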

  14. Optimal installation locations for automated external defibrillators in Taipei 7-Eleven stores: using GIS and a genetic algorithm with a new stirring operator.

    PubMed

    Huang, Chung-Yuan; Wen, Tzai-Hung

    2014-01-01

    Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.

  15. Self-supervised ARTMAP.

    PubMed

    Amis, Gregory P; Carpenter, Gail A

    2010-03-01

    Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.eu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  16. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    NASA Astrophysics Data System (ADS)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model shows the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  17. Validity of automated x-ray photoelectron spectroscopy algorithm to determine the amount of substance and the depth distribution of atoms

    SciTech Connect

    Tougaard, Sven

    2013-05-15

    The author reports a systematic study of the range of validity of a previously developed algorithm for automated x-ray photoelectron spectroscopy analysis, which takes into account the variation in both peak intensity and the intensity in the background of inelastically scattered electrons. This test was done by first simulating spectra for the Au4d peak with gold atoms distributed in the form of a wide range of nanostructures, which includes overlayers with varying thickness, a 5 Å layer of atoms buried at varying depths and a substrate covered with an overlayer of varying thickness. Next, the algorithm was applied to analyze these spectra. The algorithm determines the number of atoms within the outermost 3λ of the surface. This amount of substance is denoted AOS_3λ (where λ is the electron inelastic mean free path). In general the determined AOS_3λ is found to be accurate to within ≈10-20% depending on the depth distribution of the atoms. The algorithm also determines a characteristic length L, which was found to give unambiguous information on the depth distribution of the atoms for practically all studied cases. A set of rules for this parameter, which relates the value of L to the depths where the atoms are distributed, was tested, and these rules were found to be generally valid with only a few exceptions. The results were found to be rather independent of the spectral energy range (from 20 to 40 eV below the peak energy) used in the analysis.

  18. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  19. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
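
    As a hedged illustration of the non-structured GMM branch described above (not the authors' full pipeline, which also includes a tissue-probability postprocess), the sketch below clusters multiparametric voxel intensities with scikit-learn's GaussianMixture; the choice of four input sequences and five classes is an assumption for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_voxel_clustering(volumes, n_classes=5, mask=None, seed=0):
    """Cluster multiparametric MR intensities voxel-by-voxel with a GMM.

    volumes : list of co-registered 3-D arrays (e.g. T1, T1c, T2, FLAIR).
    mask    : optional boolean brain mask; voxels outside it are ignored.
    Returns a label volume (0 = outside the mask).
    """
    shape = volumes[0].shape
    if mask is None:
        mask = np.ones(shape, dtype=bool)
    # One feature vector per brain voxel: its intensity in every sequence.
    features = np.stack([v[mask] for v in volumes], axis=1)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=seed).fit(features)
    labels = np.zeros(shape, dtype=int)
    labels[mask] = gmm.predict(features) + 1   # reserve 0 for background
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fake = [rng.normal(size=(16, 16, 16)) for _ in range(4)]
    print(np.unique(gmm_voxel_clustering(fake, n_classes=3)))
```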

  20. Volumetric analysis of lung nodules in computed tomography (CT): comparison of two different segmentation algorithm softwares and two different reconstruction filters on automated volume calculation.

    PubMed

    Christe, Andreas; Brönnimann, Alain; Vock, Peter

    2014-02-01

    A precise detection of volume change allows for better estimating the biological behavior of the lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological processes and give additional confidence to the radiologists. To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) in CT volumetric measurement and to analyze the effect of soft (B30) and hard reconstruction filter (B70) on automated volume measurement. Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 × 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two different postprocessing algorithms based on two reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between two methods were used for comparison. At soft reconstruction filters the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those by the LungCARE® system. The VME% was 42.2% with a limit of agreement between -53.9% and 138.4%. The volume measurement with soft filters (B30) was significantly larger than with hard filters (B70); 11.2% for LMS and 1.6% for LungCARE®, respectively (both with P < 0.05). LMS measured greater volumes with both filters, 13.6% for soft and 3.8% for hard filters, respectively (P < 0.01 and P > 0.05). There is a substantial inter-software (LMS/LungCARE®) as well as intra-software variability (B30/B70) in lung nodule volume measurement; therefore, it is mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.
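
    The comparison above hinges on the mean relative volume difference and the limits of agreement between two readings of the same nodules. A hedged minimal computation of those Bland-Altman-style statistics is sketched below; the function name and the example volumes are illustrative, not the study's data.

```python
import numpy as np

def relative_volume_agreement(vol_a, vol_b):
    """Mean relative volume difference (%) between methods A and B and the
    95% limits of agreement (mean ± 1.96 SD of the relative differences)."""
    vol_a, vol_b = np.asarray(vol_a, float), np.asarray(vol_b, float)
    rel_diff = 100.0 * (vol_a - vol_b) / ((vol_a + vol_b) / 2.0)
    mean, sd = rel_diff.mean(), rel_diff.std(ddof=1)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

if __name__ == "__main__":
    # Hypothetical nodule volumes (mm^3) measured by two segmentation tools.
    tool_a = [120, 340, 95, 510, 230]
    tool_b = [100, 300, 90, 450, 200]
    mean, (lo, hi) = relative_volume_agreement(tool_a, tool_b)
    print(f"mean difference {mean:.1f}%, limits of agreement [{lo:.1f}, {hi:.1f}]%")
```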

  1. MDL constrained 3-D grayscale skeletonization algorithm for automated extraction of dendrites and spines from fluorescence confocal images.

    PubMed

    Yuan, Xiaosong; Trachtenberg, Joshua T; Potter, Steve M; Roysam, Badrinath

    2009-12-01

    This paper presents a method for improved automatic delineation of dendrites and spines from three-dimensional (3-D) images of neurons acquired by confocal or multi-photon fluorescence microscopy. The core advance presented here is a direct grayscale skeletonization algorithm that is constrained by a structural complexity penalty using the minimum description length (MDL) principle, and additional neuroanatomy-specific constraints. The 3-D skeleton is extracted directly from the grayscale image data, avoiding errors introduced by image binarization. The MDL method achieves a practical tradeoff between the complexity of the skeleton and its coverage of the fluorescence signal. Additional advances include the use of 3-D spline smoothing of dendrites to improve spine detection, and graph-theoretic algorithms to explore and extract the dendritic structure from the grayscale skeleton using an intensity-weighted minimum spanning tree (IW-MST) algorithm. This algorithm was evaluated on 30 datasets organized in 8 groups from multiple laboratories. Spines were detected with false negative rates less than 10% on most datasets (the average is 7.1%), and the average false positive rate was 11.8%. The software is available in open source form.
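
    One step described above is the extraction of the dendritic structure with an intensity-weighted minimum spanning tree. A hedged sketch of that idea follows: candidate skeleton points are linked by an MST whose edge costs penalize dim connections; the specific cost (distance divided by mean endpoint intensity) and the neighborhood radius are plausible stand-ins, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def intensity_weighted_mst(points, intensities, max_link=5.0):
    """Connect candidate skeleton points by an MST whose edge costs are
    distance divided by mean endpoint intensity (brighter paths are cheaper)."""
    tree = cKDTree(points)
    pairs = np.array(sorted(tree.query_pairs(r=max_link)))
    if pairs.size == 0:
        raise ValueError("no candidate edges within max_link")
    d = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
    w = d / (0.5 * (intensities[pairs[:, 0]] + intensities[pairs[:, 1]]) + 1e-6)
    n = len(points)
    graph = coo_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    return minimum_spanning_tree(graph)   # sparse matrix holding the kept edges

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.random((50, 3)) * 10          # candidate skeleton points (voxels)
    inten = rng.random(50) + 0.1            # fluorescence intensity at each point
    mst = intensity_weighted_mst(pts, inten)
    print(f"{mst.nnz} edges kept in the skeleton tree")
```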

  2. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
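
    The abstract above identifies features by fitting geometrical templates with the downhill simplex method. As a hedged illustration (not the system's actual templates), the sketch below fits an idealized Gaussian-dip indent template to a 1-D surface profile using SciPy's Nelder-Mead (downhill simplex) optimizer; the template shape, parameters, and synthetic profile are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def template(x, center, depth, width):
    """Idealized indent template: a Gaussian dip in an otherwise flat profile."""
    return -depth * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def fit_indent(x, profile, guess=(0.0, 1.0, 1.0)):
    """Fit the template to a measured profile by minimizing the squared error
    with the downhill simplex (Nelder-Mead) method."""
    cost = lambda p: np.sum((profile - template(x, *p)) ** 2)
    res = minimize(cost, guess, method="Nelder-Mead")
    return res.x, res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(13)
    x = np.linspace(-5, 5, 400)                                  # mm along the wire
    truth = template(x, center=0.8, depth=0.05, width=0.6)       # 50 µm deep indent
    noisy = truth + rng.normal(0, 0.002, x.size)                 # sensor noise
    params, err = fit_indent(x, noisy, guess=(0.0, 0.03, 1.0))
    print("fitted (center, depth, width):", params.round(3), "residual:", round(err, 5))
```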

  3. Quantification of accuracy of the automated nonlinear image matching and anatomical labeling (ANIMAL) nonlinear registration algorithm for 4D CT images of lung.

    PubMed

    Heath, E; Collins, D L; Keall, P J; Dong, L; Seuntjens, J

    2007-11-01

    The performance of the ANIMAL (Automated Nonlinear Image Matching and Anatomical Labeling) nonlinear registration algorithm for registration of thoracic 4D CT images was investigated. The algorithm was modified to minimize the incidence of deformation vector discontinuities that occur during the registration of lung images. Registrations were performed between the inhale and exhale phases for five patients. The registration accuracy was quantified by the cross-correlation of transformed and target images and the distance to agreement (DTA) measured based on anatomical landmarks and triangulated surfaces constructed from manual contours. On average, the vector DTA between transformed and target landmarks was 1.6 mm. Comparing transformed and target 3D triangulated surfaces derived from planning contours, the average target volume (GTV) center-of-mass shift was 2.0 mm and the 3D DTA was 1.6 mm. An average DTA of 1.8 mm was obtained for all planning structures. All DTA metrics were comparable to interobserver uncertainties established for landmark identification and manual contouring.

  4. Motility mapping as evaluation tool for bowel motility: initial results on the development of an automated color-coding algorithm in cine MRI.

    PubMed

    Hahnemann, Maria L; Nensa, Felix; Kinner, Sonja; Gerken, Guido; Lauenstein, Thomas C

    2015-02-01

    To develop and implement an automated algorithm for visualizing and quantifying bowel motility using cine magnetic resonance imaging (MRI). Four healthy volunteers as well as eight patients with suspected or diagnosed inflammatory bowel disease (IBD) underwent MR examinations on a 1.5T scanner. Coronal T2-weighted cine MR images were acquired in healthy volunteers without and with intravenous (i.v.) administration of butylscopolamine. In patients with IBD, cine MRI sequences were collected prior to standard bowel MRI. Bowel motility was assessed using an optical flow algorithm. The resulting motion vector magnitudes were presented as bowel motility maps. Motility changes after i.v. administration of butylscopolamine were measured in healthy volunteers. Inflamed bowel segments in patients were correlated with motility map findings. The acquisition of bowel motility maps was feasible in all subjects examined. In healthy volunteers butylscopolamine led to a quantitatively measurable decrease in bowel motility (mean decrease of 59%; P = 0.171). In patients with IBD, visualization of bowel movement by color-coded motility mapping allowed for the detection of segments with abnormal bowel motility. Inflamed bowel segments could be identified by exhibiting a decreased motility. Our method is a feasible and promising approach for the assessment of bowel motility disorders. © 2014 Wiley Periodicals, Inc.
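
    The motility maps above are built from optical-flow motion vector magnitudes. As a hedged sketch of the general idea only (the paper's specific optical-flow formulation and parameters are not reproduced), the code below uses OpenCV's dense Farneback flow to turn consecutive cine frames into an average motion-magnitude map; it assumes the `opencv-python` package and 8-bit input frames.

```python
import numpy as np
import cv2  # assumes the opencv-python package is installed

def motility_map(frames):
    """Average optical-flow magnitude per pixel over a cine MRI series.

    frames : sequence of 2-D uint8 images (consecutive time points).
    Returns a float map where large values indicate strong bowel motion.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)      # per-pixel displacement magnitude
    return acc / (len(frames) - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    series = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(5)]
    print("mean motility:", motility_map(series).mean())
```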

  5. SWIFT—Scalable Clustering for Automated Identification of Rare Cell Populations in Large, High-Dimensional Flow Cytometry Datasets, Part 1: Algorithm Design

    PubMed Central

    Naim, Iftekhar; Datta, Suprakash; Rebhahn, Jonathan; Cavenaugh, James S; Mosmann, Tim R; Sharma, Gaurav

    2014-01-01

    We present a model-based clustering method, SWIFT (Scalable Weighted Iterative Flow-clustering Technique), for digesting high-dimensional large-sized datasets obtained via modern flow cytometry into more compact representations that are well-suited for further automated or manual analysis. Key attributes of the method include the following: (a) the analysis is conducted in the multidimensional space retaining the semantics of the data, (b) an iterative weighted sampling procedure is utilized to maintain modest computational complexity and to retain discrimination of extremely small subpopulations (hundreds of cells from datasets containing tens of millions), and (c) a splitting and merging procedure is incorporated in the algorithm to preserve distinguishability between biologically distinct populations, while still providing a significant compaction relative to the original data. This article presents a detailed algorithmic description of SWIFT, outlining the application-driven motivations for the different design choices, a discussion of computational complexity of the different steps, and results obtained with SWIFT for synthetic data and relatively simple experimental data that allow validation of the desirable attributes. A companion paper (Part 2) highlights the use of SWIFT, in combination with additional computational tools, for more challenging biological problems. © 2014 The Authors. Published by Wiley Periodicals Inc. PMID:24677621

  6. SWIFT-scalable clustering for automated identification of rare cell populations in large, high-dimensional flow cytometry datasets, part 1: algorithm design.

    PubMed

    Naim, Iftekhar; Datta, Suprakash; Rebhahn, Jonathan; Cavenaugh, James S; Mosmann, Tim R; Sharma, Gaurav

    2014-05-01

    We present a model-based clustering method, SWIFT (Scalable Weighted Iterative Flow-clustering Technique), for digesting high-dimensional large-sized datasets obtained via modern flow cytometry into more compact representations that are well-suited for further automated or manual analysis. Key attributes of the method include the following: (a) the analysis is conducted in the multidimensional space retaining the semantics of the data, (b) an iterative weighted sampling procedure is utilized to maintain modest computational complexity and to retain discrimination of extremely small subpopulations (hundreds of cells from datasets containing tens of millions), and (c) a splitting and merging procedure is incorporated in the algorithm to preserve distinguishability between biologically distinct populations, while still providing a significant compaction relative to the original data. This article presents a detailed algorithmic description of SWIFT, outlining the application-driven motivations for the different design choices, a discussion of computational complexity of the different steps, and results obtained with SWIFT for synthetic data and relatively simple experimental data that allow validation of the desirable attributes. A companion paper (Part 2) highlights the use of SWIFT, in combination with additional computational tools, for more challenging biological problems. © 2014 The Authors. Published by Wiley Periodicals Inc. on behalf of the International Society for Advancement of Cytometry.

  7. An efficient semi-supervised classification approach for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Tan, Kun; Li, Erzhu; Du, Qian; Du, Peijun

    2014-11-01

    In this paper, an efficient semi-supervised support vector machine (SVM) with segmentation-based ensemble (S2SVMSE) algorithm is proposed for hyperspectral image classification. The algorithm utilizes spatial information extracted by a segmentation algorithm for unlabeled sample selection. The unlabeled samples that are the most similar to the labeled ones are found, and the candidate set of unlabeled samples to be chosen is enlarged to the corresponding image segments. To ensure that the finally selected unlabeled samples are spatially widely distributed and less correlated, random selection is conducted with flexibility in the number of unlabeled samples actually participating in semi-supervised learning. Classification is also refined through a spectral-spatial feature ensemble technique. The proposed method with very limited labeled training samples is evaluated via experiments with two real hyperspectral images, where it outperforms the fully supervised SVM and the semi-supervised version without the spectral-spatial ensemble.
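
    S2SVMSE combines segmentation-guided unlabeled sample selection with a spectral-spatial ensemble; as a hedged, much simpler stand-in that only shows the basic semi-supervised mechanism of absorbing confidently predicted unlabeled pixels, the sketch below wraps an SVM in scikit-learn's SelfTrainingClassifier. This is not the authors' algorithm, and the confidence threshold is an assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

def semi_supervised_svm(X_labeled, y_labeled, X_unlabeled, threshold=0.9):
    """Self-training SVM: unlabeled samples predicted with probability
    >= threshold are progressively added to the training set."""
    X = np.vstack([X_labeled, X_unlabeled])
    # Unlabeled samples are marked with -1, as scikit-learn expects.
    y = np.concatenate([y_labeled, -np.ones(len(X_unlabeled), dtype=int)])
    base = SVC(kernel="rbf", probability=True, gamma="scale")
    return SelfTrainingClassifier(base, threshold=threshold).fit(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    Xl = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
    yl = np.array([0] * 10 + [1] * 10)
    Xu = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
    clf = semi_supervised_svm(Xl, yl, Xu)
    print("predicted classes:", np.unique(clf.predict(Xu)))
```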

  8. Two Approaches to Clinical Supervision.

    ERIC Educational Resources Information Center

    Anderson, Eugene M.

    Criteria are established for a definition of "clinical supervision" and the effectiveness of such supervisory programs in a student teaching context are considered. Two differing genres of clinical supervision are constructed: "supervision by pattern analysis" is contrasted with "supervision by performance objectives." An outline of procedural…

  9. Semi-supervised and unsupervised extreme learning machines.

    PubMed

    Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng

    2014-12-01

    Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit the learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. An empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
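
    The semi-supervised and unsupervised variants above build on the basic supervised ELM: a fixed random hidden layer followed by a closed-form regularized least-squares solve for the output weights. A hedged sketch of that backbone (without the manifold-regularization Laplacian term the paper adds) follows; sizes and the regularization constant are illustrative.

```python
import numpy as np

class ELM:
    """Basic single-hidden-layer Extreme Learning Machine: hidden weights are
    random and fixed; only the linear output layer is learned."""

    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        Y = np.eye(y.max() + 1)[y]                     # one-hot targets
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ Y)        # ridge solve for output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
    y = np.array([0] * 50 + [1] * 50)
    print("training accuracy:", (ELM().fit(X, y).predict(X) == y).mean())
```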

  10. The implementation of an automated tracking algorithm for the track detection of migratory anticyclones affecting the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hatzaki, Maria; Flocas, Elena A.; Simmonds, Ian; Kouroutzoglou, John; Keay, Kevin; Rudeva, Irina

    2013-04-01

    Migratory cyclones and anticyclones mainly account for the short-term weather variations in extra-tropical regions. In contrast to cyclones, which have drawn major scientific attention due to their direct link to active weather and precipitation, climatological studies on anticyclones are limited, even though they are also associated with extreme weather phenomena and play an important role in global and regional climate. This is especially true for the Mediterranean, a region particularly vulnerable to climate change, and the little research which has been done is essentially confined to the manual analysis of synoptic charts. For the construction of a comprehensive climatology of migratory anticyclonic systems in the Mediterranean using an objective methodology, the Melbourne University automatic tracking algorithm is applied to the ERA-Interim reanalysis mean sea level pressure database. The algorithm's reliability in accurately capturing the weather patterns and synoptic climatology of the transient activity has been widely proven. This algorithm has been extensively applied for cyclone studies worldwide and it has also been successfully applied to the Mediterranean, though its use for anticyclone tracking is limited to the Southern Hemisphere. In this study the performance of the tracking algorithm under different data resolutions and different choices of parameter settings in the scheme is examined. Our focus is on the appropriate modification of the algorithm in order to efficiently capture the individual characteristics of the anticyclonic tracks in the Mediterranean, a closed basin with complex topography. We show that the number of the detected anticyclonic centers and the resulting tracks largely depend upon the data resolution and the search radius. We also find that different scale anticyclones and secondary centers that lie within larger anticyclone structures can be adequately represented; this is important, since the extensions of major

  11. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation.

    PubMed

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-21

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements

  12. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    NASA Astrophysics Data System (ADS)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume

  13. Supervised classification of solar features using prior information

    NASA Astrophysics Data System (ADS)

    De Visscher, Ruben; Delouille, Véronique; Dupont, Pierre; Deledalle, Charles-Alban

    2015-10-01

    Context: The Sun as seen by Extreme Ultraviolet (EUV) telescopes exhibits a variety of large-scale structures. Of particular interest for space-weather applications is the extraction of active regions (AR) and coronal holes (CH). The next generation of GOES-R satellites will provide continuous monitoring of the solar corona in six EUV bandpasses that are similar to the ones provided by the SDO-AIA EUV telescope since May 2010. Supervised segmentations of EUV images that are consistent with manual segmentations by, for example, space-weather forecasters help in extracting useful information from the raw data. Aims: We present a supervised segmentation method that is based on the Maximum A Posteriori rule. Our method allows integrating both manually segmented images and other types of information. It is applied on SDO-AIA images to segment them into AR, CH, and the remaining Quiet Sun (QS) part. Methods: A Bayesian classifier is applied on training masks provided by the user. The noise structure in EUV images is non-trivial, and this suggests the use of a non-parametric kernel density estimator to fit the intensity distribution within each class. Under the Naive Bayes assumption we can add information such as the latitude distribution and total coverage of each class in a consistent manner. This information can be prescribed by an expert or estimated with an Expectation-Maximization algorithm. Results: The segmentation masks are in line with the training masks given as input and show consistency over time. Introduction of additional information besides pixel intensity improves upon the quality of the final segmentation. Conclusions: Such a tool can aid in building automated segmentations that are consistent with some 'ground truth' defined by the users.
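
    The core of the method above is a Maximum A Posteriori rule with per-class kernel density estimates of pixel intensity and class priors. A hedged minimal sketch of that rule follows; the prior values, class intensity distributions, and helper names are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_map_classifier(intensities_by_class, priors):
    """Fit one KDE per class (e.g. AR, CH, QS) and return a MAP pixel classifier."""
    kdes = {c: gaussian_kde(v) for c, v in intensities_by_class.items()}

    def classify(pixel_values):
        pixel_values = np.atleast_1d(pixel_values)
        # Log posterior (up to a constant): log prior + log KDE likelihood.
        scores = np.stack([np.log(priors[c]) + np.log(kdes[c](pixel_values) + 1e-12)
                           for c in kdes])
        classes = list(kdes)
        return [classes[i] for i in scores.argmax(axis=0)]
    return classify

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    train = {"AR": rng.normal(500, 50, 300),    # bright active regions
             "QS": rng.normal(200, 40, 300),    # quiet Sun
             "CH": rng.normal(80, 20, 300)}     # dark coronal holes
    priors = {"AR": 0.05, "QS": 0.85, "CH": 0.10}   # illustrative coverage priors
    classify = fit_map_classifier(train, priors)
    print(classify([520, 210, 75]))
```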

  14. A semi-automated 2D/3D marker-based registration algorithm modelling prostate shrinkage during radiotherapy for prostate cancer.

    PubMed

    Budiharto, Tom; Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Verstraete, Jan; Heuvel, Frank Van den; Depuydt, Tom; Oyen, Raymond; Haustermans, Karin

    2009-03-01

    Currently, most available patient alignment tools based on implanted markers use manual marker matching and rigid registration transformations to measure the needed translational shifts. To quantify the particular effect of prostate gland shrinkage, implanted gold markers were tracked during a course of radiotherapy including an isotropic scaling factor to model prostate shrinkage. Eight patients with prostate cancer had gold markers implanted transrectally and seven were treated with (neo) adjuvant androgen deprivation therapy. After patient alignment to skin tattoos, orthogonal electronic portal images (EPIs) were taken. A semi-automated 2D/3D marker-based registration was performed to calculate the necessary couch shifts. The registration consists of a rigid transformation combined with an isotropic scaling to model prostate shrinkage. The inclusion of an isotropic shrinkage model in the registration algorithm cancelled the corresponding increase in registration error. The mean scaling factor was 0.89+/-0.09. For all but two patients, a decrease of the isotropic scaling factor during treatment was observed. However, there was almost no difference in the translation offset between the manual matching of the EPIs to the digitally reconstructed radiographs and the semi-automated 2D/3D registration. A decrease in the intermarker distance was found correlating with prostate shrinkage rather than with random marker migration. Inclusion of shrinkage in the registration process reduces registration errors during a course of radiotherapy. Nevertheless, this did not lead to a clinically significant change in the proposed table translations when compared to translations obtained with manual marker matching without a scaling correction.

  15. Multi-objective genetic algorithm for the automated planning of a wireless sensor network to monitor a critical facility

    NASA Astrophysics Data System (ADS)

    Jourdan, Damien B.; de Weck, Olivier L.

    2004-09-01

    This paper examines the optimal placement of nodes for a Wireless Sensor Network (WSN) designed to monitor a critical facility in a hostile region. The sensors are dropped from an aircraft, and they must be connected (directly or via hops) to a High Energy Communication Node (HECN), which serves as a relay from the ground to a satellite or a high-altitude aircraft. The sensors are assumed to have fixed communication and sensing ranges. The facility is modeled as circular and served by two roads. This simple model is used to benchmark the performance of the optimizer (a Multi-Objective Genetic Algorithm, or MOGA) in creating WSN designs that provide clear assessments of movements in and out of the facility, while minimizing both the likelihood of sensors being discovered and the number of sensors to be dropped. The algorithm is also tested on two other scenarios; in the first one the WSN must detect movements in and out of a circular area, and in the second one it must cover uniformly a square region. The MOGA is shown again to perform well on those scenarios, which shows its flexibility and possible application to more complex mission scenarios with multiple and diverse targets of observation.

  16. Datamining the NOAO NVO Portal: Automated Image Classification

    NASA Astrophysics Data System (ADS)

    Vaswani, Pooja; Miller, C. J.; Barg, I.; Smith, R. C.

    2006-12-01

    Image metadata describes the properties of an image and can be used for classification, e.g., galactic, extra-galactic, solar system, standard star, among others. We are developing a data mining application to automate such a classification process based on supervised learning using decision trees. We are applying this application to the NOAO NVO Portal (www.nvo.noao.edu). The core concepts of Quinlan's C4.5 decision tree induction algorithm are used to train, build a decision tree, and generate classification rules. These rules are then used to classify previously unseen image metadata. We utilize a collection of decision trees instead of a single classifier and average the classification probabilities. The concept of "Bagging" was used to create the collection of classifiers. The classification algorithm also facilitates the addition of weights to the probability estimate of the classes when prior knowledge of the class distribution is known.
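
    A hedged sketch of the bagged decision-tree idea described above, using scikit-learn in place of the C4.5 implementation the project used: a bag of trees is trained on metadata features and the class probabilities are averaged across trees. The feature names and labels are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

def train_metadata_classifier(X, y, n_trees=25, seed=0):
    """Bag of decision trees; predict_proba averages the trees' votes,
    mirroring the 'collection of classifiers' idea in the abstract."""
    model = BaggingClassifier(DecisionTreeClassifier(),
                              n_estimators=n_trees, random_state=seed)
    return model.fit(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Illustrative metadata features: exposure time, airmass, filter id, declination.
    X = rng.random((200, 4))
    y = (X[:, 0] + X[:, 2] > 1.0).astype(int)   # stand-in class labels
    clf = train_metadata_classifier(X, y)
    print("averaged class probabilities:", clf.predict_proba(X[:3]).round(2))
```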

  17. Automated wholeslide analysis of multiplex-brightfield IHC images for cancer cells and carcinoma-associated fibroblasts

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg

    2017-03-01

    Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm reported the distance to the nearest neighbor in the populations of tumor cells and activated-stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R2=0.98) to the ground-truth count. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R2=0.93 whereas the algorithm reproduced the pathologists' areas with R2=0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.
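
    The whole-slide spatial readout described above is a nearest-neighbor distance between two cell populations. A hedged minimal computation using a k-d tree is sketched below; the coordinate arrays and units are illustrative, not data from the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_distances(tumor_xy, fibroblast_xy):
    """For each tumour-cell centroid, distance (in the same units as the
    coordinates, e.g. microns) to the nearest activated fibroblast."""
    tree = cKDTree(fibroblast_xy)
    dists, _ = tree.query(tumor_xy, k=1)
    return dists

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    tumor = rng.random((500, 2)) * 1000       # CK+ cell centroids (µm), illustrative
    caf = rng.random((200, 2)) * 1000         # FAP+ fibroblast centroids (µm)
    d = nearest_neighbor_distances(tumor, caf)
    print(f"median tumour-to-CAF distance: {np.median(d):.1f} µm")
```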

  18. Automated quantification of cerebral edema following hemispheric infarction: Application of a machine-learning algorithm to evaluate CSF shifts on serial head CTs.

    PubMed

    Chen, Yasheng; Dhar, Rajat; Heitsch, Laura; Ford, Andria; Fernandez-Cadenas, Israel; Carrera, Caty; Montaner, Joan; Lin, Weili; Shen, Dinggang; An, Hongyu; Lee, Jin-Moo

    2016-01-01

    Although cerebral edema is a major cause of death and deterioration following hemispheric stroke, there remains no validated biomarker that captures the full spectrum of this critical complication. We recently demonstrated that reduction in intracranial cerebrospinal fluid (CSF) volume (ΔCSF) on serial computed tomography (CT) scans provides an accurate measure of cerebral edema severity, which may aid in early triaging of stroke patients for craniectomy. However, application of such a volumetric approach would be too cumbersome to perform manually on serial scans in a real-world setting. We developed and validated an automated technique for CSF segmentation via integration of random forest (RF) based machine learning with geodesic active contour (GAC) segmentation. The proposed RF + GAC approach was compared to conventional Hounsfield Unit (HU) thresholding and RF segmentation methods using the Dice similarity coefficient (DSC) and the correlation of volumetric measurements, with manual delineation serving as the ground truth. CSF spaces were outlined on scans performed at baseline (<6 h after stroke onset) and early follow-up (FU) (closest to 24 h) in 38 acute ischemic stroke patients. RF performed significantly better than optimized HU thresholding (p < 10^-4 in baseline and p < 10^-5 in FU) and RF + GAC performed significantly better than RF (p < 10^-3 in baseline and p < 10^-5 in FU). Pearson correlation coefficients between the automatically detected ΔCSF and the ground truth were r = 0.178 (p = 0.285), r = 0.876 (p < 10^-6) and r = 0.879 (p < 10^-6) for thresholding, RF and RF + GAC, respectively, with a slope closer to the line of identity in RF + GAC. When we applied the algorithm trained from images of one stroke center to segment CTs from another center, similar findings held. In conclusion, we have developed and validated an accurate automated approach to segment CSF and calculate its shifts on
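
    A hedged sketch of the random-forest stage only (the geodesic-active-contour refinement is omitted): a voxel-wise forest trained on simple intensity/context features to flag CSF. The feature choices, filter size, and synthetic data are assumptions for illustration, not the study's actual feature set.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(ct):
    """Per-voxel features: raw HU value, local mean, and local contrast."""
    local_mean = uniform_filter(ct.astype(float), size=3)
    return np.stack([ct.ravel(), local_mean.ravel(),
                     (ct - local_mean).ravel()], axis=1)

def train_csf_forest(ct_volume, csf_mask, n_trees=100, seed=0):
    """Train a random forest to label CSF voxels from a CT volume and a
    manually delineated CSF mask (the ground truth in the abstract)."""
    X = voxel_features(ct_volume)
    y = csf_mask.ravel().astype(int)
    return RandomForestClassifier(n_estimators=n_trees, n_jobs=-1,
                                  random_state=seed).fit(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    ct = rng.normal(30, 10, (32, 32, 16))       # synthetic brain-like HU values
    mask = ct < 20                              # pretend low-HU voxels are CSF
    forest = train_csf_forest(ct, mask)
    pred = forest.predict(voxel_features(ct)).reshape(ct.shape)
    print("predicted CSF fraction:", pred.mean().round(3))
```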

  19. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms

    SciTech Connect

    Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark

    2012-06-15

    Purpose: The purpose of this work is to develop a framework for the inverse problem for radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min.

  20. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms.

    PubMed

    Ghobadi, Kimia; Ghaffari, Hamid R; Aleman, Dionne M; Jaffray, David A; Ruschin, Mark

    2012-06-01

    The purpose of this work is to develop a framework for the inverse problem for radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V100) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V100), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min. PFX inverse planning can be performed using
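
    The sector duration optimization above is solved with a projected gradient algorithm over nonnegative beam-on times. A hedged toy sketch of that idea is given below: minimize a least-squares dose misfit subject to nonnegativity by taking gradient steps and projecting onto the feasible set. The dose-rate matrix here is random, not a PFX beam model, and the objective is a simplification of a clinical SDO formulation.

```python
import numpy as np

def projected_gradient_nnls(A, d, n_iters=500):
    """Minimize ||A t - d||^2 subject to t >= 0 (nonnegative beam-on times)
    with projected gradient descent."""
    t = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # conservative step from the spectral norm
    for _ in range(n_iters):
        grad = A.T @ (A @ t - d)                  # (half the) gradient of the misfit
        t = np.maximum(t - step * grad, 0.0)      # gradient step, then project onto t >= 0
    return t

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    A = rng.random((300, 24))    # dose per unit time: voxels x (sector, collimator) pairs
    d = rng.random(300) * 10     # prescribed voxel doses (illustrative)
    t = projected_gradient_nnls(A, d)
    print("nonnegative durations found:", bool((t >= 0).all()),
          "residual:", np.linalg.norm(A @ t - d).round(2))
```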

  1. Supervised Classification of Underwater Optical Imagery for Improved Detection and Characterization of Underwater Military Munitions

    DTIC Science & Technology

    2015-06-01

    Final report (contract W912HQ-14-P-0012): Supervised Classification of Underwater Optical Imagery for Improved Detection and Characterization of Underwater Military Munitions. A supervised image classification algorithm was tested for identifying underwater military munitions (UWMM) and seabed types; an extension to the algorithm using seabed microtopography is also described.

  2. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    PubMed

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.

  3. Selective identification of different brachytherapy sources, ferromagnetic seeds, and fiducials in the prostate using an automated seed sorting algorithm.

    PubMed

    Davis, Brian J; Brinkmann, Debra H; Kruse, Jon J; Herman, Michael G; LaJoie, Wayne N; Schwartz, David J; Pisansky, Thomas M; Kline, Robert W

    2004-01-01

    Routine permanent prostate brachytherapy (PPB) includes CT-based postimplant dosimetry (PID). A method of identifying different source types from CT data in the same implant volume is described. A previously described automatic method for seed localization using CT data is used in this study. Two cases were analyzed: a PPB case with (103)Pd followed by salvage (125)I implantation, both performed at another institution, and a cadaver case where 4 different seed types, including ferromagnetic seeds, and fiducials were implanted. Automatic segregation of different seed types with minimal manual correction is demonstrated using the described localization algorithm. The process is confirmed accurate by comparison of plain film radiographs to CT data and digitally reconstructed radiographs. Unique identification of different source types, including PPB seeds, fiducial markers, and ferromagnetic seeds in permanent implants is possible and permits dosimetric analyses that are spatially coincident.

  4. Comparison of K-Means and Fuzzy c-Means Algorithm Performance for Automated Determination of the Arterial Input Function

    PubMed Central

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection. PMID:24503700

  5. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    PubMed

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or nonbifurcation can be an important step to improve automated methods for atherosclerotic plaques quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection; feature extraction; and classification, providing a lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. This classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed in sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference of lumen area of 0.11 mm² compared with manual segmentation, and the AdaBoost classifier presented the best result reaching an F-measure score of 97.5% using 104 features.
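
    A hedged sketch of the final classification stage only: an AdaBoost classifier over per-frame lumen features labelling frames as bifurcation or non-bifurcation. The orthogonal-least-squares feature selection step and the real feature definitions are not reproduced; features and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_bifurcation_classifier(features, labels, seed=0):
    """AdaBoost over per-frame lumen features (area, eccentricity, ...);
    labels: 1 = bifurcation frame, 0 = non-bifurcation frame."""
    return AdaBoostClassifier(n_estimators=200, random_state=seed).fit(features, labels)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    X = rng.random((400, 10))                  # illustrative geometrical features
    y = (X[:, 0] * X[:, 3] > 0.4).astype(int)  # stand-in bifurcation labels
    clf = train_bifurcation_classifier(X, y)
    print("training accuracy:", (clf.predict(X) == y).mean().round(3))
```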

  6. Semi-Supervised Two Stage Classification Technique.

    DTIC Science & Technology

    1987-07-31

    ...and a cluster as would be required for a minimum distance to mean rule. The Mahalanobis distance (Duda and Hart, 1973), however, appears to be a metric...of the cluster, by including the inverse of the covariance matrix. The Mahalanobis distance measure is a complex calculation and defeats one of the...Contents excerpt: Automated Classification; Supervised and Unsupervised Classifications; Classification Decision Rules; Minimum Distance to Mean.
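
    The snippet above contrasts the minimum-distance-to-mean rule with the Mahalanobis distance, which weights the sample-to-cluster distance by the inverse of the cluster covariance matrix. A hedged minimal computation of that distance follows; the example cluster is synthetic.

```python
import numpy as np

def mahalanobis_distance(x, cluster):
    """Mahalanobis distance of sample x to a cluster of points, using the
    cluster mean and the inverse of its covariance matrix."""
    cluster = np.asarray(cluster, float)
    mean = cluster.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cluster, rowvar=False))
    diff = np.asarray(x, float) - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

if __name__ == "__main__":
    rng = np.random.default_rng(12)
    cluster = rng.multivariate_normal([0, 0], [[2, 0.5], [0.5, 1]], size=200)
    print("distance of (1, 1) to the cluster:", round(mahalanobis_distance([1, 1], cluster), 2))
```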

  7. Automated Overnight Closed-Loop Control Using a Proportional-Integral-Derivative Algorithm with Insulin Feedback in Children and Adolescents with Type 1 Diabetes at Diabetes Camp.

    PubMed

    Ly, Trang T; Keenan, D Barry; Roy, Anirban; Han, Jino; Grosman, Benyamin; Cantwell, Martin; Kurtz, Natalie; von Eyben, Rie; Clinton, Paula; Wilson, Darrell M; Buckingham, Bruce A

    2016-06-01

    This study determined the feasibility and efficacy of an automated proportional-integral-derivative with insulin feedback (PID-IFB) controller in overnight closed-loop (OCL) control of children and adolescents with type 1 diabetes over multiple days in a diabetes camp setting. The Medtronic (Northridge, CA) Android™ (Google, Mountain View, CA)-based PID-IFB system consists of the Medtronic Minimed Revel™ 2.0 pump and Enlite™ sensor, a control algorithm residing on an Android phone, a translator, and remote monitoring capabilities. An inpatient study was completed for 16 participants to determine feasibility. For the camp study, subjects with type 1 diabetes were randomized to either OCL or sensor-augmented pump therapy (control conditions) per night for up to 6 nights at diabetes camp. During the camp study, 21 subjects completed 50 OCL nights and 52 control nights. Based on intention to treat, the median time spent in range, from 70 to 150 mg/dL, was greater during OCL at 66.4% (n = 55) versus 50.6% (n = 52) during the control period (P = 0.004). A per-protocol analysis allowed for assessment of algorithm performance with the median percentage time in range, 70-150 mg/dL, being 75.5% (n = 37) for OCL versus 47.6% (n = 32) for the control period (P < 0.001). There was less time spent in the hypoglycemic ranges <60 mg/dL and <70 mg/dL during OCL compared with the control period (P = 0.003 and P < 0.001, respectively). The PID-IFB controller is effective in improving time spent in range as well as reducing nocturnal hypoglycemia during the overnight period in children and adolescents with type 1 diabetes in a diabetes camp setting.
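
    As a hedged, highly simplified sketch of the control concept above (not the Medtronic PID-IFB algorithm), the code below implements a discrete PID controller whose command is reduced in proportion to a crude estimate of insulin already delivered ("insulin feedback"); all gains, the glucose target, and the insulin-decay model are illustrative assumptions.

```python
class PIDInsulinFeedback:
    """Toy discrete PID controller for glucose with an insulin-feedback term."""

    def __init__(self, target=120.0, kp=0.02, ki=0.0005, kd=0.05, gamma=0.5):
        self.target, self.kp, self.ki, self.kd, self.gamma = target, kp, ki, kd, gamma
        self.integral = 0.0
        self.prev_error = None
        self.insulin_estimate = 0.0   # crude "insulin on board" state (illustrative)

    def step(self, glucose, dt=5.0):
        """glucose in mg/dL, dt in minutes; returns a nonnegative insulin rate (U/h)."""
        error = glucose - self.target
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error

        pid = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Insulin feedback: subtract a fraction of the estimated active insulin.
        command = max(pid - self.gamma * self.insulin_estimate, 0.0)
        # First-order decay of the insulin estimate plus the new delivery.
        self.insulin_estimate = 0.98 * self.insulin_estimate + command * dt / 60.0
        return command

if __name__ == "__main__":
    controller = PIDInsulinFeedback()
    for g in [180, 175, 168, 160, 150, 140]:    # illustrative falling overnight glucose trace
        print(f"glucose {g} mg/dL -> insulin rate {controller.step(g):.3f} U/h")
```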

  8. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but also leads to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include not only earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique that is based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets including each of the target classes. In the case of seismic signals, the attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied

  9. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm for Alaska and the Himalaya.

    NASA Astrophysics Data System (ADS)

    Hibert, Clement; Malet, Jean-Philippe; Provost, Floriane; Michéa, David; Geertsema, Marten

    2017-04-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority but a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could potentially greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting seismic signals generated by landslides still represents a challenge, especially for events with volumes below one million cubic meters. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. We present here the preliminary results of the application of this processing chain in two contexts: i) in Himalaya with the data acquired between 2002 and 2005 by the Hi-Climb network; ii) in Alaska using data recorded by the

  10. The effects of irreversible JPEG compression on an automated algorithm for measuring carotid artery intima-media thickness from ultrasound images.

    PubMed

    Hangiandreou, N J; James, E M; McBane, R D; Tradup, D J; Persons, K R

    2002-01-01

    Our ultrasound practice has begun to investigate automated measurements of carotid artery intima-media thickness (IMT) as an indicator of subtle atherosclerosis. Since our clinical ultrasound images are irreversibly compressed, we investigated the effects of this compression on our IMT measurements. We obtained 10 ultrasound images of normal carotid arteries. These were compressed using JPEG to ratios of 5:1, 10:1, 15:1, 20:1, and 30:1. IMT measurements made from all compressed and uncompressed images were compared. For compression ratios ≤10:1, IMT deviations between compressed and uncompressed images were ≤0.03 mm. Higher than 10:1, the overall IMT deviations were small (0.01 +/- 0.04 mm), although one 25% deviation was measured. Comparison of other parameters yielded similar results. This initial study indicates that compression at 10:1 using baseline JPEG should have little effect on IMT measurements made using the current algorithm, and that compression to 20:1 or 30:1 may be feasible.

  11. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    PubMed

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus is known to be an important structure and a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. Its use, however, requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First, atlas-based segmentation was applied to define an initial hippocampal region serving as a priori information for the graph-cuts step. The definition of initial seeds was further elaborated by incorporating an estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the graph-cuts result. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). In terms of segmentation accuracy, the proposed method (precision=0.76±0.04, recall=0.86±0.05) also outperformed the conventional method (0.73±0.05, 0.72±0.06), demonstrating its suitability for accurate, robust and reliable segmentation of the hippocampus.
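
    Only the post-processing and evaluation steps lend themselves to a compact illustration; the sketch below applies a morphological opening to a toy binary mask (standing in for the graph-cuts output) and reports the Dice similarity index, precision and recall against a reference label. The structuring element, volume size and noise level are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import binary_opening, generate_binary_structure

def postprocess(mask, iterations=1):
    """Morphological opening with a 3-D 6-connected structuring element to trim spurious voxels."""
    struct = generate_binary_structure(rank=3, connectivity=1)
    return binary_opening(mask, structure=struct, iterations=iterations)

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def precision_recall(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    return tp / max(pred.sum(), 1), tp / max(truth.sum(), 1)

# Toy volumes standing in for a graph-cuts output and a manual label
rng = np.random.default_rng(2)
truth = np.zeros((40, 40, 40), bool)
truth[10:30, 12:28, 14:26] = True
pred = truth | (rng.random(truth.shape) < 0.02)        # add scattered false positives
pred_clean = postprocess(pred)
print("Dice before/after opening:", round(dice(pred, truth), 3), round(dice(pred_clean, truth), 3))
print("precision, recall:", [round(v, 3) for v in precision_recall(pred_clean, truth)])
```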

  12. Optimization of automated segmentation of monkeypox virus-induced lung lesions from normal lung CT images using hard C-means algorithm

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie

    2013-03-01

    Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. Previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus-induced disease before inoculation and after disease progression. Automated estimation is in close agreement with manual segmentation.
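
    Hard C-means is the crisp (non-fuzzy) counterpart of fuzzy C-means and, applied to voxel intensities, amounts to k-means clustering. The sketch below partitions a synthetic CT-like volume into a few intensity classes; the number of classes and the Hounsfield values are illustrative assumptions, and the landmark-based registration step is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def hard_cmeans_segment(volume_hu, n_classes=4, seed=0):
    """Cluster CT voxel intensities (HU) into n_classes crisp groups (hard C-means = k-means on intensities)."""
    x = volume_hu.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(x)
    return labels.reshape(volume_hu.shape)

# Synthetic CT-like volume: air (~-1000 HU), aerated lung (~-800), lesion/soft tissue (~40), vessel (~300)
rng = np.random.default_rng(3)
vol = rng.choice([-1000, -800, 40, 300], size=(32, 64, 64), p=[0.3, 0.5, 0.15, 0.05]) \
      + rng.normal(0, 30, (32, 64, 64))
seg = hard_cmeans_segment(vol)
for k in range(4):
    print(f"class {k}: mean HU = {vol[seg == k].mean():7.1f}, voxels = {(seg == k).sum()}")
```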

  13. Slotting optimization of automated storage and retrieval system (AS/RS) for efficient delivery of parts in an assembly shop using genetic algorithm: A case Study

    NASA Astrophysics Data System (ADS)

    Yue, L.; Guan, Z.; He, C.; Luo, D.; Saif, U.

    2017-06-01

    In recent years, competitive pressure has shifted manufacturing companies from mass production to mass customization, producing a large variety of products. Operating a customized, mixed-flow mode of production that meets customized demand on time is now a great challenge for companies. Because of this variety, the storage system that delivers parts to the production lines strongly influences on-time production, as shown in the current research by a simulation study of an inefficient storage system at a real company. The current research therefore proposes a slotting optimization model that considers the mixed-model assembly sequence of the final flow lines to optimize the whole automated storage and retrieval system (AS/RS) and distribution system of the case company. The model simultaneously minimizes the vertical height of the centre of gravity of the AS/RS and the total time spent retrieving materials from the AS/RS. A genetic algorithm is adopted to solve the proposed problem, and computational results show significant improvement in the stability and efficiency of the AS/RS compared with the existing method used in the case company.
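
    A full genetic algorithm for this bi-objective slotting problem is beyond the record, but a stripped-down evolutionary sketch (permutation chromosomes, elitism and swap mutation only, crossover omitted) shows how an item-to-slot assignment can be searched against a weighted sum of centre-of-gravity height and retrieval time. Slot geometry, item masses, demands and objective weights are made-up numbers, not the paper's data or operators.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30                                             # items == slots, for simplicity
slot_height = rng.uniform(0.5, 6.0, n)             # vertical slot position (m)
slot_time = rng.uniform(5.0, 60.0, n)              # crane travel time to each slot (s)
item_mass = rng.uniform(1.0, 50.0, n)              # kg
item_demand = rng.integers(1, 20, n)               # retrievals per shift

def cost(perm, w_cog=1.0, w_time=0.02):
    """Weighted sum of rack centre-of-gravity height and total retrieval time (weights are arbitrary)."""
    cog = (item_mass * slot_height[perm]).sum() / item_mass.sum()
    return w_cog * cog + w_time * (item_demand * slot_time[perm]).sum()

def evolve(pop_size=60, gens=300, elite=5):
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        children = pop[:elite]                                     # elitism
        while len(children) < pop_size:
            parent = pop[rng.integers(0, pop_size // 2)].copy()    # truncation selection from the better half
            i, j = rng.integers(0, n, 2)
            parent[i], parent[j] = parent[j], parent[i]            # swap mutation keeps a valid permutation
            children.append(parent)
        pop = children
    return min(pop, key=cost)

best = evolve()
print("best cost:", round(cost(best), 2), "vs random assignment:", round(cost(rng.permutation(n)), 2))
```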

  14. Supervision as Cultural Inquiry.

    ERIC Educational Resources Information Center

    Flinders, David J.

    1991-01-01

    Describes a framework for "culturally responsive supervision." An understanding of analogic or iconic metaphors reveals the power of language to shape what are regarded as matters of fact. Kinesics, proxemics, and prosody bring into focus channels of nonverbal communication. The concept of "framing" calls attention to the metamessages of verbal…

  15. Revisiting Supervised Agricultural Experience.

    ERIC Educational Resources Information Center

    Camp, William G.; Clarke, Ariane; Fallon, Maureen

    2000-01-01

    A Delphi panel of 40 agricultural educators unanimously agreed that supervised agricultural experience should remain an integral component of the curriculum; a name change is not currently warranted. Categories recommended were agribusiness entrepreneurship, placement, production, research, directed school lab, communications, exploration, and…

  16. Supervising Graduate Assistants

    ERIC Educational Resources Information Center

    White, Jessica; Nonnamaker, John

    2011-01-01

    Discussions of personnel management in student affairs literature and at national conferences often focus on supervising new or midlevel professionals and the myriad challenges and possibilities these relationships entail (Carpenter, 2001; Winston and Creamer, 1997). Graduate students as employees and the often-complicated and ill-structured…

  17. Practice of Clinical Supervision.

    ERIC Educational Resources Information Center

    Holland, Patricia E.

    1988-01-01

    Clinical supervision remained grounded in empirical inquiry as late as Morris Cogan's writings on the subject in 1973. With the acknowledgment of Thomas Kuhn's (1962) paradigm shift, educational theory and practice developed interpretive methodologies. An interpretive reflection on Cogan's rationale offers insights into the current, matured…

  1. What Is Effective Supervision? A National Survey of Supervision Experts.

    ERIC Educational Resources Information Center

    Worthen, Vaughn E.; McNeill, Brian W.

    Previous research regarding the supervision of psychotherapists has been primarily based on the perceptions of supervisors and supervisees at various levels of experience. This national survey examines the attitudes and beliefs of experts in the field of supervision concerning what constitutes effective supervision. A number of themes and…

  2. A randomised controlled trial of an automated oxygen delivery algorithm for preterm neonates receiving supplemental oxygen without mechanical ventilation

    PubMed Central

    Zapata, James; Gómez, John Jairo; Araque Campo, Robinson; Matiz Rubio, Alejandro; Sola, Augusto

    2014-01-01

    Aim: Providing consistent levels of oxygen saturation (SpO2) for infants in neonatal intensive care units is not easy. This study explored how effectively the Auto-Mixer® algorithm automatically adjusted fraction of inspired oxygen (FiO2) levels to maintain SpO2 within an intended range in extremely low birth weight infants receiving supplemental oxygen without mechanical ventilation. Methods: Twenty extremely low birth weight infants were randomly assigned to the Auto-Mixer® group or the manual intervention group and studied for 12 h. The SpO2 target was 85–93%, and the outcomes were the percentage of time SpO2 was within target, SpO2 variability, SpO2 >95%, oxygen received and manual interventions. Results: The percentage of time within the intended SpO2 was 58 ± 4% in the Auto-Mixer® group and 33.7 ± 4.7% in the manual group, SpO2 >95% was 26.5% vs 54.8%, average SpO2 and FiO2 were 89.8% vs 92.2% and 37% vs 44.1%, and manual interventions were 0 vs 80 (p < 0.05). Brief periods of SpO2 < 85% occurred more frequently in the Auto-Mixer® group. Conclusion: The Auto-Mixer® effectively increased the percentage of time that SpO2 was within the intended target range and decreased the time with high SpO2 in spontaneously breathing extremely low birth weight infants receiving supplemental oxygen. PMID:24813808

  3. Novice Supervisors' Understandings of Supervision.

    ERIC Educational Resources Information Center

    Waite, Duncan

    Findings of a study that examined novice supervisors' understandings of supervision are presented in this paper. Data were collected from 110 graduate-level students enrolled in an introductory supervision class. Four themes emerged from students' definitions of supervision: domains, relationships, traits, and tasks. The most surprising finding was…

  4. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive iterative statistical reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude as realized by the increase in CNR = 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
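
    The two image-quality metrics named above can be computed with a few lines of NumPy; the sketch below estimates a CNR from two ROIs and a radially collapsed NPS from the 3D FFT of a zero-mean uniformity volume. ROI placement, normalisation constants and voxel sizes are simplified assumptions and do not follow the ACR procedure exactly.

```python
import numpy as np

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio: mean difference divided by the background standard deviation."""
    return (roi_object.mean() - roi_background.mean()) / roi_background.std()

def noise_power_spectrum(uniform_stack, voxel=(0.5, 0.5, 0.5)):
    """Radially collapsed 3-D NPS of a zero-mean uniformity volume (normalisation simplified)."""
    vol = uniform_stack - uniform_stack.mean()
    nps3d = np.abs(np.fft.fftshift(np.fft.fftn(vol))) ** 2 / vol.size
    freqs = [np.fft.fftshift(np.fft.fftfreq(s, d)) for s, d in zip(vol.shape, voxel)]
    fz, fy, fx = np.meshgrid(*freqs, indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2 + fz ** 2).ravel()
    bins = np.linspace(0, radius.max(), 50)
    idx = np.digitize(radius, bins)
    radial = np.array([nps3d.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                       for i in range(1, len(bins))])
    return bins[1:], radial

rng = np.random.default_rng(5)
uniform = rng.normal(0, 10, size=(32, 128, 128))   # stand-in for the ACR uniformity section
f, nps = noise_power_spectrum(uniform)
print("NPS peak frequency (mm^-1):", round(float(f[np.nanargmax(nps)]), 3))
print("CNR:", round(cnr(rng.normal(6, 10, 400), rng.normal(0, 10, 400)), 3))
```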

  5. Validity of an automated algorithm to identify waking and in-bed wear time in hip-worn accelerometer data collected with a 24 h wear protocol in young adults.

    PubMed

    McVeigh, Joanne A; Winkler, Elisabeth A H; Healy, Genevieve N; Slater, James; Eastwood, Peter R; Straker, Leon M

    2016-09-21

    Researchers are increasingly using 24 h accelerometer wear protocols. No automated method has been published that accurately distinguishes 'waking' wear time from other data ('in-bed', non-wear, invalid days) in young adults. This study examined the validity of an automated algorithm developed to achieve this for hip-worn Actigraph GT3X+ 60 s epoch data. We compared the algorithm against a referent method ('into-bed' and 'out-of-bed' times visually identified by two independent raters) and benchmarked against two published algorithms. All methods used the same non-wear rules. The development sample (n = 11) and validation sample (n = 95) were Australian young adults from the Raine pregnancy cohort (54% female), all aged approximately 22 years. The agreement with Rater 1 in each minute's classification (yes/no) of waking wear time was examined as kappa (κ), limited to valid days (≥10 h waking wear time per day) according to the algorithm and Rater 1. Bland-Altman methods assessed agreement in daily totals of waking wear and in-bed wear time. Excellent agreement (κ > 0.75) was obtained between the raters for 80% of participants (median κ = 0.94). The algorithm showed excellent agreement with Rater 1 (κ > 0.75) for 89% of participants and poor agreement (κ < 0.40) for 1%. In this sample, the algorithm (median κ = 0.86) performed better than algorithms validated in children (median κ = 0.77) and adolescents (median κ = 0.66). The mean difference (95% limits of agreement) between Rater 1 and the algorithm was 7 (-220, 234) min d⁻¹ for waking wear time on valid days and -41 (-309, 228) min d⁻¹ for in-bed wear time. In this population, the automated algorithm's validity for identifying waking wear time was mostly good, not worse than inter-rater agreement, and better than the evaluated published alternatives. However, the algorithm requires
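
    The two agreement statistics used above are easy to reproduce; the sketch below computes a minute-level Cohen's kappa and a Bland-Altman bias with 95% limits of agreement on synthetic data whose numbers only loosely echo the reported values (they are not the study data).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Minute-by-minute classification (1 = waking wear, 0 = other) from the algorithm and from a human rater
rng = np.random.default_rng(6)
rater = rng.integers(0, 2, size=1440)                        # one day of 1-min epochs (hypothetical)
algorithm = rater.copy()
flip = rng.random(1440) < 0.05
algorithm[flip] = 1 - algorithm[flip]                        # 5% disagreement for illustration

print("minute-level agreement, kappa =", round(cohen_kappa_score(rater, algorithm), 3))

# Bland-Altman on daily totals of waking wear time (min/day), one value per participant
rater_daily = rng.normal(900, 60, size=95)
algo_daily = rater_daily + rng.normal(7, 115, size=95)       # bias and spread loosely echoing the paper
diff = algo_daily - rater_daily
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"mean difference = {bias:.0f} min/day, "
      f"95% limits of agreement = ({bias - 1.96 * sd:.0f}, {bias + 1.96 * sd:.0f})")
```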

  6. On Training Targets for Supervised Speech Separation

    PubMed Central

    Wang, Yuxuan; Narayanan, Arun; Wang, DeLiang

    2014-01-01

    Formulation of speech separation as a supervised learning problem has shown considerable promise. In its simplest form, a supervised learning algorithm, typically a deep neural network, is trained to learn a mapping from noisy features to a time-frequency representation of the target of interest. Traditionally, the ideal binary mask (IBM) is used as the target because of its simplicity and large speech intelligibility gains. The supervised learning framework, however, is not restricted to the use of binary targets. In this study, we evaluate and compare separation results by using different training targets, including the IBM, the target binary mask, the ideal ratio mask (IRM), the short-time Fourier transform spectral magnitude and its corresponding mask (FFT-MASK), and the Gammatone frequency power spectrum. Our results in various test conditions reveal that the two ratio mask targets, the IRM and the FFT-MASK, outperform the other targets in terms of objective intelligibility and quality metrics. In addition, we find that masking-based targets, in general, are significantly better than spectral-envelope-based targets. We also present comparisons with recent methods in non-negative matrix factorization and speech enhancement, which show clear performance advantages of supervised speech separation. PMID:25599083
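
    As a concrete illustration of two of the targets mentioned above, the sketch below computes an ideal binary mask and an ideal ratio mask from time-aligned clean and noise signals using an STFT. The -5 dB local criterion and the square-root IRM form are common conventions assumed here; the paper also considers gammatone-domain and spectral-magnitude targets that are not reproduced.

```python
import numpy as np
from scipy.signal import stft

def ideal_masks(clean, noise, fs=16000, nperseg=512, lc_db=-5.0):
    """Ideal binary mask (IBM) and ideal ratio mask (IRM) from time-aligned clean speech and noise."""
    _, _, S = stft(clean, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    speech_pow, noise_pow = np.abs(S) ** 2, np.abs(N) ** 2
    snr_db = 10 * np.log10(speech_pow / (noise_pow + 1e-12) + 1e-12)
    ibm = (snr_db > lc_db).astype(float)                     # 1 where local SNR exceeds the criterion
    irm = np.sqrt(speech_pow / (speech_pow + noise_pow + 1e-12))
    return ibm, irm

rng = np.random.default_rng(7)
clean = rng.normal(size=16000)                               # placeholder for a speech signal
noise = rng.normal(size=16000)                               # placeholder for the interfering noise
ibm, irm = ideal_masks(clean, noise)
print(ibm.shape, float(irm.min()), float(irm.max()))
```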

  7. Extracting PICO Sentences from Clinical Trial Reports using Supervised Distant Supervision.

    PubMed

    Wallace, Byron C; Kuiper, Joël; Sharma, Aakash; Zhu, Mingxi Brian; Marshall, Iain J

    2016-01-01

    Systematic reviews underpin Evidence Based Medicine (EBM) by addressing precise clinical questions via comprehensive synthesis of all relevant published evidence. Authors of systematic reviews typically define a Population/Problem, Intervention, Comparator, and Outcome (a PICO criteria) of interest, and then retrieve, appraise and synthesize results from all reports of clinical trials that meet these criteria. Identifying PICO elements in the full-texts of trial reports is thus a critical yet time-consuming step in the systematic review process. We seek to expedite evidence synthesis by developing machine learning models to automatically extract sentences from articles relevant to PICO elements. Collecting a large corpus of training data for this task would be prohibitively expensive. Therefore, we derive distant supervision (DS) with which to train models using previously conducted reviews. DS entails heuristically deriving 'soft' labels from an available structured resource. However, we have access only to unstructured, free-text summaries of PICO elements for corresponding articles; we must derive from these the desired sentence-level annotations. To this end, we propose a novel method - supervised distant supervision (SDS) - that uses a small amount of direct supervision to better exploit a large corpus of distantly labeled instances by learning to pseudo-annotate articles using the available DS. We show that this approach tends to outperform existing methods with respect to automated PICO extraction.

  8. Supervised autonomous robotic soft tissue surgery.

    PubMed

    Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W

    2016-05-04

    The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery-removing the surgeon's hands-promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis-including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses-between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques.

  9. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks.

  10. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862

  11. Abdominal adipose tissue quantification on water-suppressed and non-water-suppressed MRI at 3T using semi-automated FCM clustering algorithm

    NASA Astrophysics Data System (ADS)

    Valaparla, Sunil K.; Peng, Qi; Gao, Feng; Clarke, Geoffrey D.

    2014-03-01

    Accurate measurements of human body fat distribution are desirable because excessive body fat is associated with impaired insulin sensitivity, type 2 diabetes mellitus (T2DM) and cardiovascular disease. In this study, we hypothesized that the performance of water-suppressed (WS) MRI is superior to that of non-water-suppressed (NWS) MRI for volumetric assessment of abdominal subcutaneous (SAT), intramuscular (IMAT), visceral (VAT), and total (TAT) adipose tissues. We acquired T1-weighted images on a 3T MRI system (TIM Trio, Siemens), which were analyzed using semi-automated segmentation software that employs a fuzzy c-means (FCM) clustering algorithm. Sixteen contiguous axial slices, centered at the L4-L5 level of the abdomen, were acquired in eight T2DM subjects with water suppression (WS) and without (NWS). Histograms from WS images show improved separation of non-fatty tissue pixels from fatty tissue pixels, compared to NWS images. Paired t-tests of WS versus NWS showed a statistically significant lower volume of lipid in the WS images for VAT (145.3 cc less, p=0.006) and IMAT (305 cc less, p<0.001), but not SAT (14.1 cc more, NS). WS measurements of TAT also resulted in lower fat volumes (436.1 cc less, p=0.002). There is strong correlation between WS and NWS quantification methods for SAT measurements (r=0.999), but poorer correlation for VAT studies (r=0.845). These results suggest that NWS pulse sequences may overestimate adipose tissue volumes and that WS pulse sequences are more desirable due to the higher contrast generated between fatty and non-fatty tissues.
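
    Fuzzy c-means on voxel intensities is compact enough to sketch directly in NumPy; the snippet below implements the standard membership and centre updates for one-dimensional intensities and picks the brightest cluster as a stand-in for adipose tissue. The cluster count, the fuzzifier m = 2, and the synthetic intensity mixture are assumptions, not the segmentation software used in the study.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
    """Minimal fuzzy C-means on 1-D intensities x; returns cluster centres and the membership matrix U (N x c)."""
    centres = np.quantile(x, np.linspace(0.1, 0.9, c))              # spread the initial centres across the range
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-9            # N x c distances to centres
        inv = d ** (-p)
        u = inv / inv.sum(axis=1, keepdims=True)                    # membership update
        w = u ** m
        centres = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)      # centre update
    return centres, u

# Synthetic T1-weighted intensity samples: background, non-fatty tissue, and bright fatty tissue
rng = np.random.default_rng(8)
intensities = np.concatenate([rng.normal(5, 3, 1500), rng.normal(50, 10, 4000), rng.normal(400, 40, 2500)])
centres, u = fuzzy_cmeans(intensities, c=3)
labels = u.argmax(axis=1)
fat_class = int(centres.argmax())                                   # assume the brightest class is adipose tissue
print("class centres:", np.sort(centres).round(1), "fat voxels:", int((labels == fat_class).sum()))
```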

  12. MAGIC: an automated N-linked glycoprotein identification tool using a Y1-ion pattern matching algorithm and in silico MS² approach.

    PubMed

    Lynn, Ke-Shiuan; Chen, Chen-Chun; Lih, T Mamie; Cheng, Cheng-Wei; Su, Wan-Chih; Chang, Chun-Hao; Cheng, Chia-Ying; Hsu, Wen-Lian; Chen, Yu-Ju; Sung, Ting-Yi

    2015-02-17

    Glycosylation is a highly complex modification influencing the functions and activities of proteins. Interpretation of intact glycopeptide spectra is crucial but challenging. In this paper, we present a mass spectrometry-based automated glycopeptide identification platform (MAGIC) to identify peptide sequences and glycan compositions directly from intact N-linked glycopeptide collision-induced-dissociation spectra. The identification of the Y1 (peptideY0 + GlcNAc) ion is critical for the correct analysis of unknown glycoproteins, especially without prior knowledge of the proteins and glycans present in the sample. To ensure accurate Y1-ion assignment, we propose a novel algorithm called Trident that detects a triplet pattern corresponding to [Y0, Y1, Y2] or [Y0-NH3, Y0, Y1] from the fragmentation of the common trimannosyl core of N-linked glycopeptides. To facilitate the subsequent peptide sequence identification by common database search engines, MAGIC generates in silico spectra by overwriting the original precursor with the naked peptide m/z and removing all of the glycan-related ions. Finally, MAGIC computes the glycan compositions and ranks them. For the model glycoprotein horseradish peroxidase (HRP) and a 5-glycoprotein mixture, a 2- to 31-fold increase in the relative intensities of the peptide fragments was achieved, which led to the identification of 7 tryptic glycopeptides from HRP and 16 glycopeptides from the mixture via Mascot. In the HeLa cell proteome data set, MAGIC processed over a thousand MS(2) spectra in 3 min on a PC and reported 36 glycopeptides from 26 glycoproteins. Finally, a remarkable false discovery rate of 0 was achieved on the N-glycosylation-free Escherichia coli data set. MAGIC is available at http://ms.iis.sinica.edu.tw/COmics/Software_MAGIC.html .

  13. Identifying Rare and Subtle Behaviors: A Weakly Supervised Joint Topic Model.

    PubMed

    Hospedales, Timothy M; Li, Jian; Gong, Shaogang; Xiang, Tao

    2011-12-01

    One of the most interesting and desired capabilities for automated video behavior analysis is the identification of rarely occurring and subtle behaviors. This is of practical value because dangerous or illegal activities often have few or possibly only one prior example to learn from and are often subtle. Rare and subtle behavior learning is challenging for two reasons: (1) Contemporary modeling approaches require more data and supervision than may be available and (2) the most interesting and potentially critical rare behaviors are often visually subtle-occurring among more obvious typical behaviors or being defined by only small spatio-temporal deviations from typical behaviors. In this paper, we introduce a novel weakly supervised joint topic model which addresses these issues. Specifically, we introduce a multiclass topic model with partially shared latent structure and associated learning and inference algorithms. These contributions will permit modeling of behaviors from as few as one example, even without localization by the user and when occurring in clutter, and subsequent classification and localization of such behaviors online and in real time. We extensively validate our approach on two standard public-space data sets, where it clearly outperforms a batch of contemporary alternatives.

  14. Cross-Domain Semi-Supervised Learning Using Feature Formulation.

    PubMed

    Xingquan Zhu

    2011-12-01

    Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them in the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and the inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from some hidden concepts, with labeling information partially observable for some samples. The key of fSSL is to recover the hidden concepts and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, and are not explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of the traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.

  15. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    PubMed Central

    Gouret, Philippe; Vitiello, Vérane; Balandraud, Nathalie; Gilles, André; Pontarotti, Pierre; Danchin, Etienne GJ

    2005-01-01

    Background: Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting the position and structure of genes and inferring their function (as well as other genomic features). Structural and functional annotation both require the complex chaining of numerous different software tools, algorithms and methods under the supervision of a biologist. The automation of these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still require an important contribution from biologists for supervising and controlling the results at various steps. Results: Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which allows, for example, key decisions to be made, intermediate results to be checked, or the dataset to be refined). The quality of the results produced by FIGENIX is comparable to that obtained by expert biologists, with a drastic gain in terms of time costs and avoidance of errors due to the manual manipulation of data. Conclusion: The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest for the genomic community. They could be easily adapted to new or more specialized pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, the annotation of regulatory elements and other genomic features of interest. PMID:16083500

  16. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  17. Active link selection for efficient semi-supervised community detection

    PubMed Central

    Yang, Liang; Jin, Di; Wang, Xiao; Cao, Xiaochun

    2015-01-01

    Several semi-supervised community detection algorithms have been proposed recently to improve the performance of traditional topology-based methods. However, most of them focus on how to integrate supervised information with topology information; few of them pay attention to which information is critical for performance improvement. This leads to large amounts of demand for supervised information, which is expensive or difficult to obtain in most fields. For this problem we propose an active link selection framework, that is we actively select the most uncertain and informative links for human labeling for the efficient utilization of the supervised information. We also disconnect the most likely inter-community edges to further improve the efficiency. Our main idea is that, by connecting uncertain nodes to their community hubs and disconnecting the inter-community edges, one can sharpen the block structure of adjacency matrix more efficiently than randomly labeling links as the existing methods did. Experiments on both synthetic and real networks demonstrate that our new approach significantly outperforms the existing methods in terms of the efficiency of using supervised information. It needs ~13% of the supervised information to achieve a performance similar to that of the original semi-supervised approaches. PMID:25761385

  18. Towards harmonized seismic analysis across Europe using supervised machine learning approaches

    NASA Astrophysics Data System (ADS)

    Zaccarelli, Riccardo; Bindi, Dino; Cotton, Fabrice; Strollo, Angelo

    2017-04-01

    In the framework of the Thematic Core Services for Seismology of EPOS-IP (European Plate Observing System-Implementation Phase), a service for disseminating a regionalized logic-tree of ground motion models for Europe is under development. While for the Mediterranean area the large availability of strong-motion data, qualified and disseminated through the Engineering Strong Motion database (ESM-EPOS), supports the development of both selection criteria and ground motion models, for the low-to-moderate seismicity regions of continental Europe the development of ad-hoc models using weak-motion recordings of moderate earthquakes is unavoidable. The aim of this work is to present a platform for creating application-oriented earthquake databases by retrieving information from EIDA (European Integrated Data Archive) and applying supervised learning models for earthquake record selection and processing, suitable for any specific application of interest. Supervised learning, i.e. the task of inferring a function from labelled training data, has been used extensively in fields such as spam detection, speech and image recognition, and pattern recognition in general. Its suitability for detecting anomalies and performing semi- to fully-automated filtering of large waveform data sets, easing the effort of (or replacing) human expertise, is therefore straightforward. Because supervised learning algorithms are capable of learning from a relatively small training set to predict and categorize unseen data, their advantage when processing large amounts of data is crucial. Moreover, their intrinsic ability to make data-driven predictions makes them suitable (and preferable) in those cases where explicit detection algorithms might be unfeasible or too heuristic. In this study, we consider relatively simple statistical classifiers (e.g., Naive Bayes, Logistic Regression, Random Forest, SVMs) where labels are assigned to waveform data based on "recognized classes" needed for our use case
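
    The abstract lists the classifier families considered; a generic scikit-learn sketch of such a comparison on hand-crafted waveform features is shown below. The feature set, class names, and cross-validation setup are placeholders invented for illustration, not the EIDA-based platform itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Feature matrix: one row per waveform segment (e.g., SNR, dominant frequency, duration, amplitude ratios);
# labels are the "recognized classes" assigned by an analyst (placeholder data below).
rng = np.random.default_rng(9)
X = rng.normal(size=(500, 6))
y = rng.choice(["ok", "clipped", "noisy"], size=500)

models = {
    "naive_bayes": GaussianNB(),
    "logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:14s} accuracy = {scores.mean():.2f} ± {scores.std():.2f}")
```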

  19. Dynamic hierarchical algorithm for accelerated microfossil identification

    NASA Astrophysics Data System (ADS)

    Wong, Cindy M.; Joseph, Dileepan

    2015-02-01

    Marine microfossils provide a useful record of the Earth's resources and prehistory via biostratigraphy. To study hydrocarbon reservoirs and prehistoric climate, geoscientists visually identify the species of microfossils found in core samples. Because microfossil identification is labour intensive, automation has been investigated since the 1980s. With the initial rule-based systems, users still had to examine each specimen under a microscope. While artificial neural network systems showed more promise for reducing expert labour, they also did not displace manual identification, for a variety of reasons, which we aim to overcome. In our human-based computation approach, the most difficult step, namely taxon identification, is outsourced via a frontend website to human volunteers. A backend algorithm, called dynamic hierarchical identification, uses unsupervised, supervised, and dynamic learning to accelerate microfossil identification. Unsupervised learning clusters specimens so that volunteers need not identify every specimen during supervised learning. Dynamic learning means interim computation outputs prioritize subsequent human inputs. Using a dataset of microfossils identified by an expert, we evaluated correct and incorrect genus and species rates versus simulated time, where each specimen identification defines a moment. The proposed algorithm accelerated microfossil identification effectively, especially compared with benchmark results obtained using a k-nearest neighbour method.
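
    The dynamic hierarchical identification algorithm itself is not given in the record; the sketch below only illustrates the division of labour it describes: cluster specimens without labels, ask volunteers about one representative per cluster first, and propagate the returned labels within clusters. The feature vectors, cluster count, and taxon names are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is a feature vector extracted from one specimen image (features are hypothetical here).
rng = np.random.default_rng(10)
specimens = rng.normal(size=(400, 16))

# Unsupervised stage: group similar specimens so one identification can cover a whole cluster.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(specimens)

# Dynamic prioritisation: query the specimen closest to each cluster centre first.
dist_to_centre = np.linalg.norm(specimens - km.cluster_centers_[km.labels_], axis=1)
queue = [int(np.where(km.labels_ == k)[0][np.argmin(dist_to_centre[km.labels_ == k])]) for k in range(20)]
print("first specimens to send to volunteers:", queue[:5])

# Supervised stage (after some labels come back): propagate each volunteer label to the rest of its cluster.
volunteer_labels = {queue[0]: "Globigerina", queue[1]: "Orbulina"}          # hypothetical identifications
propagated = {i: volunteer_labels[q] for q in volunteer_labels
              for i in np.where(km.labels_ == km.labels_[q])[0]}
print("specimens labelled by propagation:", len(propagated))
```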

  20. Reflections on supervision in psychotherapy.

    PubMed

    Fernández-Alvarez, Héctor

    2016-01-01

    The aim of the author is to share his reflections on supervision as a central topic in therapists' education and training programs. The concept of supervision, its functions and effects on the training process along with the contributions of different theoretical models to its evolution are addressed. Supervision alliance, the roles of supervisor and supervisee, evaluation as a central component and the influence of socioeconomic factors are discussed. The conclusions depict the most interesting paths for development in the near future and the areas where research needs to be further conducted along with the subjects most worthy of efforts in the supervision field.

  1. An Automated Treatment Plan Quality Control Tool for Intensity-Modulated Radiation Therapy Using a Voxel-Weighting Factor-Based Re-Optimization Algorithm.

    PubMed

    Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura

    2016-01-01

    Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proved able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. In addition, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be

  2. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    SciTech Connect

    Mahmood, U; Erdi, Y; Wang, W

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive iterative statistical reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to a CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude as realized by the increase in CNR = 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with similar noise texture to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  3. An Automated Treatment Plan Quality Control Tool for Intensity-Modulated Radiation Therapy Using a Voxel-Weighting Factor-Based Re-Optimization Algorithm

    PubMed Central

    Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura

    2016-01-01

    Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proved able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. In addition, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be

  4. Coronary CTA using scout-based automated tube potential and current selection algorithm, with breast displacement results in lower radiation exposure in females compared to males

    PubMed Central

    Vadvala, Harshna; Kim, Phillip; Mayrhofer, Thomas; Pianykh, Oleg; Kalra, Mannudeep; Hoffmann, Udo

    2014-01-01

    Purpose To evaluate the effect of automatic tube potential selection and automatic exposure control combined with female breast displacement during coronary computed tomography angiography (CCTA) on radiation exposure in women versus men of the same body size. Materials and methods Consecutive clinical exams between January 2012 and July 2013 at an academic medical center were retrospectively analyzed. All examinations were performed using ECG-gating, automated tube potential, and tube current selection algorithm (APS-AEC) with breast displacement in females. Cohorts were stratified by sex and standard World Health Organization body mass index (BMI) ranges. CT dose index volume (CTDIvol), dose length product (DLP) median effective dose (ED), and size specific dose estimate (SSDE) were recorded. Univariable and multivariable regression analyses were performed to evaluate the effect of gender on radiation exposure per BMI. Results A total of 726 exams were included, 343 (47%) were females; mean BMI was similar by gender (28.6±6.9 kg/m2 females vs. 29.2±6.3 kg/m2 males; P=0.168). Median ED was 2.3 mSv (1.4-5.2) for females and 3.6 (2.5-5.9) for males (P<0.001). Females were exposed to less radiation by a difference in median ED of –1.3 mSv, CTDIvol –4.1 mGy, and SSDE –6.8 mGy (all P<0.001). After adjusting for BMI, patient characteristics, and gating mode, females exposure was lower by a median ED of –0.7 mSv, CTDIvol –2.3 mGy, and SSDE –3.15 mGy, respectively (all P<0.01). Conclusions: We observed a difference in radiation exposure to patients undergoing CCTA with the combined use of AEC-APS and breast displacement in female patients as compared to their BMI-matched male counterparts, with female patients receiving one third less exposure. PMID:25610804

  5. Automated road extraction from aerial imagery by self-organization

    NASA Astrophysics Data System (ADS)

    Doucette, Peter J.

    To date, computer vision methods have largely focused on extraction from panchromatic imagery. Despite significant technological advances, road extraction algorithms have fallen short of satisfying rigorous production requirements. To that end, the objective of this thesis is to present a new approach for automating road detection from high-resolution multispectral imagery. This thesis considers three main research objectives: (1) development of a fully automated road extraction strategy such that interactive human supervision or input initializations are not required; (2) development of a globalized approach to road detection that is motivated by principles of self-organization; (3) meaningful exploitation of high-resolution multispectral imagery. Several new techniques are presented for fully automated road extraction from high-resolution imagery. The core algorithms implemented include (1) Anti-parallel edge Centerline Extractor (ACE), (2) Fuzzy Organization of Elongated Regions (FOrgER), and (3) Self-Organizing Road Finder (SORF). The ACE algorithm extends the idea of anti-parallel edge detection in a new approach that considers multi-layer images. The FOrgER algorithm is motivated by Gestalt grouping principles in perceptual organization. The FOrgER approach combines principles of self-organization with fuzzy inferencing to build road topology. Self-organization represents a learning paradigm that is neurobiologically motivated. Globalized analysis promotes lower sensitivity to fragmented information, and demonstrates robust capacity for handling scene clutter in high-resolution images. Finally, the SORF algorithm bridges concepts from ACE and FOrgER into a comprehensive and cooperative approach for fully automated road finding. By providing an exceptional breadth of input parameters, output metrics, modes of operation, and adaptability to various input, SORF is particularly well suited as an analytical research tool. Extraction results from the SORF

  6. Supervising PETE Candidates Using the Situational Supervision Model

    ERIC Educational Resources Information Center

    Levy, Linda S.; Johnson, Lynn V.

    2012-01-01

    Physical education teacher candidates (PETCs) often, as part of their curricular requirements, engage in early field experiences that prepare them for student teaching. Matching the PETC's developmental level with the mentor's supervision style enhances this experience. The situational supervision model, based on the situational leadership model,…

  7. A Collaboratively Supervised Teaching Internship: Implications for Future Supervision.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes the 5-year Austin Teacher Program (ATP) at Austin College with emphasis on the collaboratively supervised internship in the graduate year. Some results of a comprehensive survey of over 400 ATP graduates are discussed, as well as issues and needs in the supervision of interns, and implications for the future in the supervision…

  8. Supervision Learning as Conceptual Threshold Crossing: When Supervision Gets "Medieval"

    ERIC Educational Resources Information Center

    Carter, Susan

    2016-01-01

    This article presumes that supervision is a category of teaching, and that we all "learn" how to teach better. So it enquires into what novice supervisors need to learn. An anonymised digital questionnaire sought data from supervisors [n226] on their experiences of supervision to find out what was difficult, and supervisor interviews…

  9. Supervision of Supervised Agricultural Experience Programs: A Synthesis of Research.

    ERIC Educational Resources Information Center

    Dyer, James E.; Williams, David L.

    1997-01-01

    A review of literature from 1964 to 1993 found that supervised agricultural experience (SAE) teachers, students, parents, and employers value the teachers' supervisory role. Implementation practices vary widely and there are no cumulative data to guide policies and standards for SAE supervision. (SK)

  10. Exploring Clinical Supervision as Instrument for Effective Teacher Supervision

    ERIC Educational Resources Information Center

    Ibara, E. C.

    2013-01-01

    This paper examines clinical supervision approaches that have the potential to promote and implement effective teacher supervision in Nigeria. The various approaches have been analysed based on the conceptual framework of instructional supervisory behavior. The findings suggest that a clear distinction can be made between the prescriptive and…

  14. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
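
    The record does not include code, but a minimal single-machine dispatch-rule sketch (earliest-deadline-first within each job's time window) conveys what a fast, "fairly accurate" greedy scheduler looks like; the job set and the rule itself are illustrative assumptions, not the paper's benchmark.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    start: float      # earliest start of the job's time window
    end: float        # latest allowed finish of the window
    duration: float

def dispatch(jobs):
    """Greedy dispatch rule: earliest-deadline-first, placing each job as early as its window and machine allow."""
    schedule, machine_free = [], 0.0
    for job in sorted(jobs, key=lambda j: j.end):
        start = max(machine_free, job.start)
        if start + job.duration <= job.end:          # job fits inside its window
            schedule.append((job.name, start, start + job.duration))
            machine_free = start + job.duration
    return schedule

jobs = [Job("uplink", 0, 10, 4), Job("downlink", 2, 9, 3), Job("maintenance", 5, 20, 6), Job("calibration", 1, 6, 5)]
for name, s, e in dispatch(jobs):
    print(f"{name:12s} {s:5.1f} -> {e:5.1f}")
```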

  15. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  16. Design of Supervision Systems: Theory and Practice

    NASA Astrophysics Data System (ADS)

    Bouamama, Belkacem Ould

    2008-06-01

    The term "supervision" means a set of tools and methods used to operate an industrial process in normal situation as well as in the presence of failures or undesired disturbances. The activities concerned with the supervision are the Fault Detection and Isolation (FDI) in the diagnosis level, and the Fault Tolerant Control (FTC) through necessary reconfiguration, whenever possible, in the fault accommodation level. The final goal of a supervision platform is to provide the operator a set of tools that helps to safely run the process and to take appropriate decision in the presence of faults. Different approaches to the design of such decision making tools have been developed in the past twenty years, depending on the kind of knowledge (structural, statistical, fuzzy, expert rules, functional, behavioural…) used to describe the plant operation. How to elaborate models for FDI design, how to develop the FDI algorithm, how to avoid false alarms, how to improve the diagnosability of the faults for alarm management design, how to continue to control the process in failure mode, what are the limits of each method,…?. Such are the main purposes concerned by the presented plenary session from an industrial and theoretical point of view.

  17. Special Issue on Clinical Supervision: A Reflection

    ERIC Educational Resources Information Center

    Bernard, Janine M.

    2010-01-01

    This special issue about clinical supervision offers an array of contributions with disparate insights into the supervision process. Using a synergy of supervision model, the articles are categorized as addressing the infrastructure required for adequate supervision, the relationship dynamics endemic to supervision, or the process of delivering…

  18. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    NASA Astrophysics Data System (ADS)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensembled. Ensemble averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitude, scaling exponent values, or combinations thereof. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program
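    The distributed program itself is written in MATLAB; the Python sketch below (not the authors' code) shows only the core step of the analysis: estimate the scaling exponent by a linear fit of log MSD against log lag time, then group trajectories by user-defined exponent cutoffs.

```python
import numpy as np

def fit_msd_power_law(lag_times, msd):
    """Fit MSD(t) ~ A * t**alpha by least squares in log-log space.

    `lag_times` and `msd` must be positive arrays of equal length.
    Returns (alpha, A).
    """
    log_t, log_msd = np.log(lag_times), np.log(msd)
    alpha, log_A = np.polyfit(log_t, log_msd, 1)
    return alpha, np.exp(log_A)

def categorize(alpha, tol=0.1):
    """Crude grouping by scaling exponent (cutoffs are user-defined):
    alpha ~ 1 diffusive, alpha < 1 subdiffusive, alpha > 1 superdiffusive
    (e.g., actively driven)."""
    if alpha < 1 - tol:
        return "subdiffusive"
    if alpha > 1 + tol:
        return "superdiffusive"
    return "diffusive"
```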

  19. Clinical supervision for nurse lecturers.

    PubMed

    Lewis, D

    This article builds on a previous one which discussed the use of de Bono's thinking tool, 'six thinking hats' in the clinical, managerial, educational and research areas of nursing (Lewis 1995). This article explores clinical supervision and describes how the six thinking hats may be used as a reflective tool in the supervision of nurse lecturers who teach counselling skills.

  20. Supervisees' Perception of Clinical Supervision

    ERIC Educational Resources Information Center

    Willis, Lisa

    2010-01-01

    Supervisors must become aware of the possible conflicts that could arise during clinical supervision. It is important that supervisors communicate their roles and expectations effectively with their supervisees. This paper supports the notion that supervision is a mutual agreement between the supervisee and the supervisor and the roles of…

  1. Style and Structure in Supervision.

    ERIC Educational Resources Information Center

    Munson, Carlton E.

    1981-01-01

    A study of 64 social work supervisors and 65 supervisees explores elements of structure, authority, and teaching within the supervisory relationship. Incongruence in actual and preferred structure, authority, and content in supervision indicates a need to examine autonomy in practice and control in supervision. (MSE)

  2. Supervision in Special Language Programs.

    ERIC Educational Resources Information Center

    Florez-Tighe, Viola

    Too little emphasis is placed on instructional supervision in special language programs for limited-English-proficient students. Such supervision can provide a mechanism to promote the growth of instructional staff, improve the instructional program, and lead to curriculum development. Many supervisors are undertrained and unable to provide…

  3. Semi-supervised multi-label collective classification ensemble for functional genomics

    PubMed Central

    2014-01-01

    Background With the rapid accumulation of proteomic and genomic datasets in terms of genome-scale features and interaction networks through high-throughput experimental techniques, the process of manual predicting functional properties of the proteins has become increasingly cumbersome, and computational methods to automate this annotation task are urgently needed. Most of the approaches in predicting functional properties of proteins require to either identify a reliable set of labeled proteins with similar attribute features to unannotated proteins, or to learn from a fully-labeled protein interaction network with a large amount of labeled data. However, acquiring such labels can be very difficult in practice, especially for multi-label protein function prediction problems. Learning with only a few labeled data can lead to poor performance as limited supervision knowledge can be obtained from similar proteins or from connections between them. To effectively annotate proteins even in the paucity of labeled data, it is important to take advantage of all data sources that are available in this problem setting, including interaction networks, attribute feature information, correlations of functional labels, and unlabeled data. Results In this paper, we show that the underlying nature of predicting functional properties of proteins using various data sources of relational data is a typical collective classification (CC) problem in machine learning. The protein functional prediction task with limited annotation is then cast into a semi-supervised multi-label collective classification (SMCC) framework. As such, we propose a novel generative model based SMCC algorithm, called GM-SMCC, to effectively compute the label probability distributions of unannotated protein instances and predict their functional properties. To further boost the predicting performance, we extend the method in an ensemble manner, called EGM-SMCC, by utilizing multiple heterogeneous networks with
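    One standard building block for such network-based, label-scarce settings is graph label propagation; the sketch below illustrates that generic idea only (diffusing label scores over a protein-protein similarity graph while clamping the few known labels) and is not the GM-SMCC or EGM-SMCC model proposed in the paper.

```python
import numpy as np

def propagate_labels(adj, Y_init, labeled_mask, alpha=0.8, iters=50):
    """Graph label propagation for multi-label scoring.

    adj          : (n, n) nonnegative similarity / adjacency matrix
    Y_init       : (n, k) initial label scores (rows of unlabeled nodes = 0)
    labeled_mask : (n,) boolean, True for nodes with known labels
    Returns the propagated (n, k) score matrix; multi-label predictions
    can then be obtained by thresholding each column.
    """
    deg = adj.sum(axis=1, keepdims=True) + 1e-12
    P = adj / deg                          # row-stochastic transition matrix
    Y = Y_init.astype(float).copy()
    for _ in range(iters):
        Y = alpha * (P @ Y) + (1 - alpha) * Y_init
        Y[labeled_mask] = Y_init[labeled_mask]   # clamp the known labels
    return Y
```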

  4. A Semi-Supervised Learning Approach to Enhance Health Care Community–Based Question Answering: A Case Study in Alcoholism

    PubMed Central

    Klabjan, Diego; Jonnalagadda, Siddhartha Reddy

    2016-01-01

    Background Community-based question answering (CQA) sites play an important role in addressing health information needs. However, a significant number of posted questions remain unanswered. Automatically answering the posted questions can provide a useful source of information for Web-based health communities. Objective In this study, we developed an algorithm to automatically answer health-related questions based on past questions and answers (QA). We also aimed to understand information embedded within Web-based health content that are good features in identifying valid answers. Methods Our proposed algorithm uses information retrieval techniques to identify candidate answers from resolved QA. To rank these candidates, we implemented a semi-supervised learning algorithm that extracts the best answer to a question. We assessed this approach on a curated corpus from Yahoo! Answers and compared against a rule-based string similarity baseline. Results On our dataset, the semi-supervised learning algorithm has an accuracy of 86.2%. Unified medical language system–based (health-related) features used in the model enhance the algorithm's performance by approximately 8%. A reasonably high rate of accuracy is obtained given that the data are considerably noisy. Important features distinguishing a valid answer from an invalid answer include text length, the number of stop words in a test question, the distance between the test question and other questions in the corpus, and the number of overlapping health-related terms between questions. Conclusions Overall, our automated QA system based on historical QA pairs is shown to be effective according to the dataset in this case study. It is developed for general use in the health care domain, which can also be applied to other CQA sites. PMID:27485666
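    The paper's exact ranking features are not reproduced here, but a minimal self-training loop, one common semi-supervised scheme, conveys the flavour of the approach: train on labeled question-answer pairs, pseudo-label the most confident unlabeled candidates, and refit. The TF-IDF features, confidence threshold, and round count below are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labeled_texts, labels, unlabeled_texts, rounds=3, confidence=0.9):
    """Self-training sketch: fit on labeled examples, pseudo-label the most
    confident unlabeled candidates, add them to the training set, refit."""
    vec = TfidfVectorizer()
    vec.fit(list(labeled_texts) + list(unlabeled_texts))
    X, y = vec.transform(labeled_texts), np.asarray(labels)
    X_pool = vec.transform(unlabeled_texts)
    pool = np.arange(X_pool.shape[0])      # indices still unlabeled

    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X, y)
        if pool.size == 0:
            break
        proba = clf.predict_proba(X_pool[pool])
        sure = proba.max(axis=1) >= confidence
        if not sure.any():
            break
        picked = pool[sure]
        X = vstack([X, X_pool[picked]])                       # grow training set
        y = np.concatenate([y, clf.classes_[proba[sure].argmax(axis=1)]])
        pool = pool[~sure]
    return clf, vec
```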

  5. Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.

    PubMed

    Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo

    2016-08-26

    Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.
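    The central component, low-rank completion of the partially observed similarity/adjacency matrix, can be sketched in centralized form as below (the paper distributes this step across nodes via diffusion adaptation, which is not reproduced here; the rank, learning rate, and iteration count are illustrative).

```python
import numpy as np

def complete_low_rank(M, mask, rank=5, lr=0.01, iters=2000, seed=0):
    """Fill in the missing entries of M (where mask == 0) with a rank-`rank`
    factorization U @ V.T, fitted by gradient descent on observed entries.

    Centralized sketch only; values at unobserved positions of M are ignored.
    """
    rng = np.random.default_rng(seed)
    mask = mask.astype(float)
    M_obs = np.where(mask > 0, M, 0.0)     # zero out unobserved entries
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M_obs)       # residual on observed entries only
        U_grad = R @ V
        V_grad = R.T @ U
        U -= lr * U_grad
        V -= lr * V_grad
    return U @ V.T
```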

  6. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
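    The object-linking step can be sketched as below, assuming a hypothetical data layout in which each slice contributes a list of (x, y) feature centroids; this illustrates only the within-radius linking idea described above, not the system's actual code.

```python
import math

def link_features(slices, radius=2.0):
    """Greedily link 2D feature centroids in adjacent slices into 3D tracks
    when they fall within `radius` of each other.

    `slices` is a list of lists of (x, y) points, one list per slice.
    Returns a list of tracks, each a list of (slice_index, x, y).
    """
    tracks = []
    open_tracks = []  # tracks that were extended or created in the previous slice
    for z, points in enumerate(slices):
        next_open = []
        unmatched = list(points)
        for track in open_tracks:
            _, px, py = track[-1]
            best = None
            for p in unmatched:
                d = math.hypot(p[0] - px, p[1] - py)
                if d <= radius and (best is None or d < best[0]):
                    best = (d, p)
            if best is not None:
                track.append((z, best[1][0], best[1][1]))
                unmatched.remove(best[1])
                next_open.append(track)
        for x, y in unmatched:             # leftovers start new tracks
            t = [(z, x, y)]
            tracks.append(t)
            next_open.append(t)
        open_tracks = next_open
    return tracks
```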

  7. Nursing protocol for telephonic supervision of clients.

    PubMed

    Martin, Elisabeth Moy; Coyle, Mary Kathleen

    2006-01-01

    Access to care, client vulnerabilities, technology, and health costs affect not only the delivery of health care but also the roles, responsibilities, and opportunities for nurses. Patients are often managed in the home or discharged from hospitals before they or their families are ready. To address some of these needs, nurses are utilizing telehealth opportunities. For many nurses, telehealth translates to telephonic nursing. This article provides an algorithm that nurses can utilize in order to safely monitor patients in their homes. This can be a cost-effective program, particularly for those who are homebound or for persons, such as the elderly or those with chronic illness, who have long-term needs that vary between relative health and acute illness. This algorithm serves as a guide in our nursing practice for the telephonic supervision of patients in the home environment.

  8. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning.

    PubMed

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-06-29

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products with their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products, fabric textiles, are comprised of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, by integrating two independent classifiers with complementary nature in the face of scarce labeled samples. Effectiveness of the proposed OPQI method was verified and compared in the field of automated rice quality grading with commonly-used methods and showed superior performance, which lays a foundation for the quality control of GP on assembly lines.
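    A much-simplified sketch of the Weibull feature-extraction idea follows; finite-difference gradients stand in for the paper's omnidirectional Gaussian derivative filtering, and the COSC-Boosting classifier is not reproduced.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_features(image):
    """Fit a two-parameter Weibull distribution to an image's gradient
    magnitudes and return (shape, scale) as a texture feature vector.

    Simplified stand-in for filter-response modeling: finite differences
    replace the omnidirectional Gaussian derivative filters of the paper.
    """
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy).ravel()
    mag = mag[mag > 0]                     # Weibull support is (0, inf)
    shape, _, scale = weibull_min.fit(mag, floc=0)   # location fixed at 0
    return shape, scale
```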

  9. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning

    PubMed Central

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-01-01

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products with their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products, fabric textiles, are comprised of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, by integrating two independent classifiers with complementary nature in the face of scarce labeled samples. Effectiveness of the proposed OPQI method was verified and compared in the field of automated rice quality grading with commonly-used methods and showed superior performance, which lays a foundation for the quality control of GP on assembly lines. PMID:27367703

  10. Automated multidetector row CT dataset segmentation with an interactive watershed transform (IWT) algorithm: Part 2. Body CT angiographic and orthopedic applications.

    PubMed

    Johnson, Pamela T; Hahn, Horst K; Heath, David G; Fishman, Elliot K

    2008-12-01

    The preceding manuscript describes the principles behind the Interactive Watershed Transform (IWT) segmentation tool. The purpose of this manuscript is to illustrate the clinical utility of this editing technique for body multidetector row computed tomography (MDCT) imaging. A series of cases demonstrates clinical applications where automated segmentation of skeletal structures with IWT is most useful. Both CT angiography and orthopedic applications are presented.

  11. Metrology automation reliability

    NASA Astrophysics Data System (ADS)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility, automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme, from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory, reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  12. Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation

    PubMed Central

    Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira

    2013-01-01

    Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used for image segmentation, with both supervised and unsupervised approaches, up to now. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. However, semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised frame-work for segmenting brain magnetic resonance imaging (MRI) tissues that it ha